Sample records for sequential Monte Carlo based

  1. Propagating probability distributions of stand variables using sequential Monte Carlo methods

    Treesearch

    Jeffrey H. Gove

    2009-01-01

    A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor-corrector'...
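
    The predictor-corrector structure of the SIR filter can be illustrated with a minimal sketch. The toy linear-Gaussian model and all constants below are illustrative assumptions, not the stand-growth model used in the record above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian state-space model (an assumption for illustration):
#   x_t = 0.9 x_{t-1} + process noise,   y_t = x_t + measurement noise
T, N = 50, 1000                       # time steps, particles
proc_sd, meas_sd = 0.5, 1.0

# Simulate a synthetic truth and observations
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + proc_sd * rng.standard_normal()
y = x + meas_sd * rng.standard_normal(T)

# SIR particle filter: predict -> correct (weight) -> resample
particles = rng.standard_normal(N)
estimates = np.zeros(T)
for t in range(T):
    # Predictor: propagate each particle through the state equation
    particles = 0.9 * particles + proc_sd * rng.standard_normal(N)
    # Corrector: weight particles by the likelihood of the new observation
    w = np.exp(-0.5 * ((y[t] - particles) / meas_sd) ** 2)
    w /= w.sum()
    estimates[t] = np.sum(w * particles)
    # Resample (multinomial) to combat weight degeneracy
    particles = particles[rng.choice(N, size=N, p=w)]

print("filter RMSE:", np.sqrt(np.mean((estimates - x) ** 2)))
```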

  2. Probabilistic learning of nonlinear dynamical systems using sequential Monte Carlo

    NASA Astrophysics Data System (ADS)

    Schön, Thomas B.; Svensson, Andreas; Murray, Lawrence; Lindsten, Fredrik

    2018-05-01

    Probabilistic modeling provides the capability to represent and manipulate uncertainty in data, models, predictions and decisions. We are concerned with the problem of learning probabilistic models of dynamical systems from measured data. Specifically, we consider learning of probabilistic nonlinear state-space models. There is no closed-form solution available for this problem, implying that we are forced to use approximations. In this tutorial we will provide a self-contained introduction to one of the state-of-the-art methods, the particle Metropolis-Hastings algorithm, which has proven to offer a practical approximation. This is a Monte Carlo-based method, where the particle filter is used to guide a Markov chain Monte Carlo method through the parameter space. One of the key merits of the particle Metropolis-Hastings algorithm is that it is guaranteed to converge to the "true solution" under mild assumptions, despite being based on a particle filter with only a finite number of particles. We will also provide a motivating numerical example illustrating the method using a modeling language tailored for sequential Monte Carlo methods. The intention of modeling languages of this kind is to open up the power of sophisticated Monte Carlo methods, including particle Metropolis-Hastings, to a large group of users without requiring them to know all the underlying mathematical details.
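
    A rough sketch of the particle Metropolis-Hastings idea described above: a bootstrap particle filter supplies a noisy likelihood estimate that drives an ordinary Metropolis-Hastings chain over the parameter. The AR(1) model, prior, and tuning constants are assumptions for illustration, not taken from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from x_t = a x_{t-1} + v_t, y_t = x_t + e_t, true a = 0.8
T, true_a, proc_sd, meas_sd = 100, 0.8, 0.5, 1.0
x = np.zeros(T)
for t in range(1, T):
    x[t] = true_a * x[t - 1] + proc_sd * rng.standard_normal()
y = x + meas_sd * rng.standard_normal(T)

def pf_loglik(a, N=500):
    """Bootstrap particle filter estimate of log p(y | a)."""
    particles = rng.standard_normal(N)
    ll = 0.0
    for t in range(T):
        particles = a * particles + proc_sd * rng.standard_normal(N)
        w = np.exp(-0.5 * ((y[t] - particles) / meas_sd) ** 2)
        ll += np.log(w.mean() / (meas_sd * np.sqrt(2 * np.pi)) + 1e-300)
        particles = particles[rng.choice(N, size=N, p=w / w.sum())]
    return ll

# Metropolis-Hastings over the parameter, guided by the particle filter;
# the likelihood estimate is stored with the state (pseudo-marginal trick).
a, ll = 0.5, pf_loglik(0.5)
chain = []
for _ in range(500):
    a_prop = a + 0.05 * rng.standard_normal()        # random-walk proposal
    if abs(a_prop) < 1.0:                            # uniform prior on (-1, 1)
        ll_prop = pf_loglik(a_prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            a, ll = a_prop, ll_prop
    chain.append(a)

print("posterior mean of a:", np.mean(chain[100:]))
```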

  3. Particle filters, a quasi-Monte-Carlo-solution for segmentation of coronaries.

    PubMed

    Florin, Charles; Paragios, Nikos; Williams, Jim

    2005-01-01

    In this paper we propose a Particle Filter-based approach for the segmentation of coronary arteries. To this end, successive planes of the vessel are modeled as unknown states of a sequential process. Such states consist of the orientation, position, shape model and appearance (in statistical terms) of the vessel that are recovered in an incremental fashion, using a sequential Bayesian filter (Particle Filter). In order to account for bifurcations and branchings, we consider a Monte Carlo sampling rule that propagates in parallel multiple hypotheses. Promising results on the segmentation of coronary arteries demonstrate the potential of the proposed approach.

  4. Estimation of parameters and basic reproduction ratio for Japanese encephalitis transmission in the Philippines using sequential Monte Carlo filter

    USDA-ARS's Scientific Manuscript database

    We developed a sequential Monte Carlo filter to estimate the states and the parameters in a stochastic model of Japanese Encephalitis (JE) spread in the Philippines. This method is particularly important for its adaptability to the availability of new incidence data. It can also capture the...

  5. Learning of state-space models with highly informative observations: A tempered sequential Monte Carlo solution

    NASA Astrophysics Data System (ADS)

    Svensson, Andreas; Schön, Thomas B.; Lindsten, Fredrik

    2018-05-01

    Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning of unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e. when there is very little or no measurement noise present, relative to the amount of process noise. The particle filter will then struggle in estimating one of the basic components for probabilistic learning, namely the likelihood p(data | parameters). To this end we suggest an algorithm which initially assumes that there is a substantial amount of artificial measurement noise present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem or possibly a very close approximation of it. The main component in our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC2 method. Another natural link is made to the ideas underlying approximate Bayesian computation (ABC). We illustrate the method with numerical examples, and in particular show promising results for a challenging Wiener-Hammerstein benchmark problem.
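
    The noise-annealing idea can be sketched in isolation: run a particle filter with an inflated artificial measurement variance and decrease it geometrically toward the true, nearly-zero value, so that each stage smooths the likelihood of the next. The full method embeds this schedule in an SMC sampler; the random-walk model and schedule below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Highly informative observations: y_t = x_t with tiny measurement noise
T, proc_sd, true_meas_sd = 60, 1.0, 1e-3
x = np.cumsum(proc_sd * rng.standard_normal(T))
y = x + true_meas_sd * rng.standard_normal(T)

def pf_loglik(meas_sd, N=2000):
    """Bootstrap particle filter log-likelihood under an artificial noise level."""
    particles = np.zeros(N)
    ll = 0.0
    for t in range(T):
        particles = particles + proc_sd * rng.standard_normal(N)
        logw = -0.5 * ((y[t] - particles) / meas_sd) ** 2 - np.log(meas_sd)
        m = logw.max()
        w = np.exp(logw - m)                  # log-sum-exp for stability
        ll += m + np.log(w.mean())
        particles = particles[rng.choice(N, size=N, p=w / w.sum())]
    return ll

# Geometrically anneal the artificial noise toward the true value; the
# record above adapts this schedule instead of fixing it.
sd = 1.0
while sd > true_meas_sd:
    print(f"artificial meas. sd = {sd:8.4f}   log-likelihood = {pf_loglik(sd):10.2f}")
    sd *= 0.3
```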

  6. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods

    PubMed Central

    Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.

    2011-01-01

    We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276

  7. Remarks on a financial inverse problem by means of Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Cuomo, Salvatore; Di Somma, Vittorio; Sica, Federica

    2017-10-01

    Estimating the price of a barrier option is a typical inverse problem. In this paper we present a numerical and statistical framework for a market with a risk-free interest rate and a risky asset, described by a Geometric Brownian Motion (GBM). After approximating the risky asset with a numerical method, we find the final option price by following an approach based on sequential Monte Carlo methods. All theoretical results are applied to the case of an option whose underlying is a real stock.
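
    The forward pricing problem that this record inverts can be sketched directly: simulate GBM paths, knock out those that breach the barrier, and discount the mean payoff. The contract and market parameters below are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed parameters: spot, strike, barrier, rate, volatility, maturity
S0, K, B, r, sigma, T_mat = 100.0, 100.0, 120.0, 0.02, 0.2, 1.0
n_paths, n_steps = 20_000, 252
dt = T_mat / n_steps

# GBM paths: S_{t+dt} = S_t exp((r - sigma^2/2) dt + sigma sqrt(dt) Z)
Z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * Z, axis=1))

# Up-and-out call: the payoff is void if the path ever crosses the barrier
alive = S.max(axis=1) < B
payoff = np.where(alive, np.maximum(S[:, -1] - K, 0.0), 0.0)
price = np.exp(-r * T_mat) * payoff.mean()
print(f"up-and-out call price ~ {price:.4f}")
```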

  8. Online sequential Monte Carlo smoother for partially observed diffusion processes

    NASA Astrophysics Data System (ADS)

    Gloaguen, Pierre; Étienne, Marie-Pierre; Le Corff, Sylvain

    2018-12-01

    This paper introduces a new algorithm to approximate smoothed additive functionals of partially observed diffusion processes. This method relies on a new sequential Monte Carlo method which allows such approximations to be computed online, i.e., as the observations are received, and with a computational complexity growing linearly with the number of Monte Carlo samples. The original algorithm cannot be used in the case of partially observed stochastic differential equations since the transition density of the latent data is usually unknown. We prove that it may be extended to partially observed continuous processes by replacing this unknown quantity by an unbiased estimator obtained for instance using general Poisson estimators. This estimator is proved to be consistent and its performance is illustrated using data from two models.

  9. Effective Online Bayesian Phylogenetics via Sequential Monte Carlo with Guided Proposals

    PubMed Central

    Fourment, Mathieu; Claywell, Brian C; Dinh, Vu; McCoy, Connor; Matsen IV, Frederick A; Darling, Aaron E

    2018-01-01

    Modern infectious disease outbreak surveillance produces continuous streams of sequence data which require phylogenetic analysis as data arrives. Current software packages for Bayesian phylogenetic inference are unable to quickly incorporate new sequences as they become available, making them less useful for dynamically unfolding evolutionary stories. This limitation can be addressed by applying a class of Bayesian statistical inference algorithms called sequential Monte Carlo (SMC) to conduct online inference, wherein new data can be continuously incorporated to update the estimate of the posterior probability distribution. In this article, we describe and evaluate several different online phylogenetic sequential Monte Carlo (OPSMC) algorithms. We show that proposing new phylogenies with a density similar to the Bayesian prior suffers from poor performance, and we develop “guided” proposals that better match the proposal density to the posterior. Furthermore, we show that the simplest guided proposals can exhibit pathological behavior in some situations, leading to poor results, and that the situation can be resolved by heating the proposal density. The results demonstrate that relative to the widely used MCMC-based algorithm implemented in MrBayes, the total time required to compute a series of phylogenetic posteriors as sequences arrive can be significantly reduced by the use of OPSMC, without incurring a significant loss in accuracy. PMID:29186587

  10. Sequential Monte Carlo tracking of the marginal artery by multiple cue fusion and random forest regression.

    PubMed

    Cherry, Kevin M; Peplinski, Brandon; Kim, Lauren; Wang, Shijun; Lu, Le; Zhang, Weidong; Liu, Jianfei; Wei, Zhuoshi; Summers, Ronald M

    2015-01-01

    Given the potential importance of marginal artery localization in automated registration in computed tomography colonography (CTC), we have devised a semi-automated method of marginal vessel detection employing sequential Monte Carlo tracking (also known as particle filtering tracking) by multiple cue fusion based on intensity, vesselness, organ detection, and minimum spanning tree information for poorly enhanced vessel segments. We then employed a random forest algorithm for intelligent cue fusion and decision making which achieved high sensitivity and robustness. After applying a vessel pruning procedure to the tracking results, we achieved statistically significantly improved precision compared to a baseline Hessian detection method (2.7% versus 75.2%, p<0.001). This method also showed statistically significantly improved recall rate compared to a 2-cue baseline method using fewer vessel cues (30.7% versus 67.7%, p<0.001). These results demonstrate that marginal artery localization on CTC is feasible by combining a discriminative classifier (i.e., random forest) with a sequential Monte Carlo tracking mechanism. In so doing, we present the effective application of an anatomical probability map to vessel pruning as well as a supplementary spatial coordinate system for colonic segmentation and registration when this task has been confounded by colon lumen collapse. Published by Elsevier B.V.

  11. Monte Carlo Simulation of Sudden Death Bearing Testing

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2003-01-01

    Monte Carlo simulations combined with sudden death testing were used to compare resultant bearing lives to the calculated bearing life and the cumulative test time and calendar time relative to sequential and censored sequential testing. A total of 30 960 virtual 50-mm bore deep-groove ball bearings were evaluated in 33 different sudden death test configurations comprising 36, 72, and 144 bearings each. Variations in both life and Weibull slope were a function of the number of bearings failed, independent of the test method used, and not of the total number of bearings tested. Variations in L10 life as a function of the number of bearings failed were similar to variations in life obtained from sequentially failed real bearings and from Monte Carlo (virtual) testing of entire populations. Reductions of up to 40 percent in bearing test time and calendar time can be achieved by testing to failure or the L50 life and terminating all testing when the last of the predetermined bearing failures has occurred. Sudden death testing is not a more efficient method to reduce bearing test time or calendar time when compared to censored sequential testing.
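
    The sampling step behind such a study can be sketched as follows: draw virtual bearing lives from a two-parameter Weibull distribution, stop each sudden-death group at its first failure, and recover the population L10 by median-rank regression. The Weibull slope and characteristic life are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed Weibull population of bearing lives (hours)
beta, eta = 1.5, 5000.0            # Weibull slope and characteristic life
n_groups, group_size = 33, 36      # one sudden-death configuration

# Sudden death: each group runs only until its FIRST failure
lives = eta * rng.weibull(beta, size=(n_groups, group_size))
first_failures = np.sort(lives.min(axis=1))

# Median-rank regression of ln(-ln(1-F)) on ln(t) for the first failures
ranks = (np.arange(1, n_groups + 1) - 0.3) / (n_groups + 0.4)  # Benard's approx.
slope, intercept = np.polyfit(np.log(first_failures),
                              np.log(-np.log(1 - ranks)), 1)
eta_first = np.exp(-intercept / slope)

# The minimum of m Weibull lives is Weibull with scale eta * m**(-1/beta),
# so de-rate the first-failure scale back to the population scale.
eta_hat = eta_first * group_size ** (1.0 / slope)
L10_hat = eta_hat * (-np.log(0.9)) ** (1.0 / slope)
L10_true = eta * (-np.log(0.9)) ** (1.0 / beta)
print(f"estimated L10 ~ {L10_hat:.0f} h (true {L10_true:.0f} h)")
```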

  12. Multilevel Sequential Monte Carlo Samplers for Normalizing Constants

    DOE PAGES

    Moral, Pierre Del; Jasra, Ajay; Law, Kody J. H.; ...

    2017-08-24

    This article considers the sequential Monte Carlo (SMC) approximation of ratios of normalizing constants associated to posterior distributions which in principle rely on continuum models. Therefore, the Monte Carlo estimation error and the discrete approximation error must be balanced. A multilevel strategy is utilized to substantially reduce the cost to obtain a given error level in the approximation as compared to standard estimators. Two estimators are considered and relative variance bounds are given. The theoretical results are numerically illustrated for two Bayesian inverse problems arising from elliptic partial differential equations (PDEs). The examples involve the inversion of observations of the solution of (i) a 1-dimensional Poisson equation to infer the diffusion coefficient, and (ii) a 2-dimensional Poisson equation to infer the external forcing.

  13. DUAL STATE-PARAMETER UPDATING SCHEME ON A CONCEPTUAL HYDROLOGIC MODEL USING SEQUENTIAL MONTE CARLO FILTERS

    NASA Astrophysics Data System (ADS)

    Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin

    Data assimilation techniques have been widely applied to improve the predictability of hydrologic modeling. Among various data assimilation techniques, sequential Monte Carlo (SMC) filters, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for the robust estimation of uncertain model parameters in the DUS. The applicability of the dual updating scheme is illustrated using the implementation of the storage function model on a middle-sized Japanese catchment. We also compare performance results of DUS combined with various SMC methods, such as SIR, ASIR and RPF.
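
    A minimal sketch of dual state-parameter updating with kernel smoothing of the parameter particles, in the spirit of a Liu-West shrinkage kernel. The hypothetical linear reservoir below stands in for the storage function model; all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical linear reservoir: S_t = k S_{t-1} + rain_t, runoff y_t = (1-k) S_t
T, true_k, meas_sd = 80, 0.7, 0.3
rain = rng.exponential(1.0, T)
S = np.zeros(T)
for t in range(1, T):
    S[t] = true_k * S[t - 1] + rain[t]
y = (1 - true_k) * S + meas_sd * rng.standard_normal(T)

N = 2000
state = np.zeros(N)
k = rng.uniform(0.2, 0.95, N)      # parameter particles
a = 0.98                           # kernel-smoothing (shrinkage) factor
for t in range(T):
    # Kernel smoothing: shrink parameter particles toward their mean and
    # add matched jitter, keeping the ensemble variance unchanged
    k = a * k + (1 - a) * k.mean() \
        + np.sqrt((1 - a**2) * k.var()) * rng.standard_normal(N)
    k = np.clip(k, 0.01, 0.99)
    # Dual update: each particle carries its own state AND parameter
    state = k * state + rain[t] + 0.1 * rng.standard_normal(N)
    w = np.exp(-0.5 * (((1 - k) * state - y[t]) / meas_sd) ** 2)
    idx = rng.choice(N, size=N, p=w / w.sum())
    state, k = state[idx], k[idx]

print(f"posterior mean of k ~ {k.mean():.3f} (true {true_k})")
```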

  14. Multilevel sequential Monte Carlo samplers

    DOE PAGES

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; ...

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods with a step-size level h_L, leading to a discretisation bias. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
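
    The telescoping identity can be sketched for a plain (non-SMC) multilevel estimator: couple each fine-grid Euler path to a coarse-grid path driven by the same Brownian increments and sum the per-level corrections. The SDE, payoff, and sample allocations are assumptions for illustration; the record above treats the harder case where i.i.d. sampling per level is impossible.

```python
import numpy as np

rng = np.random.default_rng(6)

# GBM dS = r S dt + sigma S dW; estimate the discounted call payoff by MLMC
S0, r, sigma, T_mat, K = 100.0, 0.05, 0.2, 1.0, 100.0

def coupled_payoffs(level, n):
    """Euler payoffs on a fine grid (2^level steps) and the next-coarser
    grid, both driven by the SAME Brownian increments."""
    n_fine = 2 ** level
    dt = T_mat / n_fine
    dW = np.sqrt(dt) * rng.standard_normal((n, n_fine))
    Sf = np.full(n, S0)
    Sc = np.full(n, S0)
    for i in range(n_fine):
        Sf = Sf * (1 + r * dt + sigma * dW[:, i])
        if level > 0 and i % 2 == 1:    # one coarse step per two fine steps
            Sc = Sc * (1 + r * 2 * dt + sigma * (dW[:, i - 1] + dW[:, i]))
    Pf = np.maximum(Sf - K, 0.0)
    Pc = np.zeros(n) if level == 0 else np.maximum(Sc - K, 0.0)
    return Pf, Pc

# Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]
L, N0 = 6, 200_000
estimate = 0.0
for level in range(L + 1):
    n = max(N0 // 4 ** level, 1000)     # fewer samples on costly fine levels
    Pf, Pc = coupled_payoffs(level, n)
    estimate += (Pf - Pc).mean()
print(f"MLMC price estimate ~ {np.exp(-r * T_mat) * estimate:.3f}")
```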

  15. Particle rejuvenation of Rao-Blackwellized sequential Monte Carlo smoothers for conditionally linear and Gaussian models

    NASA Astrophysics Data System (ADS)

    Nguyen, Ngoc Minh; Corff, Sylvain Le; Moulines, Éric

    2017-12-01

    This paper focuses on sequential Monte Carlo approximations of smoothing distributions in conditionally linear and Gaussian state spaces. To reduce Monte Carlo variance of smoothers, it is typical in these models to use Rao-Blackwellization: particle approximation is used to sample sequences of hidden regimes while the Gaussian states are explicitly integrated conditional on the sequence of regimes and observations, using variants of the Kalman filter/smoother. The first successful attempt to use Rao-Blackwellization for smoothing extends the Bryson-Frazier smoother for Gaussian linear state space models using the generalized two-filter formula together with Kalman filters/smoothers. More recently, a forward-backward decomposition of smoothing distributions mimicking the Rauch-Tung-Striebel smoother for the regimes combined with backward Kalman updates has been introduced. This paper investigates the benefit of introducing additional rejuvenation steps in all these algorithms to sample at each time instant new regimes conditional on the forward and backward particles. This defines particle-based approximations of the smoothing distributions whose support is not restricted to the set of particles sampled in the forward or backward filter. These procedures are applied to commodity markets which are described using a two-factor model based on the spot price and a convenience yield for crude oil data.

  16. Structured filtering

    NASA Astrophysics Data System (ADS)

    Granade, Christopher; Wiebe, Nathan

    2017-08-01

    A major challenge facing existing sequential Monte Carlo methods for parameter estimation in physics stems from the inability of existing approaches to robustly deal with experiments that have different mechanisms that yield the results with equivalent probability. We address this problem here by proposing a form of particle filtering that clusters the particles that comprise the sequential Monte Carlo approximation to the posterior before applying a resampler. Through a new graphical approach to thinking about such models, we are able to devise an artificial-intelligence based strategy that automatically learns the shape and number of the clusters in the support of the posterior. We demonstrate the power of our approach by applying it to randomized gap estimation and a form of low circuit-depth phase estimation where existing methods from the physics literature either exhibit much worse performance or even fail completely.
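
    The cluster-then-resample step can be sketched with a fixed number of clusters. The record above learns the number and shape of the clusters automatically; two-component k-means on a deliberately bimodal particle cloud is a simplification, and all numbers are assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(7)

# Particle cloud approximating a bimodal posterior: two mechanisms explain
# the data with equivalent probability, the failure mode described above.
particles = np.concatenate([rng.normal(-2.0, 0.3, 500),
                            rng.normal(2.0, 0.3, 500)])
weights = np.full(particles.size, 1.0 / particles.size)

# Cluster first, then resample WITHIN each cluster so neither mode collapses
_, labels = kmeans2(particles[:, None], 2, minit="points")
resampled = []
for c in np.unique(labels):
    idx = np.where(labels == c)[0]
    w = weights[idx] / weights[idx].sum()
    keep = rng.choice(idx, size=idx.size, p=w)   # per-cluster resampling
    resampled.append(particles[keep])

print("cluster means after resampling:",
      sorted(round(float(p.mean()), 2) for p in resampled))
```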

  17. Multilevel sequential Monte Carlo: Mean square error bounds under verifiable conditions

    DOE PAGES

    Del Moral, Pierre; Jasra, Ajay; Law, Kody J. H.

    2017-01-09

    We consider the multilevel sequential Monte Carlo (MLSMC) method of Beskos et al. (Stoch. Proc. Appl. [to appear]). This technique is designed to approximate expectations w.r.t. probability laws associated to a discretization, for instance, in the context of inverse problems, where one discretizes the solution of a partial differential equation. The MLSMC approach is especially useful when independent, coupled sampling is not possible. Beskos et al. show that for MLSMC the computational effort to achieve a given error can be less than that of independent sampling. In this article we significantly weaken the assumptions of Beskos et al., extending the proofs to non-compact state-spaces. The assumptions are based upon multiplicative drift conditions as in Kontoyiannis and Meyn (Electron. J. Probab. 10 [2005]: 61–123). The assumptions are verified for an example.

  18. A comparison of Monte Carlo-based Bayesian parameter estimation methods for stochastic models of genetic networks

    PubMed Central

    Zaikin, Alexey; Míguez, Joaquín

    2017-01-01

    We compare three state-of-the-art Bayesian inference methods for the estimation of the unknown parameters in a stochastic model of a genetic network. In particular, we introduce a stochastic version of the paradigmatic synthetic multicellular clock model proposed by Ullner et al., 2007. By introducing dynamical noise in the model and assuming that the partial observations of the system are contaminated by additive noise, we enable a principled mechanism to represent experimental uncertainties in the synthesis of the multicellular system and pave the way for the design of probabilistic methods for the estimation of any unknowns in the model. Within this setup, we tackle the Bayesian estimation of a subset of the model parameters. Specifically, we compare three Monte Carlo based numerical methods for the approximation of the posterior probability density function of the unknown parameters given a set of partial and noisy observations of the system. The schemes we assess are the particle Metropolis-Hastings (PMH) algorithm, the nonlinear population Monte Carlo (NPMC) method and the approximate Bayesian computation sequential Monte Carlo (ABC-SMC) scheme. We present an extensive numerical simulation study, which shows that while the three techniques can effectively solve the problem there are significant differences both in estimation accuracy and computational efficiency. PMID:28797087

  19. Sequential Monte Carlo filter for state estimation of LiFePO4 batteries based on an online updated model

    NASA Astrophysics Data System (ADS)

    Li, Jiahao; Klee Barillas, Joaquin; Guenther, Clemens; Danzer, Michael A.

    2014-02-01

    Battery state monitoring is one of the key techniques in battery management systems, e.g., in electric vehicles. An accurate estimation can help to improve the system performance and to prolong the battery's remaining useful life. The main challenges for state estimation of LiFePO4 batteries are the flat characteristic of the open-circuit voltage over the battery state of charge (SOC) and the existence of hysteresis phenomena. Classical estimation approaches like Kalman filtering show limitations in handling nonlinear and non-Gaussian error distributions. In addition, uncertainties in the battery model parameters must be taken into account to describe the battery degradation. In this paper, a novel model-based method combining a Sequential Monte Carlo filter with adaptive control to determine the cell SOC and its electric impedance is presented. The applicability of this dual estimator is verified using measurement data acquired from a commercial LiFePO4 cell. Due to a better handling of the hysteresis problem, the results show the benefits of the proposed method over estimation with an Extended Kalman filter.

  1. Prognostics of slurry pumps based on a moving-average wear degradation index and a general sequential Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tse, Peter W.

    2015-05-01

    Slurry pumps are commonly used in oil-sand mining for pumping mixtures of abrasive liquids and solids. These operations cause constant wear of slurry pump impellers, which results in the breakdown of the slurry pumps. This paper develops a prognostic method for estimating the remaining useful life of slurry pump impellers. First, a moving-average wear degradation index is proposed to assess the performance degradation of the slurry pump impeller. Second, the state space model of the proposed health index is constructed. A general sequential Monte Carlo method is employed to derive the parameters of the state space model. The remaining useful life of the slurry pump impeller is estimated by extrapolating the established state space model to a specified alert threshold. Data collected from an industrial oil-sand pump were used to validate the developed method. The results show that the accuracy of the developed method improves as more data become available.
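
    The final extrapolation step can be sketched with a hypothetical exponential wear index: smooth the raw index with a moving average, fit a log-linear growth model, and read off where the fit crosses an alert threshold. The signal, window, and threshold are assumptions, not the paper's pump data.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical wear degradation index: exponential growth plus noise
t = np.arange(200)
index = 0.1 * np.exp(0.015 * t) + 0.05 * rng.standard_normal(200)

# Moving-average smoothing of the raw index
win = 10
smoothed = np.convolve(index, np.ones(win) / win, mode="valid")
ts = t[win - 1:]

# Fit log(index) = log(a) + b t on the observed history
mask = smoothed > 0
b, log_a = np.polyfit(ts[mask], np.log(smoothed[mask]), 1)

# Extrapolate the fitted model to the alert threshold to estimate RUL
threshold = 5.0
t_alert = (np.log(threshold) - log_a) / b
print(f"alert crossing at t ~ {t_alert:.0f}, RUL ~ {t_alert - ts[-1]:.0f} steps")
```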

  2. A new moving strategy for the sequential Monte Carlo approach in optimizing the hydrological model parameters

    NASA Astrophysics Data System (ADS)

    Zhu, Gaofeng; Li, Xin; Ma, Jinzhu; Wang, Yunquan; Liu, Shaomin; Huang, Chunlin; Zhang, Kun; Hu, Xiaoli

    2018-04-01

    Sequential Monte Carlo (SMC) samplers have become increasingly popular for estimating the posterior parameter distribution with the non-linear dependency structures and multiple modes often present in hydrological models. However, the explorative capabilities and efficiency of the sampler depend strongly on the efficiency of the move step of the SMC sampler. In this paper we present a new SMC sampler, the Particle Evolution Metropolis Sequential Monte Carlo (PEM-SMC) algorithm, which is well suited to handle unknown static parameters of hydrologic models. The PEM-SMC sampler is inspired by the work of Liang and Wong (2001) and operates by incorporating the strengths of the genetic algorithm, the differential evolution algorithm and the Metropolis-Hastings algorithm into the framework of SMC. We also prove that the sampler admits the target distribution as a stationary distribution. Two case studies, a multi-dimensional bimodal normal distribution and a conceptual rainfall-runoff hydrologic model (first considering parameter uncertainty only, then considering parameter and input uncertainty simultaneously), show that the PEM-SMC sampler is generally superior to other popular SMC algorithms in handling high-dimensional problems. The study also indicates that it may be important to account for model structural uncertainty by using multiple different hydrological models in the SMC framework in future studies.

  3. Analysis and Assessment of Operation Risk for Hybrid AC/DC Power System based on the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Hu, Xiaojing; Li, Qiang; Zhang, Hao; Guo, Ziming; Zhao, Kun; Li, Xinpeng

    2018-06-01

    Based on the Monte Carlo method, an improved risk assessment method for a hybrid AC/DC power system with a VSC station is proposed, considering the operation status of generators, converter stations, AC lines and DC lines. According to the sequential AC/DC power flow algorithm, node voltages and line active powers are solved, and the operation risk indices of node voltage over-limit and line active power over-limit are calculated. Finally, an improved two-area IEEE RTS-96 system is taken as a case to analyze and assess its operation risk. The results show that the proposed model and method can intuitively and directly reflect the weak nodes and weak lines of the system, which can provide a reference for the dispatching department.

  4. One-sided truncated sequential t-test: application to natural resource sampling

    Treesearch

    Gary W. Fowler; William G. O'Regan

    1974-01-01

    A new procedure for constructing one-sided truncated sequential t-tests and its application to natural resource sampling are described. Monte Carlo procedures were used to develop a series of one-sided truncated sequential t-tests and the associated approximations to the operating characteristic and average sample number functions. Different truncation points and...

  5. Sequential Monte Carlo for inference of latent ARMA time-series with innovations correlated in time

    NASA Astrophysics Data System (ADS)

    Urteaga, Iñigo; Bugallo, Mónica F.; Djurić, Petar M.

    2017-12-01

    We consider the problem of sequential inference of latent time-series with innovations correlated in time and observed via nonlinear functions. We accommodate time-varying phenomena with diverse properties by means of a flexible mathematical representation of the data. We characterize statistically such time-series by a Bayesian analysis of their densities. The density that describes the transition of the state from time t to the next time instant t+1 is used for implementation of novel sequential Monte Carlo (SMC) methods. We present a set of SMC methods for inference of latent ARMA time-series with innovations correlated in time for different assumptions in knowledge of parameters. The methods operate in a unified and consistent manner for data with diverse memory properties. We show the validity of the proposed approach by comprehensive simulations of the challenging stochastic volatility model.

  6. A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions

    NASA Astrophysics Data System (ADS)

    Liang, Yihao; Xing, Xiangjun; Li, Yaohang

    2017-06-01

    In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many-body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architecture, and adopts the sequential updating scheme of the Metropolis algorithm. It makes no approximation in the computation of energy, and reaches a remarkable 440-fold speedup compared with the serial implementation on CPU. We further use this method to simulate primitive model electrolytes, and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.

  7. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE PAGES

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    2016-01-01

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.

  8. Sequential Geoacoustic Filtering and Geoacoustic Inversion

    DTIC Science & Technology

    2015-09-30

    ...and online algorithms. We show here that CS obtains higher resolution than MVDR, even in scenarios which favor classical high-resolution methods... windows actually performs better than conventional beamforming and MVDR/MUSIC (see Figs. 1-2). Compressive geoacoustic inversion... histograms based on 100 Monte Carlo simulations, and (c) CS, exhaustive-search, CBF, MVDR, and MUSIC performance versus SNR. The true source positions...

  9. astroABC: An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Jennings, E.; Madigan, M.

    2017-04-01

    Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimate using scikit-learn's KDTree; modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files are backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC.
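
    The core ABC-SMC loop can be sketched with a toy forward model: a particle population is filtered through a shrinking tolerance schedule, with a Gaussian perturbation kernel and the usual importance weights. The model (inferring a Gaussian mean), prior, and schedule are assumptions for illustration and are unrelated to astroABC's defaults.

```python
import numpy as np

rng = np.random.default_rng(9)

# Observed data from an assumed forward model: Normal(theta, 1), true theta = 3
data = rng.normal(3.0, 1.0, 100)
obs_stat = data.mean()

def simulate(theta):
    """Forward model plus summary statistic."""
    return rng.normal(theta, 1.0, 100).mean()

N = 500
tolerances = [1.0, 0.5, 0.25, 0.1, 0.05]      # shrinking tolerance schedule
theta = rng.uniform(-10, 10, N)               # draws from a flat prior
weights = np.full(N, 1.0 / N)

for eps in tolerances:
    sigma = np.sqrt(2.0 * np.cov(theta, aweights=weights))  # kernel scale
    new_theta = np.empty(N)
    new_w = np.empty(N)
    for i in range(N):
        while True:
            # Draw from the previous population, perturb, simulate, accept
            cand = rng.choice(theta, p=weights) + sigma * rng.standard_normal()
            if -10 < cand < 10 and abs(simulate(cand) - obs_stat) < eps:
                break
        new_theta[i] = cand
        # Importance weight: flat prior over mixture of perturbation kernels
        kern = np.exp(-0.5 * ((cand - theta) / sigma) ** 2)
        new_w[i] = 1.0 / np.sum(weights * kern)
    theta, weights = new_theta, new_w / new_w.sum()

print(f"ABC posterior mean ~ {np.sum(weights * theta):.3f}")
```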

  10. A Sequential Monte Carlo Approach for Streamflow Forecasting

    NASA Astrophysics Data System (ADS)

    Hsu, K.; Sorooshian, S.

    2008-12-01

    As alternatives to traditional physically-based models, Artificial Neural Network (ANN) models offer some advantages with respect to the flexibility of not requiring the precise quantitative mechanism of the process and the ability to train themselves from the data directly. In this study, an ANN model was used to generate one-day-ahead streamflow forecasts from the precipitation input over a catchment. Meanwhile, the ANN model parameters were trained using a Sequential Monte Carlo (SMC) approach, namely the Regularized Particle Filter (RPF). The SMC approaches are known for their capabilities in tracking the states and parameters of a nonlinear dynamic process based on Bayes' rule and the proposed effective sampling and resampling strategies. In this study, five years of daily rainfall and streamflow measurements were used for model training. Variable sample sizes of the RPF, from 200 to 2000, were tested. The results show that, after 1000 RPF samples, the simulation statistics, in terms of correlation coefficient, root mean square error, and bias, were stabilized. It is also shown that the forecasted daily flows fit the observations very well, with a correlation coefficient higher than 0.95. The results of the RPF simulations were also compared with those from the popular back-propagation ANN training approach. The pros and cons of using the SMC approach and the traditional back-propagation approach will be discussed.

  11. Multilevel Mixture Kalman Filter

    NASA Astrophysics Data System (ADS)

    Guo, Dong; Wang, Xiaodong; Chen, Rong

    2004-12-01

    The mixture Kalman filter is a general sequential Monte Carlo technique for conditional linear dynamic systems. It generates samples of some indicator variables recursively based on sequential importance sampling (SIS) and integrates out the linear and Gaussian state variables conditioned on these indicators. Due to the marginalization process, the complexity of the mixture Kalman filter is quite high if the dimension of the indicator sampling space is high. In this paper, we address this difficulty by developing a new Monte Carlo sampling scheme, namely, the multilevel mixture Kalman filter. The basic idea is to make use of the multilevel or hierarchical structure of the space from which the indicator variables take values. That is, we draw samples in a multilevel fashion, beginning with sampling from the highest-level sampling space and then draw samples from the associate subspace of the newly drawn samples in a lower-level sampling space, until reaching the desired sampling space. Such a multilevel sampling scheme can be used in conjunction with the delayed estimation method, such as the delayed-sample method, resulting in delayed multilevel mixture Kalman filter. Examples in wireless communication, specifically the coherent and noncoherent 16-QAM over flat-fading channels, are provided to demonstrate the performance of the proposed multilevel mixture Kalman filter.

  12. Exploiting neurovascular coupling: a Bayesian sequential Monte Carlo approach applied to simulated EEG fNIRS data

    NASA Astrophysics Data System (ADS)

    Croce, Pierpaolo; Zappasodi, Filippo; Merla, Arcangelo; Chiarelli, Antonio Maria

    2017-08-01

    Objective. Electrical and hemodynamic brain activity are linked through the neurovascular coupling process and they can be simultaneously measured through integration of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Thanks to the lack of electro-optical interference, the two procedures can be easily combined and, whereas EEG provides electrophysiological information, fNIRS can provide measurements of two hemodynamic variables, such as oxygenated and deoxygenated hemoglobin. A Bayesian sequential Monte Carlo approach (particle filter, PF) was applied to simulated recordings of electrical and neurovascular mediated hemodynamic activity, and the advantages of a unified framework were shown. Approach. Multiple neural activities and hemodynamic responses were simulated in the primary motor cortex of a subject brain. EEG and fNIRS recordings were obtained by means of forward models of volume conduction and light propagation through the head. A state space model of combined EEG and fNIRS data was built and its dynamic evolution was estimated through a Bayesian sequential Monte Carlo approach (PF). Main results. We showed the feasibility of the procedure and the improvements in both electrical and hemodynamic brain activity reconstruction when using the PF on combined EEG and fNIRS measurements. Significance. The investigated procedure allows one to combine the information provided by the two methodologies, and, by taking advantage of a physical model of the coupling between electrical and hemodynamic response, to obtain a better estimate of brain activity evolution. Despite the high computational demand, application of such an approach to in vivo recordings could fully exploit the advantages of this combined brain imaging technology.

  13. Improving Computational Efficiency of Prediction in Model-based Prognostics Using the Unscented Transform

    DTIC Science & Technology

    2010-10-01

    ...bodies becomes greater as surface asperities wear down (Hutchings, 1992). We characterize friction damage by a change in the friction coefficient... points are such a set, and satisfy an additional constraint in which the skew (third moment) is minimized, which reduces the average error for a... On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10, 197–208. Hutchings, I. M. (1992). Tribology: friction...

  14. Pattern Recognition for a Flight Dynamics Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; Hurtado, John E.

    2011-01-01

    The design, analysis, and verification and validation of a spacecraft relies heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data but flight dynamics engineers lack the time and resources to analyze it all. The growing amounts of data combined with the diminished available time of engineers motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.

  15. PARTICLE FILTERING WITH SEQUENTIAL PARAMETER LEARNING FOR NONLINEAR BOLD fMRI SIGNALS.

    PubMed

    Xia, Jing; Wang, Michelle Yongmei

    Analyzing the blood oxygenation level dependent (BOLD) effect in the functional magnetic resonance imaging (fMRI) is typically based on recent ground-breaking time series analysis techniques. This work represents a significant improvement over existing approaches to system identification using nonlinear hemodynamic models. It is important for three reasons. First, instead of using linearized approximations of the dynamics, we present a nonlinear filtering based on the sequential Monte Carlo method to capture the inherent nonlinearities in the physiological system. Second, we simultaneously estimate the hidden physiological states and the system parameters through particle filtering with sequential parameter learning to fully take advantage of the dynamic information of the BOLD signals. Third, during the unknown static parameter learning, we employ the low-dimensional sufficient statistics for efficiency and avoiding potential degeneration of the parameters. The performance of the proposed method is validated using both the simulated data and real BOLD fMRI data.

  16. Growth of vertically aligned nanowires in metal-oxide nanocomposites: kinetic Monte-Carlo modeling versus experiments.

    PubMed

    Hennes, M; Schuler, V; Weng, X; Buchwald, J; Demaille, D; Zheng, Y; Vidal, F

    2018-04-26

    We employ kinetic Monte-Carlo simulations to study the growth process of metal-oxide nanocomposites obtained via sequential pulsed laser deposition. Using Ni-SrTiO3 (Ni-STO) as a model system, we reduce the complexity of the computational problem by choosing a coarse-grained approach mapping Sr, Ti and O atoms onto a single effective STO pseudo-atom species. With this ansatz, we scrutinize the kinetics of the sequential synthesis process, governed by alternating deposition and relaxation steps, and analyze the self-organization propensity of Ni atoms into straight vertically aligned nanowires embedded in the surrounding STO matrix. We finally compare the predictions of our binary toy model with experiments and demonstrate that our computational approach captures fundamental aspects of self-assembled nanowire synthesis. Despite its simplicity, our modeling strategy successfully describes the impact of relevant parameters like the concentration or laser frequency on the final nanoarchitecture of metal-oxide thin films grown via pulsed laser deposition.

  17. Numerical study on the sequential Bayesian approach for radioactive materials detection

    NASA Astrophysics Data System (ADS)

    Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng

    2013-01-01

    A new detection method, based on the sequential Bayesian approach proposed by Candy et al., offers new horizons for research in radioactive detection. Compared with commonly adopted detection methods built on statistical theory, the sequential Bayesian approach offers the advantage of shorter verification time during the analysis of spectra that contain low total counts, especially for complex radionuclide compositions. In this paper, a simulation experiment platform implementing the methodology of the sequential Bayesian approach was developed. Event sequences of γ-rays associated with the true parameters of a LaBr3(Ce) detector were obtained from an event-sequence generator using Monte Carlo sampling theory, in order to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, respectively represented by the expected detection rate (Am) and the tested detection rate (Gm) parameters, is investigated. To achieve optimal performance for this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.
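
    The event-sequence generation step can be sketched as homogeneous Poisson sampling of arrival times with energies drawn from a crude two-line spectrum plus continuum; the rate, lines, and resolution below are assumptions, not the paper's LaBr3(Ce) detector model.

```python
import numpy as np

rng = np.random.default_rng(11)

# Assumed source: two gamma lines over a flat continuum, Poisson arrivals
rate, duration = 50.0, 10.0           # counts/s and measurement time (s)
lines = np.array([661.7, 1173.2])     # illustrative line energies (keV)
resolution = 0.03                     # 3% Gaussian energy resolution

# Homogeneous Poisson process: Poisson count, then uniform arrival times
n_events = rng.poisson(rate * duration)
times = np.sort(rng.uniform(0.0, duration, n_events))

# Energies: 70% photopeak events, 30% flat continuum (crude detector response)
is_peak = rng.uniform(size=n_events) < 0.7
peak_E = rng.choice(lines, size=n_events)
energies = np.where(
    is_peak,
    peak_E * (1.0 + resolution * rng.standard_normal(n_events)),
    rng.uniform(50.0, 1400.0, n_events),
)
print(f"generated {n_events} events; first arrivals (s): {times[:5].round(3)}")
```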

  18. Exact Tests for the Rasch Model via Sequential Importance Sampling

    ERIC Educational Resources Information Center

    Chen, Yuguo; Small, Dylan

    2005-01-01

    Rasch proposed an exact conditional inference approach to testing his model but never implemented it because it involves the calculation of a complicated probability. This paper furthers Rasch's approach by (1) providing an efficient Monte Carlo methodology for accurately approximating the required probability and (2) illustrating the usefulness…

  19. Markov Chain Monte Carlo Methods for Bayesian Data Analysis in Astronomy

    NASA Astrophysics Data System (ADS)

    Sharma, Sanjib

    2017-08-01

    Markov Chain Monte Carlo based Bayesian data analysis has now become the method of choice for analyzing and interpreting data in almost all disciplines of science. In astronomy, over the last decade, we have also seen a steady increase in the number of papers that employ Monte Carlo based Bayesian analysis. New, efficient Monte Carlo based methods are continuously being developed and explored. In this review, we first explain the basics of Bayesian theory and discuss how to set up data analysis problems within this framework. Next, we provide an overview of various Monte Carlo based methods for performing Bayesian data analysis. Finally, we discuss advanced ideas that enable us to tackle complex problems and thus hold great promise for the future. We also distribute downloadable computer software (available at https://github.com/sanjibs/bmcmc/ ) that implements some of the algorithms and examples discussed here.
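
    A minimal random-walk Metropolis sampler of the kind such reviews build on, fitting a hypothetical straight-line model with flat priors and unit noise; this sketch is not taken from the bmcmc package linked above.

```python
import numpy as np

rng = np.random.default_rng(10)

# Hypothetical data: y = 2x + 1 with unit Gaussian noise
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.standard_normal(50)

def log_post(theta):
    """Log posterior: Gaussian likelihood, flat priors, unit noise assumed."""
    m, c = theta
    return -0.5 * np.sum((y - (m * x + c)) ** 2)

# Random-walk Metropolis
theta = np.array([0.0, 0.0])
lp = log_post(theta)
samples = []
for _ in range(20_000):
    prop = theta + 0.05 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept with prob min(1, ratio)
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

samples = np.array(samples[5000:])             # discard burn-in
print("posterior means (slope, intercept):", samples.mean(axis=0).round(3))
```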

  1. Multiple point statistical simulation using uncertain (soft) conditional data

    NASA Astrophysics Data System (ADS)

    Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou

    2018-05-01

    Geostatistical simulation methods have been used to quantify spatial variability of reservoir models since the 80s. In the last two decades, state of the art simulation methods have changed from being based on covariance-based 2-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth-structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, such that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS methods conditional to uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not account properly for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. Then, we suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, where more informed model parameters are visited preferentially to less informed ones. The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly to uncertain (soft) data, and hence provide a computationally attractive approach for integration of information about a reservoir model.

  2. MR Imaging Based Treatment Planning for Radiotherapy of Prostate Cancer

    DTIC Science & Technology

    2007-02-01

    ...developed practical methods for heterogeneity correction for MRI-based dose calculations (Chen et al 2007). 6) We will use existing Monte Carlo... Monte Carlo verification of IMRT dose distributions from a commercial treatment planning optimization system, Phys. Med. Biol., 45:2483-95 (2000)... accuracy and consistency for MR-based IMRT treatment planning for prostate cancer. A short paper entitled "Monte Carlo dose verification of MR image based...

  3. Sequential Markov chain Monte Carlo filter with simultaneous model selection for electrocardiogram signal modeling.

    PubMed

    Edla, Shwetha; Kovvali, Narayan; Papandreou-Suppappola, Antonia

    2012-01-01

    Constructing statistical models of electrocardiogram (ECG) signals, whose parameters can be used for automated disease classification, is of great importance in precluding manual annotation and providing prompt diagnosis of cardiac diseases. ECG signals consist of several segments with different morphologies (namely the P wave, QRS complex and the T wave) in a single heart beat, which can vary across individuals and diseases. Also, existing statistical ECG models exhibit a reliance upon obtaining a priori information from the ECG data by using preprocessing algorithms to initialize the filter parameters, or to define the user-specified model parameters. In this paper, we propose an ECG modeling technique using the sequential Markov chain Monte Carlo (SMCMC) filter that can perform simultaneous model selection, by adaptively choosing from different representations depending upon the nature of the data. Our results demonstrate the ability of the algorithm to track various types of ECG morphologies, including intermittently occurring ECG beats. In addition, we use the estimated model parameters as the feature set to classify between ECG signals with normal sinus rhythm and four different types of arrhythmia.

  4. Development of volume equations using data obtained by upper stem dendrometry with Monte Carlo integration: preliminary results for eastern redcedar

    Treesearch

    Thomas B. Lynch; Rodney E. Will; Rider Reynolds

    2013-01-01

    Preliminary results are given for development of an eastern redcedar (Juniperus virginiana) cubic-volume equation based on measurements of redcedar sample tree stem volume using dendrometry with Monte Carlo integration. Monte Carlo integration techniques can be used to provide unbiased estimates of stem cubic-foot volume based on upper stem diameter...

  5. Concurrent Monte Carlo transport and fluence optimization with fluence adjusting scalable transport Monte Carlo

    PubMed Central

    Svatos, M.; Zankowski, C.; Bednarz, B.

    2016-01-01

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and the efficiency of well-developed optimization methods, by precalculating the fluence-to-dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces computation time wasted on beamlets that contribute weakly to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent, were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms and one clinical patient geometry to examine the capability of this platform to generate conformal plans, as well as to assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead. PMID:27277051
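
    The gradient machinery can be sketched generically: noisy gradient estimates (standing in for tallies from a handful of histories) are rescaled to unit norm and smoothed with momentum before each fluence update. This is a schematic built from the ingredients the abstract names, not the authors' implementation:

    ```python
    import numpy as np

    def noisy_gradient_descent(grad_estimate, w0, steps=200, lr=0.05,
                               beta=0.9, seed=0):
        """Gradient descent with momentum on highly stochastic gradients."""
        rng = np.random.default_rng(seed)
        w, v = w0.copy(), np.zeros_like(w0)
        for _ in range(steps):
            g = grad_estimate(w, rng)                 # noisy few-history gradient
            g = g / (np.linalg.norm(g) + 1e-12)       # rescale: keep direction, drop noise scale
            v = beta * v + (1.0 - beta) * g           # momentum averages out stochasticity
            w = np.clip(w - lr * v, 0.0, None)        # fluence weights stay non-negative
        return w

    # Toy quadratic objective with artificial gradient noise.
    target = np.array([1.0, 0.2, 0.0, 0.5])
    grad = lambda w, rng: 2.0 * (w - target) + rng.normal(0.0, 1.0, w.size)
    w_opt = noisy_gradient_descent(grad, w0=np.ones(4))
    ```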

  7. New Approaches and Applications for Monte Carlo Perturbation Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aufiero, Manuele; Bidaud, Adrien; Kotlyar, Dan

    2017-02-01

    This paper presents some of the recent and new advancements in the extension of Monte Carlo Perturbation Theory methodologies and applications. In particular, the discussed problems involve burnup calculations, perturbation calculations based on continuous-energy functions, and Monte Carlo Perturbation Theory in loosely coupled systems.

  8. Study of multi-dimensional radiative energy transfer in molecular gases

    NASA Technical Reports Server (NTRS)

    Liu, Jiwen; Tiwari, S. N.

    1993-01-01

    The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. Validation of the Monte Carlo formulations has been conducted by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.

  9. A Monte Carlo Simulation of Brownian Motion in the Freshman Laboratory

    ERIC Educational Resources Information Center

    Anger, C. D.; Prescott, J. R.

    1970-01-01

    Describes a "dry-lab" experiment for the college freshman laboratory in which the essential features of Brownian motion are demonstrated using the Monte Carlo technique. Calculations are carried out by a computational scheme based on a computer language. Bibliography. (LC)
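
    A modern rendition of such a dry-lab computation fits in a few lines; the walker count and unit-step model are arbitrary choices for the example:

    ```python
    import numpy as np

    # Brownian motion as a 2D random walk: each particle takes unit-length
    # steps in random directions, and the mean squared displacement (MSD)
    # grows linearly with step number -- the signature of diffusion.
    rng = np.random.default_rng(0)
    n_walkers, n_steps = 500, 1000
    angles = rng.uniform(0.0, 2.0 * np.pi, (n_walkers, n_steps))
    x = np.cumsum(np.cos(angles), axis=1)
    y = np.cumsum(np.sin(angles), axis=1)
    msd = (x**2 + y**2).mean(axis=0)

    print(msd[-1] / n_steps)   # ≈ 1 for a true random walk with unit steps
    ```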

  10. Population Annealing Monte Carlo for Frustrated Systems

    NASA Astrophysics Data System (ADS)

    Amey, Christopher; Machta, Jonathan

    Population annealing is a sequential Monte Carlo algorithm that efficiently simulates equilibrium systems with rough free energy landscapes such as spin glasses and glassy fluids. A large population of configurations is initially thermalized at high temperature and then cooled to low temperature according to an annealing schedule. The population is kept in thermal equilibrium at every annealing step via resampling configurations according to their Boltzmann weights. Population annealing is comparable to parallel tempering in terms of efficiency, but has several distinct and useful features. In this talk I will give an introduction to population annealing and present recent progress in understanding its equilibration properties and optimizing it for spin glasses. Results from large-scale population annealing simulations for the Ising spin glass in 3D and 4D will be presented. NSF Grant DMR-1507506.
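
    The reweight-and-resample step at the heart of the algorithm can be sketched as follows, assuming configurations stored as array rows with precomputed energies; a production code would follow each such step with Monte Carlo sweeps at the new temperature:

    ```python
    import numpy as np

    def anneal_step(configs, energies, beta_old, beta_new, rng):
        """One population-annealing step: reweight each configuration by
        the Boltzmann factor of the temperature change, then resample so
        the population stays in equilibrium at the new temperature."""
        log_w = -(beta_new - beta_old) * energies   # relative Boltzmann weights
        w = np.exp(log_w - log_w.max())             # stabilized exponentials
        w /= w.sum()
        idx = rng.choice(len(configs), size=len(configs), p=w)
        return configs[idx]                         # then run MCMC sweeps at beta_new
    ```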

  11. Bayesian Treatment of Uncertainty in Environmental Modeling: Optimization, Sampling and Data Assimilation Using the DREAM Software Package

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2012-12-01

    In the past decade much progress has been made in the treatment of uncertainty in earth systems modeling. Whereas initial approaches focused mostly on quantification of parameter and predictive uncertainty, recent methods attempt to disentangle the effects of parameter, forcing (input) data, model structural, and calibration data errors. In this talk I will highlight some of our recent work involving theory, concepts and applications of Bayesian parameter and/or state estimation. In particular, new methods for sequential Monte Carlo (SMC) and Markov Chain Monte Carlo (MCMC) simulation will be presented, with emphasis on massively parallel distributed computing and quantification of model structural errors. The theoretical and numerical developments will be illustrated using model-data synthesis problems in hydrology, hydrogeology and geophysics.

  12. Spatial interpolation of forest conditions using co-conditional geostatistical simulation

    Treesearch

    H. Todd Mowrer

    2000-01-01

    In recent work the author used the geostatistical Monte Carlo technique of sequential Gaussian simulation (s.G.s.) to investigate uncertainty in a GIS analysis of potential old-growth forest areas. The current study compares this earlier technique to that of co-conditional simulation, wherein the spatial cross-correlations between variables are included. As in the...

  13. Solvent effects on the absorption spectrum and first hyperpolarizability of keto-enol tautomeric forms of anil derivatives: A Monte Carlo/quantum mechanics study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adriano Junior, L.; Fonseca, T. L.; Castro, M. A.

    2016-06-21

    Theoretical results for the absorption spectrum and electric properties of the enol and keto tautomeric forms of anil derivatives in the gas-phase and in solution are presented. The electronic properties in chloroform, acetonitrile, methanol, and water were determined by carrying out sequential Monte Carlo simulations and quantum mechanics calculations based on the time dependent density functional theory and on the second-order Møller–Plesset perturbation theory method. The results illustrate the role played by electrostatic interactions in the electronic properties of anil derivatives in a liquid environment. There is a significant increase of the dipole moment in solution (20%-100%) relative to the gas-phase value. Solvent effects are mild for the absorption spectrum and linear polarizability but they can be particularly important for first hyperpolarizability. A large first hyperpolarizability contrast between the enol and keto forms is observed when absorption spectra present intense lowest-energy absorption bands. Dynamic results for the first hyperpolarizability are in qualitative agreement with the available experimental results.

  14. A hybrid parallel framework for the cellular Potts model simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Yi; He, Kejing; Dong, Shoubin

    2009-01-01

    The Cellular Potts Model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximate, and cannot be used for large-scale, complex 3D simulations. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed to clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on shared-memory SMP systems using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving, and SMP systems are more and more common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel Cellular Potts Model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for the large-scale simulation (~10^8 sites) of the complex collective behavior of numerous cells (~10^6).

  15. Monte Carlo simulation methodology for the reliability of aircraft structures under damage tolerance considerations

    NASA Astrophysics Data System (ADS)

    Rambalakos, Andreas

    Current federal aviation regulations in the United States and around the world mandate that aircraft structures meet damage tolerance requirements throughout the service life. These requirements imply that the damaged aircraft structure must maintain adequate residual strength in order to sustain its integrity, which is accomplished by a continuous inspection program. The multifold objective of this research is to develop a methodology based on a direct Monte Carlo simulation process to assess the reliability of aircraft structures. Initially, the structure is modeled as a parallel system with active redundancy, comprised of elements with uncorrelated (statistically independent) strengths and subjected to an equal load distribution. Closed-form expressions for the system capacity cumulative distribution function (CDF) are developed by extending the current expression for the capacity CDF of a parallel system of three elements to parallel systems of up to six elements. These newly developed expressions are used to check the accuracy of a Monte Carlo simulation algorithm that determines the probability of failure of a parallel system comprised of an arbitrary number of statistically independent elements. The second objective of this work is to compute the probability of failure of a fuselage skin lap joint under static load conditions through a Monte Carlo simulation scheme, by utilizing the residual strength of the fasteners subjected to various initial load distributions and then to a new unequal load distribution resulting from subsequent sequential fastener failures. The final and main objective of this thesis is to present a methodology for computing the gradual deterioration of the reliability of an aircraft structural component by employing a direct Monte Carlo simulation approach. The uncertainties associated with the time to crack initiation, the probability of crack detection, the exponent in the crack propagation rate (Paris equation) and the yield strength of the elements are considered in the analytical model. The structural component is assumed to consist of a prescribed number of elements. This Monte Carlo simulation methodology is used to determine the required non-periodic inspections so that the reliability of the structural component does not fall below a prescribed minimum level. A sensitivity analysis is conducted to determine the effect of three key parameters on the specification of the non-periodic inspection intervals: a parameter associated with the time to crack initiation, the applied nominal stress fluctuation, and the minimum acceptable reliability level.
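
    The equal-load-sharing cascade described above is straightforward to estimate by direct Monte Carlo; the strength distribution and load below are placeholders, not the thesis's values:

    ```python
    import numpy as np

    def parallel_system_failure_prob(n_elements, total_load, strength_sampler,
                                     n_trials=100_000, seed=0):
        """Failure probability of a parallel system with equal load sharing:
        an element whose strength is below its share fails, and its load is
        redistributed over the survivors (a sequential failure cascade)."""
        rng = np.random.default_rng(seed)
        failures = 0
        for _ in range(n_trials):
            strengths = np.sort(strength_sampler(rng, n_elements))
            survivors = n_elements
            for s in strengths:                  # weakest survivor checked first
                if s < total_load / survivors:
                    survivors -= 1               # failure; load redistributes
                else:
                    break                        # stronger elements all hold
            failures += (survivors == 0)
        return failures / n_trials

    p_f = parallel_system_failure_prob(
        6, total_load=5.0,
        strength_sampler=lambda rng, n: rng.normal(1.0, 0.2, n))
    ```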

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serin, E.; Codel, G.; Mabhouti, H.

    Purpose: In small field geometries, electronic equilibrium can be lost, making it challenging for a dose-calculation algorithm to accurately predict the dose, especially in the presence of tissue heterogeneities. In this study, the dosimetric accuracy of the Monte Carlo (MC) advanced dose calculation and sequential algorithms of the Multiplan treatment planning system was investigated for small radiation fields incident on homogeneous and heterogeneous geometries. Methods: Small open fields of the fixed cones of a CyberKnife M6 unit (100 to 500 mm2) were used for this study. The fields were incident on an in-house phantom containing lung, air, and bone inhomogeneities, and also on a homogeneous phantom. Using the same film batch, the net OD to dose calibration curve was obtained using the CK with the 60 mm fixed cone by delivering 0-800 cGy. Films were scanned 48 hours after irradiation using an Epson 1000XL flatbed scanner. The dosimetric accuracy of the MC and sequential algorithms in the presence of the inhomogeneities was compared against EBT3 film dosimetry. Results: Open field tests in a homogeneous phantom showed good agreement between the two algorithms and film measurement. For the MC algorithm, the minimum gamma analysis passing rates between measured and calculated dose distributions were 99.7% and 98.3% for homogeneous and inhomogeneous fields in the case of lung and bone, respectively. For the sequential algorithm, the minimum gamma analysis passing rates were 98.9% and 92.5% for homogeneous and inhomogeneous fields, respectively, for all cone sizes used. In the case of the air heterogeneity, the differences were larger for both calculation algorithms. Overall, when compared to measurement, the MC algorithm had better agreement than the sequential algorithm. Conclusion: The Monte Carlo calculation algorithm in the Multiplan treatment planning system is an improvement over the existing sequential algorithm. Dose discrepancies were observed in the presence of air inhomogeneities.

  17. An empirical approach to estimate near-infra-red photon propagation and optically induced drug release in brain tissues

    NASA Astrophysics Data System (ADS)

    Prabhu Verleker, Akshay; Fang, Qianqian; Choi, Mi-Ran; Clare, Susan; Stantz, Keith M.

    2015-03-01

    The purpose of this study is to develop an alternate empirical approach to estimate near-infra-red (NIR) photon propagation and quantify optically induced drug release in brain metastasis, without relying on computationally expensive Monte Carlo techniques (the gold standard). Targeted drug delivery with optically induced drug release is a noninvasive means to treat cancers and metastasis. This study is part of a larger project to treat brain metastasis by delivering lapatinib-drug-nanocomplexes and activating NIR-induced drug release. The empirical model was developed using a weighted approach to estimate photon scattering in tissues and was calibrated using a GPU based 3D Monte Carlo. The empirical model was tested against Monte Carlo in optical brain phantoms for pencil beams (width 1mm) and broad beams (width 10mm), for different albedos along with the diffusion equation, and in simulated brain phantoms resembling white matter (μs'=8.25mm-1, μa=0.005mm-1) and gray matter (μs'=2.45mm-1, μa=0.035mm-1) at a wavelength of 800nm. The goodness of fit between the two models was determined using the coefficient of determination (R-squared analysis). Preliminary results show the empirical algorithm matches Monte Carlo simulated fluence over a wide range of albedos (0.7 to 0.99), while the diffusion equation fails for lower albedos. The photon fluence generated by the empirical code matched the Monte Carlo in homogeneous phantoms (R2=0.99). While the GPU based Monte Carlo achieved 300X acceleration compared to earlier CPU based models, the empirical code is 700X faster than the Monte Carlo for a typical super-Gaussian laser beam.

  18. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents the reliability-based sequential optimization (RBSO) method to settle the trajectory optimization problem with parametric uncertainties in entry dynamics for Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, the modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the approximation of trajectory solution efficiently. The MPP method, which is used for assessing the reliability of constraints satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle including SO, reliability assessment and constraints update is repeated in the RBSO until the reliability requirements of constraints satisfaction are satisfied. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and the efficiency of the proposed method.

  19. Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry.

    PubMed

    Bostani, Maryam; Mueller, Jonathon W; McMillan, Kyle; Cody, Dianna D; Cagnon, Chris H; DeMarco, John J; McNitt-Gray, Michael F

    2015-02-01

    The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. The calculated mean percent difference between TLD measurements and Monte Carlo simulations was -4.9% with standard deviation of 8.7% and a range of -22.7% to 5.7%. The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.

  20. Massively parallelized Monte Carlo software to calculate the light propagation in arbitrarily shaped 3D turbid media

    NASA Astrophysics Data System (ADS)

    Zoller, Christian; Hohmann, Ansgar; Ertl, Thomas; Kienle, Alwin

    2017-07-01

    The Monte Carlo method is often referred to as the gold standard for calculating light propagation in turbid media [1]. It becomes especially important for complex-shaped geometries where no analytical solutions are available [1, 2]. In this work a Monte Carlo software package is presented to simulate light propagation in complex-shaped geometries. To reduce the simulation time, the code is based on OpenCL, such that graphics cards as well as other computing devices can be used. Within the software an illumination concept is presented to easily realize all kinds of light sources, such as spatial frequency domain (SFD) illumination, optical fibers or Gaussian beam profiles. Moreover, different objects which are not connected to each other can be considered simultaneously, without any additional preprocessing. This Monte Carlo software can be used for many applications. In this work the transmission spectrum of a tooth and the color reconstruction of a virtual object are shown, using results from the Monte Carlo software.

  1. How Monte Carlo heuristics aid to identify the physical processes of drug release kinetics.

    PubMed

    Lecca, Paola

    2018-01-01

    We implement a Monte Carlo heuristic algorithm to model drug release from a solid dosage form. We show that with Monte Carlo simulations it is possible to identify and explain the causes of the unsatisfactory predictive power of current drug release models. It is well known that the power-law and exponential models, as well as those derived from or inspired by them, accurately reproduce only the first 60% of the release curve of a drug from a dosage form. In this study, using Monte Carlo simulation approaches, we show that these models fit quite accurately almost the entire release profile when the release kinetics is not governed by the coexistence of different physico-chemical mechanisms. We show that the accuracy of the traditional models is comparable with that of Monte Carlo heuristics when these heuristics approximate and oversimplify the phenomenology of drug release. This observation suggests developing and using novel Monte Carlo simulation heuristics able to describe the complexity of the release kinetics, and consequently to generate data more similar to those observed in real experiments. Implementing Monte Carlo simulation heuristics of the drug release phenomenology can be much more straightforward and efficient than hypothesizing and implementing complex mathematical models of the physical processes involved in drug release from scratch. Identifying and understanding through simulation heuristics which processes of this phenomenology reproduce the observed data, and then formalizing them mathematically, may allow avoiding time-consuming, trial-and-error regression procedures. Three bullet points highlight the customization of the procedure. •An efficient heuristic based on Monte Carlo methods for simulating drug release from solid dosage forms is presented. It specifies the model of the physical process in a simple but accurate way through the formula of the Monte Carlo Micro Step (MCS) time interval.•Given the experimentally observed curve of drug release, we point out how Monte Carlo heuristics can be integrated in an evolutionary algorithmic approach to infer the form of the MCS best fitting the observed data, and thus the observed release kinetics.•The software implementing the method is written in R, one of the free languages most widely used in the bioinformatics community.
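
    As a toy version of such a heuristic, molecules can random-walk on a lattice and count as released once they reach a face of the slab; this sketches the general idea only, not the paper's R implementation or its MCS time-interval formula:

    ```python
    import numpy as np

    def mc_drug_release(n_molecules=5000, slab_width=50, n_steps=5000, seed=0):
        """Fraction of drug released per Monte Carlo step: molecules
        random-walk on a 1D lattice and are released at either face."""
        rng = np.random.default_rng(seed)
        pos = rng.integers(1, slab_width - 1, n_molecules).astype(float)
        inside = np.ones(n_molecules, dtype=bool)
        released = np.empty(n_steps)
        for t in range(n_steps):
            pos[inside] += rng.choice([-1.0, 1.0], size=inside.sum())
            inside &= (pos > 0) & (pos < slab_width - 1)
            released[t] = 1.0 - inside.mean()
        return released

    curve = mc_drug_release()
    # The early-time part of `curve` follows the sqrt(t)-like regime that
    # power-law models capture; the full profile departs from it.
    ```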

  2. SU-F-T-619: Dose Evaluation of Specific Patient Plans Based On Monte Carlo Algorithm for a CyberKnife Stereotactic Radiosurgery System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piao, J; PLA 302 Hospital, Beijing; Xu, S

    2016-06-15

    Purpose: This study uses Monte Carlo methods to simulate the CyberKnife system, with the intent of developing a third-party tool to verify the dose of patient-specific plans produced by the TPS. Methods: By simulating the treatment head using the BEAMnrc and DOSXYZnrc software, comparisons between calculated and measured data were performed to determine the beam parameters. The dose distributions calculated by the Ray-tracing and Monte Carlo algorithms of the TPS (Multiplan Ver4.0.2) and by the in-house Monte Carlo simulation method were analyzed for 30 patient plans (10 head, 10 lung, and 10 liver cases). γ analysis with the combined 3mm/3% criteria was introduced to quantitatively evaluate the difference in accuracy between the three algorithms. Results: More than 90% of the global error points were less than 2% in the comparison of the PDD and OAR curves after determining the mean energy and FWHM, establishing a reasonably ideal Monte Carlo beam model. Based on the quantitative evaluation of dose accuracy for the three algorithms, the γ analysis shows that the passing rates of the PTV in the 30 plans between the Monte Carlo simulation and the TPS Monte Carlo algorithm were good (84.88±9.67% for head, 98.83±1.05% for liver, 98.26±1.87% for lung). The passing rates of the PTV in head and liver plans between the Monte Carlo simulation and the TPS Ray-tracing algorithm were also good (95.93±3.12% and 99.84±0.33%, respectively). However, the difference in DVHs in lung plans between the Monte Carlo simulation and the Ray-tracing algorithm was obvious, and the γ passing rate (51.263±38.964%) was not good. It is feasible to use Monte Carlo simulation for verifying the dose distributions of patient plans. Conclusion: The Monte Carlo simulation algorithm developed for the CyberKnife system in this study can be used as a third-party reference tool, which plays an important role in dose verification of patient plans. This work was supported in part by a grant from the Chinese Natural Science Foundation (Grant No. 11275105). Thanks for the support from Accuray Corp.

  3. Dose calculation accuracy of the Monte Carlo algorithm for CyberKnife compared with other commercially available dose calculation algorithms.

    PubMed

    Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny

    2011-01-01

    Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required. Copyright © 2011 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  4. Fixed forced detection for fast SPECT Monte-Carlo simulation

    NASA Astrophysics Data System (ADS)

    Cajgfinger, T.; Rit, S.; Létang, J. M.; Halty, A.; Sarrut, D.

    2018-03-01

    Monte-Carlo simulations of SPECT images are notoriously slow to converge due to the large ratio between the number of photons emitted and detected in the collimator. This work proposes a method to accelerate the simulations based on fixed forced detection (FFD) combined with an analytical response of the detector. FFD is based on a Monte-Carlo simulation but forces the detection of a photon in each detector pixel weighted by the probability of emission (or scattering) and transmission to this pixel. The method was evaluated with numerical phantoms and on patient images. We obtained differences with analog Monte Carlo lower than the statistical uncertainty. The overall computing time gain can reach up to five orders of magnitude. Source code and examples are available in the Gate V8.0 release.
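
    The essence of the forced-detection weighting can be sketched in a few lines; this toy version assumes isotropic emission and a single uniform attenuation coefficient, and omits the collimator/detector response that the full method combines with the weights:

    ```python
    import numpy as np

    def forced_detection_weights(emission_point, pixel_centers, mu, path_lengths):
        """For one emitted photon, deposit a weight in *every* pixel: the
        isotropic solid-angle factor toward the pixel (per unit pixel
        area) times the transmission along the ray to it."""
        d = np.linalg.norm(pixel_centers - emission_point, axis=1)
        solid_angle = 1.0 / (4.0 * np.pi * d**2)          # isotropic emission
        return solid_angle * np.exp(-mu * path_lengths)   # attenuation to the pixel

    # One photon at the origin viewed by a toy 3-pixel detector row; the
    # path lengths through the attenuating medium would come from a raytrace.
    pixels = np.array([[10.0, 0.0, 0.0], [10.0, 1.0, 0.0], [10.0, 2.0, 0.0]])
    w = forced_detection_weights(np.zeros(3), pixels, mu=0.15,
                                 path_lengths=np.array([8.0, 8.1, 8.3]))
    ```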

  5. Fixed forced detection for fast SPECT Monte-Carlo simulation.

    PubMed

    Cajgfinger, T; Rit, S; Létang, J M; Halty, A; Sarrut, D

    2018-03-02

    Monte-Carlo simulations of SPECT images are notoriously slow to converge due to the large ratio between the number of photons emitted and detected in the collimator. This work proposes a method to accelerate the simulations based on fixed forced detection (FFD) combined with an analytical response of the detector. FFD is based on a Monte-Carlo simulation but forces the detection of a photon in each detector pixel weighted by the probability of emission (or scattering) and transmission to this pixel. The method was evaluated with numerical phantoms and on patient images. We obtained differences with analog Monte Carlo lower than the statistical uncertainty. The overall computing time gain can reach up to five orders of magnitude. Source code and examples are available in the Gate V8.0 release.

  6. Fission matrix-based Monte Carlo criticality analysis of fuel storage pools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farlotti, M.; Ecole Polytechnique, Palaiseau, F 91128; Larsen, E. W.

    2013-07-01

    Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
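
    Once the fission matrix has been tallied, the dominant eigenpair is obtained deterministically; a minimal power-iteration sketch (the 2×2 matrix is a made-up stand-in for Monte Carlo tallies):

    ```python
    import numpy as np

    def fission_matrix_eigen(F, tol=1e-12, max_iter=10_000):
        """Power iteration on a fission matrix F, where F[i, j] is the
        expected number of fission neutrons born in region i per fission
        neutron born in region j. Returns k-effective and the source shape."""
        s = np.ones(F.shape[0]) / F.shape[0]
        k_old = 0.0
        for _ in range(max_iter):
            v = F @ s
            k = v.sum()              # eigenvalue estimate (k-effective)
            s = v / k                # normalized fission source (eigenfunction)
            if abs(k - k_old) < tol:
                break
            k_old = k
        return k, s

    F = np.array([[0.9, 0.3],
                  [0.2, 0.8]])       # toy tallies
    keff, source = fission_matrix_eigen(F)   # keff ≈ 1.1 for this matrix
    ```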

  7. Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.

    Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.

  8. Recommender engine for continuous-time quantum Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Huang, Li; Yang, Yi-feng; Wang, Lei

    2017-03-01

    Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.

  9. Monte Carlo reference data sets for imaging research: Executive summary of the report of AAPM Research Committee Task Group 195.

    PubMed

    Sechopoulos, Ioannis; Ali, Elsayed S M; Badal, Andreu; Badano, Aldo; Boone, John M; Kyprianou, Iacovos S; Mainegra-Hing, Ernesto; McMillan, Kyle L; McNitt-Gray, Michael F; Rogers, D W O; Samei, Ehsan; Turner, Adam C

    2015-10-01

    The use of Monte Carlo simulations in diagnostic medical imaging research is widespread due to its flexibility and ability to estimate quantities that are challenging to measure empirically. However, any new Monte Carlo simulation code needs to be validated before it can be used reliably. The type and degree of validation required depends on the goals of the research project, but, typically, such validation involves either comparison of simulation results to physical measurements or to previously published results obtained with established Monte Carlo codes. The former is complicated due to nuances of experimental conditions and uncertainty, while the latter is challenging due to typical graphical presentation and lack of simulation details in previous publications. In addition, entering the field of Monte Carlo simulations in general involves a steep learning curve. It is not a simple task to learn how to program and interpret a Monte Carlo simulation, even when using one of the publicly available code packages. This Task Group report provides a common reference for benchmarking Monte Carlo simulations across a range of Monte Carlo codes and simulation scenarios. In the report, all simulation conditions are provided for six different Monte Carlo simulation cases that involve common x-ray based imaging research areas. The results obtained for the six cases using four publicly available Monte Carlo software packages are included in tabular form. In addition to a full description of all simulation conditions and results, a discussion and comparison of results among the Monte Carlo packages and the lessons learned during the compilation of these results are included. This abridged version of the report includes only an introductory description of the six cases and a brief example of the results of one of the cases. This work provides an investigator the necessary information to benchmark his/her Monte Carlo simulation software against the reference cases included here before performing his/her own novel research. In addition, an investigator entering the field of Monte Carlo simulations can use these descriptions and results as a self-teaching tool to ensure that he/she is able to perform a specific simulation correctly. Finally, educators can assign these cases as learning projects as part of course objectives or training programs.

  10. Monte Carlo-based Reconstruction in Water Cherenkov Detectors using Chroma

    NASA Astrophysics Data System (ADS)

    Seibert, Stanley; Latorre, Anthony

    2012-03-01

    We demonstrate the feasibility of event reconstruction---including position, direction, energy and particle identification---in water Cherenkov detectors with a purely Monte Carlo-based method. Using a fast optical Monte Carlo package we have written, called Chroma, in combination with several variance reduction techniques, we can estimate the value of a likelihood function for an arbitrary event hypothesis. The likelihood can then be maximized over the parameter space of interest using a form of gradient descent designed for stochastic functions. Although slower than more traditional reconstruction algorithms, this completely Monte Carlo-based technique is universal and can be applied to a detector of any size or shape, which is a major advantage during the design phase of an experiment. As a specific example, we focus on reconstruction results from a simulation of the 200 kiloton water Cherenkov far detector option for LBNE.

  11. Suppressing correlations in massively parallel simulations of lattice models

    NASA Astrophysics Data System (ADS)

    Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle

    2017-11-01

    For lattice Monte Carlo simulations parallelization is crucial to make studies of large systems and long simulation time feasible, while sequential simulations remain the gold-standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one which delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for 2 + 1 dimensional Kardar-Parisi-Zhang surface growth, which is very sensitive to correlation in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about 30 × over a parallel CPU implementation on a single socket and at least 180 × with respect to the sequential reference.

  12. Feature aided Monte Carlo probabilistic data association filter for ballistic missile tracking

    NASA Astrophysics Data System (ADS)

    Ozdemir, Onur; Niu, Ruixin; Varshney, Pramod K.; Drozd, Andrew L.; Loe, Richard

    2011-05-01

    The problem of ballistic missile tracking in the presence of clutter is investigated. The probabilistic data association filter (PDAF) is utilized as the basic filtering algorithm. We propose to use sequential Monte Carlo methods, i.e., particle filters, aided with amplitude information (AI) in order to improve the tracking performance of a single target in clutter when severe nonlinearities exist in the system. We call this approach "Monte Carlo probabilistic data association filter with amplitude information (MCPDAF-AI)." Furthermore, we formulate a realistic problem in the sense that we use simulated radar cross section (RCS) data for a missile warhead and a cylinder chaff, generated using Lucernhammer, a state-of-the-art electromagnetic signature prediction software, to model target and clutter amplitude returns as additional amplitude features which help to improve data association and tracking performance. A performance comparison is carried out between the extended Kalman filter (EKF) and the particle filter under various scenarios using single and multiple sensors. The results show that, when only one sensor is used, the MCPDAF performs significantly better than the EKF in terms of tracking accuracy under severe nonlinear conditions for ballistic missile tracking applications. However, when the number of sensors is increased, even under severe nonlinear conditions, the EKF performs as well as the MCPDAF.
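
    The way amplitude information enters the association weights can be sketched in isolation: each gated measurement's kinematic likelihood is multiplied by the likelihood ratio of its amplitude under the target and clutter models. The Rayleigh amplitude parameters, detection probability and clutter density below are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    def pdaf_ai_weights(innov, innov_var, amps, p_amp_tgt, p_amp_clu,
                        p_detect=0.9, clutter_density=1e-3, p_gate=0.99):
        """PDAF-style association probabilities augmented with amplitude
        information; beta[-1] is the 'no measurement is target-originated'
        hypothesis. A schematic form of the weighting only."""
        kin = np.exp(-0.5 * innov**2 / innov_var) / np.sqrt(2 * np.pi * innov_var)
        amp_lr = p_amp_tgt(amps) / p_amp_clu(amps)        # amplitude feature
        scores = p_detect * kin * amp_lr / clutter_density
        beta = np.append(scores, 1.0 - p_detect * p_gate)
        return beta / beta.sum()

    # Rayleigh amplitude models: target returns are stronger on average.
    w = pdaf_ai_weights(
        innov=np.array([0.2, 1.5]), innov_var=1.0, amps=np.array([3.0, 1.0]),
        p_amp_tgt=lambda a: (a / 4.0) * np.exp(-a**2 / 8.0),
        p_amp_clu=lambda a: a * np.exp(-a**2 / 2.0))
    ```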

  13. Surface entropy of liquids via a direct Monte Carlo approach - Application to liquid Si

    NASA Technical Reports Server (NTRS)

    Wang, Z. Q.; Stroud, D.

    1990-01-01

    Two methods are presented for a direct Monte Carlo evaluation of the surface entropy S(s) of a liquid interacting by specified, volume-independent potentials. The first method is based on an application of the approach of Ferrenberg and Swendsen (1988, 1989) to Monte Carlo simulations at two different temperatures; it gives much more reliable results for S(s) in liquid Si than previous calculations based on numerical differentiation. The second method expresses the surface entropy directly as a canonical average at fixed temperature.

  14. Accelerated event-by-event Monte Carlo microdosimetric calculations of electrons and protons tracks on a multi-core CPU and a CUDA-enabled GPU.

    PubMed

    Kalantzis, Georgios; Tachibana, Hidenobu

    2014-01-01

    For microdosimetric calculations event-by-event Monte Carlo (MC) methods are considered the most accurate. The main shortcoming of those methods is the extensive requirement for computational time. In this work we present an event-by-event MC code of low projectile energy electron and proton tracks for accelerated microdosimetric MC simulations on a graphic processing unit (GPU). Additionally, a hybrid implementation scheme was realized by employing OpenMP and CUDA in such a way that both GPU and multi-core CPU were utilized simultaneously. The two implementation schemes have been tested and compared with the sequential single threaded MC code on the CPU. Performance comparison was established on the speed-up for a set of benchmarking cases of electron and proton tracks. A maximum speedup of 67.2 was achieved for the GPU-based MC code, while a further improvement of the speedup up to 20% was achieved for the hybrid approach. The results indicate the capability of our CPU-GPU implementation for accelerated MC microdosimetric calculations of both electron and proton tracks without loss of accuracy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  15. On uncertainty quantification in hydrogeology and hydrogeophysics

    NASA Astrophysics Data System (ADS)

    Linde, Niklas; Ginsbourger, David; Irving, James; Nobile, Fabio; Doucet, Arnaud

    2017-12-01

    Recent advances in sensor technologies, field methodologies, numerical modeling, and inversion approaches have contributed to unprecedented imaging of hydrogeological properties and detailed predictions at multiple temporal and spatial scales. Nevertheless, imaging results and predictions will always remain imprecise, which calls for appropriate uncertainty quantification (UQ). In this paper, we outline selected methodological developments together with pioneering UQ applications in hydrogeology and hydrogeophysics. The applied mathematics and statistics literature is not easy to penetrate and this review aims at helping hydrogeologists and hydrogeophysicists to identify suitable approaches for UQ that can be applied and further developed to their specific needs. To bypass the tremendous computational costs associated with forward UQ based on full-physics simulations, we discuss proxy-modeling strategies and multi-resolution (Multi-level Monte Carlo) methods. We consider Bayesian inversion for non-linear and non-Gaussian state-space problems and discuss how Sequential Monte Carlo may become a practical alternative. We also describe strategies to account for forward modeling errors in Bayesian inversion. Finally, we consider hydrogeophysical inversion, where petrophysical uncertainty is often ignored leading to overconfident parameter estimation. The high parameter and data dimensions encountered in hydrogeological and geophysical problems make UQ a complicated and important challenge that has only been partially addressed to date.

  16. A Monte Carlo method for the simulation of coagulation and nucleation based on weighted particles and the concepts of stochastic resolution and merging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.

    Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow a solution framework to be formulated for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. Tuning the accuracy (named ‘stochastic resolution’ in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented in the scope of a constant-number scheme: the low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named ‘random removal’ in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing constant-rate nucleation coupled to simultaneous coagulation in (1) the free-molecular regime or (2) the continuum regime are simulated for this purpose.
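
    The transformation from weighted to (approximately equivalent) non-weighted particles can be sketched as a resampling step whose output size sets the accuracy, i.e. the stochastic resolution; a minimal serial version, not the authors' GPU code:

    ```python
    import numpy as np

    def to_unweighted(sizes, weights, n_out, seed=0):
        """Resample a weighted particle population into n_out equally
        weighted particles (each then carries weight sum(weights)/n_out);
        larger n_out means finer stochastic resolution."""
        rng = np.random.default_rng(seed)
        p = np.asarray(weights, dtype=float)
        p /= p.sum()
        idx = rng.choice(len(p), size=n_out, p=p)
        return np.asarray(sizes)[idx]

    # Toy population: particle sizes with very uneven statistical weights.
    sizes = np.array([1.0, 2.0, 5.0, 10.0])
    weights = np.array([1e6, 1e3, 10.0, 1.0])
    equal_weight_sizes = to_unweighted(sizes, weights, n_out=1000)
    ```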

  17. Mosaicing of airborne LiDAR bathymetry strips based on Monte Carlo matching

    NASA Astrophysics Data System (ADS)

    Yang, Fanlin; Su, Dianpeng; Zhang, Kai; Ma, Yue; Wang, Mingwei; Yang, Anxiu

    2017-09-01

    This study proposes a new methodology for mosaicing airborne light detection and ranging (LiDAR) bathymetry (ALB) data based on Monte Carlo matching. Various errors occur in ALB data due to imperfect system integration and other interference factors. To account for these errors, a Monte Carlo matching algorithm based on a nonlinear least-squares adjustment model is proposed. First, the raw data of strip overlap areas were filtered according to their relative drift of depths. Second, a Monte Carlo model and nonlinear least-squares adjustment model were combined to obtain seven transformation parameters. Then, the multibeam bathymetric data were used to correct the initial strip during strip mosaicing. Finally, to evaluate the proposed method, the experimental results were compared with the results of the Iterative Closest Points (ICP) and three-dimensional Normal Distributions Transform (3D-NDT) algorithms. The results demonstrate that the algorithm proposed in this study is more robust and effective. When the quality of the raw data is poor, the Monte Carlo matching algorithm can still achieve centimeter-level accuracy for overlapping areas, which meets the accuracy of bathymetry required by IHO Standards for Hydrographic Surveys Special Publication No.44.

  18. An unbiased Hessian representation for Monte Carlo PDFs.

    PubMed

    Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Latorre, José Ignacio; Rojo, Juan

    We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then that, if applied to a Hessian PDF set (MMHT14) which was transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as a combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather smaller set of parameters (MC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available, together with (through LHAPDF6) a Hessian representation of the NNPDF3.0 set and the MC-H PDF set.

  19. NOTE: Monte Carlo evaluation of kerma in an HDR brachytherapy bunker

    NASA Astrophysics Data System (ADS)

    Pérez-Calatayud, J.; Granero, D.; Ballester, F.; Casal, E.; Crispin, V.; Puchades, V.; León, A.; Verdú, G.

    2004-12-01

    In recent years, the use of high dose rate (HDR) after-loader machines has greatly increased due to the shift from traditional Cs-137/Ir-192 low dose rate (LDR) to HDR brachytherapy. The method used to calculate the required concrete and, where appropriate, lead shielding in the door is based on analytical methods provided by documents published by the ICRP, the IAEA and the NCRP. The purpose of this study is to perform a more realistic kerma evaluation at the entrance maze door of an HDR bunker using the Monte Carlo code GEANT4. The Monte Carlo results were validated experimentally. The spectrum at the maze entrance door, obtained with Monte Carlo, has an average energy of about 110 keV, maintaining a similar value along the length of the maze. Comparison of the analytically derived values with the Monte Carlo ones shows that the results obtained using the albedo coefficient from the ICRP document most closely match those given by the Monte Carlo method, although the maximum value given by the MC calculations is 30% greater.

  20. Monte Carlo Calculations of Polarized Microwave Radiation Emerging from Cloud Structures

    NASA Technical Reports Server (NTRS)

    Kummerow, Christian; Roberti, Laura

    1998-01-01

    The last decade has seen tremendous growth in cloud dynamical and microphysical models that are able to simulate storms and storm systems with very high spatial resolution, typically of the order of a few kilometers. The fairly realistic distributions of cloud and hydrometeor properties that these models generate has in turn led to a renewed interest in the three-dimensional microwave radiative transfer modeling needed to understand the effect of cloud and rainfall inhomogeneities upon microwave observations. Monte Carlo methods, and particularly backwards Monte Carlo methods have shown themselves to be very desirable due to the quick convergence of the solutions. Unfortunately, backwards Monte Carlo methods are not well suited to treat polarized radiation. This study reviews the existing Monte Carlo methods and presents a new polarized Monte Carlo radiative transfer code. The code is based on a forward scheme but uses aliasing techniques to keep the computational requirements equivalent to the backwards solution. Radiative transfer computations have been performed using a microphysical-dynamical cloud model and the results are presented together with the algorithm description.

  1. Radiative transfer modelling inside thermal protection system using hybrid homogenization method for a backward Monte Carlo method coupled with Mie theory

    NASA Astrophysics Data System (ADS)

    Le Foll, S.; André, F.; Delmas, A.; Bouilly, J. M.; Aspa, Y.

    2012-06-01

    A backward Monte Carlo method for modelling the spectral directional emittance of fibrous media has been developed. It uses Mie theory to calculate the radiative properties of single fibres, modelled as infinite cylinders, and the complex refractive index is computed by a Drude-Lorenz model for the dielectric function. The absorption and scattering coefficients are homogenised over several fibres, but the scattering phase function of a single fibre is used to determine the scattering direction of energy inside the medium. Sensitivity analysis based on several Monte Carlo results has been performed to estimate coefficients for a Multi-Linear Model (MLM) specifically developed for inverse analysis of experimental data. This model agrees well with the Monte Carlo method and is highly computationally efficient. In contrast, the surface emissivity model, which assumes an opaque medium, shows poor agreement with the reference Monte Carlo calculations.

  2. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frambati, S.; Frignani, M.

    2012-07-01

    We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool for computer-aided design for radiation transport code users in the nuclear field, in particular in the areas of core design and radiation analysis. (authors)

  3. Adaptive Sequential Monte Carlo for Multiple Changepoint Analysis

    DOE PAGES

    Heard, Nicholas A.; Turcotte, Melissa J. M.

    2016-05-21

    Process monitoring and control requires detection of structural changes in a data stream in real time. This paper introduces an efficient sequential Monte Carlo algorithm designed for learning unknown changepoints in continuous time. The method is intuitively simple: new changepoints for the latest window of data are proposed by conditioning only on data observed since the most recent estimated changepoint, as these observations carry most of the information about the current state of the process. The proposed method shows improved performance over the current state of the art. Another advantage of the proposed algorithm is that it can be made adaptive, varying the number of particles according to the apparent local complexity of the target changepoint probability distribution. This saves valuable computing time when changes in the changepoint distribution are negligible, and enables re-balancing of the importance weights of existing particles when a significant change in the target distribution is encountered. The plain and adaptive versions of the method are illustrated using the canonical continuous time changepoint problem of inferring the intensity of an inhomogeneous Poisson process, although the method is generally applicable to any changepoint problem. Performance is demonstrated using both conjugate and non-conjugate Bayesian models for the intensity. Lastly, appendices to the article are available online, illustrating the method on other models and applications.

  4. Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning

    NASA Astrophysics Data System (ADS)

    Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.

    2008-02-01

    Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies as well as the tracks of secondary electrons are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head & neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced up to 62 times (46 times on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.
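
    The Russian roulette step mentioned above follows standard unbiased weight bookkeeping, sketched here together with its splitting counterpart; the thresholds and survival probability are illustrative, not the paper's settings:

    ```python
    import numpy as np

    def roulette_and_split(weight, w_low=0.1, w_high=2.0, survival=0.5, rng=None):
        """Return the list of daughter weights for one particle. Roulette
        kills low-weight particles with probability 1-survival and boosts
        survivors so the expected weight is unchanged; splitting turns a
        heavy particle into several lighter copies of the same total weight."""
        rng = rng or np.random.default_rng(0)
        if weight < w_low:                       # Russian roulette
            if rng.random() < survival:
                return [weight / survival]       # survivor carries the lost weight
            return []                            # history terminated
        if weight > w_high:                      # splitting
            n = int(np.ceil(weight))
            return [weight / n] * n              # n copies, total weight preserved
        return [weight]
    ```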

  5. Parallel Markov chain Monte Carlo - bridging the gap to high-performance Bayesian computation in animal breeding and genetics.

    PubMed

    Wu, Xiao-Lin; Sun, Chuanyu; Beissinger, Timothy M; Rosa, Guilherme Jm; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel

    2012-09-25

    Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from serial computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Parallel Markov chain Monte Carlo algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that the use of parallel Markov chain Monte Carlo will be instrumental in revolutionizing the computational tools for genomic selection programs.
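
    A minimal sketch of the multiple-chains approach described above, assuming a toy standard-normal posterior in place of a real genomic model; the chain count, proposal scale, and burn-in fraction are illustrative choices.

```python
import numpy as np
from multiprocessing import Pool

def run_chain(args):
    """One independent random-walk Metropolis chain (toy normal target)."""
    seed, n_iter = args
    rng = np.random.default_rng(seed)
    x, chain = 0.0, []
    for _ in range(n_iter):
        prop = x + rng.normal(0.0, 1.0)               # symmetric proposal
        # Toy target: standard normal stands in for a real genomic posterior
        if np.log(rng.random()) < 0.5 * (x**2 - prop**2):
            x = prop
        chain.append(x)
    return np.array(chain)

if __name__ == "__main__":
    n_chains, n_iter = 4, 10_000
    with Pool(n_chains) as pool:                      # one process per chain
        chains = pool.map(run_chain, [(s, n_iter) for s in range(n_chains)])
    pooled = np.concatenate([c[n_iter // 2:] for c in chains])  # drop burn-in
    print(pooled.mean(), pooled.std())
```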

  6. Parallel Markov chain Monte Carlo - bridging the gap to high-performance Bayesian computation in animal breeding and genetics

    PubMed Central

    2012-01-01

    Background Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from serial computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Results Parallel Markov chain Monte Carlo algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Conclusions Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that the use of parallel Markov chain Monte Carlo will be instrumental in revolutionizing the computational tools for genomic selection programs. PMID:23009363

  7. Analytic continuation of quantum Monte Carlo data by stochastic analytical inference.

    PubMed

    Fuchs, Sebastian; Pruschke, Thomas; Jarrell, Mark

    2010-05-01

    We present an algorithm for the analytic continuation of imaginary-time quantum Monte Carlo data which is strictly based on principles of Bayesian statistical inference. Within this framework we are able to obtain an explicit expression for the calculation of a weighted average over possible energy spectra, which can be evaluated by standard Monte Carlo simulations, yielding, as a by-product, the distribution function of the regularization parameter. Our algorithm thus avoids the usual ad hoc assumptions introduced in similar algorithms to fix the regularization parameter. We apply the algorithm to imaginary-time quantum Monte Carlo data and compare the resulting energy spectra with those from a standard maximum-entropy calculation.

  8. Particle tracking acceleration via signed distance fields in direct-accelerated geometry Monte Carlo

    DOE PAGES

    Shriwise, Patrick C.; Davis, Andrew; Jacobson, Lucas J.; ...

    2017-08-26

    Computer-aided design (CAD)-based Monte Carlo radiation transport is of value to the nuclear engineering community for its ability to conduct transport on high-fidelity models of nuclear systems, but it is more computationally expensive than native geometry representations. This work describes the adaptation of a rendering data structure, the signed distance field, as a geometric query tool for accelerating CAD-based transport in the direct-accelerated geometry Monte Carlo toolkit. Demonstrations of its effectiveness are shown for several problems. The beginnings of a predictive model for the data structure's utilization, based on various problem parameters, are also introduced.
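
    The core geometric trick can be sketched briefly: a signed distance field reports, at any point, a radius within which no surface can lie, so a particle may advance by that amount without any surface checks (sphere tracing). The sphere SDF and step limits below are illustrative assumptions, not the toolkit's API.

```python
import numpy as np

def sphere_sdf(p, center=np.zeros(3), radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(p - center) - radius

def distance_to_boundary(origin, direction, sdf, eps=1e-6, max_steps=256):
    """March along a unit-length direction; each SDF query returns a
    radius within which no surface can lie, so that much travel is safe."""
    t = 0.0
    for _ in range(max_steps):
        d = abs(sdf(origin + t * direction))
        if d < eps:                # surface reached to within tolerance
            return t
        t += d                     # safe step: no surface closer than d
    return t                       # no boundary found within max_steps
```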

  9. Kullback-Leibler information function and the sequential selection of experiments to discriminate among several linear models

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    The error variance of the process, prior multivariate normal distributions of the parameters of the models, and prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed, and the next experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.

  10. Joint Data Assimilation and Parameter Calibration in on-line groundwater modelling using Sequential Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Ramgraber, M.; Schirmer, M.

    2017-12-01

    As computational power grows and wireless sensor networks find their way into common practice, it becomes increasingly feasible to pursue on-line numerical groundwater modelling. The reconciliation of model predictions with sensor measurements often necessitates the application of Sequential Monte Carlo (SMC) techniques, most prominently represented by the Ensemble Kalman Filter. In the pursuit of on-line predictions it seems advantageous to transcend the scope of pure data assimilation and incorporate on-line parameter calibration as well. Unfortunately, the interplay between shifting model parameters and transient states is non-trivial. Several recent publications (e.g. Chopin et al., 2013, Kantas et al., 2015) in the field of statistics discuss potential algorithms addressing this issue. However, most of these are computationally intractable for on-line application. In this study, we investigate to what extent compromises between mathematical rigour and computational restrictions can be made within the framework of on-line numerical modelling of groundwater. Preliminary studies are conducted in a synthetic setting, with the goal of transferring the conclusions drawn into application in a real-world setting. To this end, a wireless sensor network has been established in the valley aquifer around Fehraltorf, characterized by a highly dynamic groundwater system and located about 20 km east of Zürich, Switzerland. By providing continuous probabilistic estimates of the state and parameter distribution, a steady base for branched-off predictive scenario modelling could be established, providing water authorities with advanced tools for assessing the impact of groundwater management practices. Chopin, N., Jacob, P.E. and Papaspiliopoulos, O. (2013): SMC2: an efficient algorithm for sequential analysis of state space models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75 (3), p. 397-426. Kantas, N., Doucet, A., Singh, S.S., Maciejowski, J., and Chopin, N. (2015): On Particle Methods for Parameter Estimation in State-Space Models. Statistical Science, 30 (3), p. 328-351.
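
    One common compromise mentioned above, the Ensemble Kalman Filter applied to an augmented state vector, can be sketched as follows; the scalar-observation form and the perturbed-observation variant are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_step(ensemble, obs, obs_std, h):
    """One stochastic EnKF analysis step on an augmented vector
    [state, parameter]; a scalar observation of the state also updates
    the parameter through the ensemble cross-covariance."""
    n = ensemble.shape[1]                               # ensemble size
    hx = np.array([h(m) for m in ensemble.T])           # predicted observations
    perturbed = obs + rng.normal(0.0, obs_std, size=n)  # perturbed observations
    anom = ensemble - ensemble.mean(axis=1, keepdims=True)
    hx_anom = hx - hx.mean()
    cov_xy = anom @ hx_anom / (n - 1)
    cov_yy = hx_anom @ hx_anom / (n - 1) + obs_std**2
    gain = cov_xy / cov_yy                              # Kalman gain
    return ensemble + np.outer(gain, perturbed - hx)
```

    Stacking, say, a log-conductivity beneath a hydraulic head in each ensemble member lets head measurements recalibrate the parameter on-line, which is the joint assimilation-calibration idea the abstract describes.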

  11. Simulation-Based Model Checking for Nondeterministic Systems and Rare Events

    DTIC Science & Technology

    2016-03-24

    year, we have investigated AO* search and Monte Carlo Tree Search algorithms to complement and enhance CMU's SMCMDP. 1 Final Report, March 14... tree, so we can use it to find the probability of reachability for a property in PRISM's Probabilistic LTL. By finding the maximum probability of... savings, particularly when handling very large models. 2.3 Monte Carlo Tree Search The Monte Carlo sampling process in SMCMDP can take a long time to

  12. Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits

    NASA Astrophysics Data System (ADS)

    Hoogland, Jiri; Kleiss, Ronald

    1997-04-01

    In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
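
    A quick numerical check of this claim can be sketched with SciPy's randomized Sobol sequences, comparing the spread of integration errors for pseudo-random and scrambled quasi-random point sets; the smooth product integrand (with known integral 1) and the sample sizes are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import qmc

# Smooth test integrand on [0,1]^5 with known integral 1
f = lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=1)

n, dim, reps = 1024, 5, 200
rng = np.random.default_rng(0)
mc_err, qmc_err = [], []
for r in range(reps):
    x_mc = rng.random((n, dim))                                # pseudo-random
    x_qmc = qmc.Sobol(d=dim, scramble=True, seed=r).random(n)  # randomized QMC
    mc_err.append(f(x_mc).mean() - 1.0)
    qmc_err.append(f(x_qmc).mean() - 1.0)
print("MC  error spread:", np.std(mc_err))
print("QMC error spread:", np.std(qmc_err))   # typically much smaller
```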

  13. Accelerate quasi Monte Carlo method for solving systems of linear algebraic equations through shared memory

    NASA Astrophysics Data System (ADS)

    Lai, Siyan; Xu, Ying; Shao, Bo; Guo, Menghan; Lin, Xiaola

    2017-04-01

    In this paper we study the Monte Carlo method for solving systems of linear algebraic equations (SLAE) based on shared memory. Previous research demonstrated that GPUs can effectively speed up this computation. Our purpose is to optimize the Monte Carlo simulation specifically for the GPU memory architecture. Random numbers are organized and stored in shared memory, which accelerates the parallel algorithm. Bank conflicts can be avoided by our Collaborative Thread Arrays (CTA) scheme. The results of experiments show that the shared-memory-based strategy can speed up the computations by more than 3X in the best case.
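
    The underlying Monte Carlo method for linear systems can be sketched on the CPU, without the shared-memory optimization: random walks estimate one component of the Neumann-series solution of x = Hx + c, assuming the spectral radius of |H| is below one. Function and variable names are hypothetical.

```python
import numpy as np

def mc_component(H, c, i, n_walks=20_000, max_steps=100, seed=0):
    """Estimate component i of the solution of x = H x + c via random
    walks over the Neumann series x = c + H c + H^2 c + ...
    (convergence assumes the spectral radius of |H| is below one)."""
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    p = np.abs(H) / np.abs(H).sum(axis=1, keepdims=True)  # transition probabilities
    total = 0.0
    for _ in range(n_walks):
        k, weight, score = i, 1.0, c[i]
        for _ in range(max_steps):          # truncate long walks
            j = rng.choice(n, p=p[k])
            weight *= H[k, j] / p[k, j]     # importance correction
            score += weight * c[j]
            k = j
            if abs(weight) < 1e-8:
                break
        total += score
    return total / n_walks
```

    For a system Ax = b one first rewrites it in fixed-point form, for instance via the Jacobi split H = I - D⁻¹A and c = D⁻¹b with D the diagonal of A.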

  14. Monte Carlo-based searching as a tool to study carbohydrate structure

    USDA-ARS?s Scientific Manuscript database

    A torsion angle-based Monte-Carlo searching routine was developed and applied to several carbohydrate modeling problems. The routine was developed as a Unix shell script that calls several programs, which allows it to be interfaced with multiple potential functions and various functions for evaluat...

  15. Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Degroh, Kim K.; Sechkar, Edward A.

    1992-01-01

    Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) will assist in understanding the mechanisms involved, and will lead to improved reliability in predicting in-space durability of materials based on ground laboratory testing. A computational simulation of atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of assumed mechanistic behavior of atomic oxygen and results of both ground laboratory and LDEF data, a predictive Monte Carlo model was developed which simulates the oxidation processes that occur on polymers with applied protective coatings that have defects. The use of results from LDEF's high-fluence, directed-ram atomic oxygen exposure has enabled mechanistic implications to be made by adjusting Monte Carlo modeling assumptions to match observed results based on scanning electron microscopy. Modeling assumptions, implications, and predictions are presented, along with comparison of observed ground laboratory and LDEF results.

  16. pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    White, J.; Brakefield, L. K.

    2015-12-01

    The null-space Monte Carlo technique is a non-linear uncertainty analysis technique that is well-suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files and even a text editor. pyNSMC is an open-source Python module that automates the workflow of null-space Monte Carlo uncertainty analyses. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a Python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of object-oriented design facilities in Python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease-of-use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo for a synthetic groundwater model with hundreds of estimable parameters.
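
    The central projection step of null-space Monte Carlo can be sketched independently of PEST and pyEMU; the SVD rank cutoff and perturbation scale below are illustrative assumptions.

```python
import numpy as np

def nsmc_realizations(jacobian, base_params, n_real=100, scale=1.0, seed=0):
    """Draw parameter sets that, to first order, leave the calibrated
    model outputs unchanged: random perturbations are confined to the
    null space of the Jacobian of observations w.r.t. parameters."""
    rng = np.random.default_rng(seed)
    _, s, vt = np.linalg.svd(jacobian, full_matrices=True)
    rank = int(np.sum(s > 1e-10 * s[0]))       # numerical rank cutoff
    null_basis = vt[rank:].T                   # columns span the null space
    coeffs = rng.normal(0.0, scale, size=(null_basis.shape[1], n_real))
    return base_params[:, None] + null_basis @ coeffs  # one column per realization
```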

  17. WE-AB-204-11: Development of a Nuclear Medicine Dosimetry Module for the GPU-Based Monte Carlo Code ARCHER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, T; Lin, H; Xu, X

    Purpose: To develop a nuclear medicine dosimetry module for the GPU-based Monte Carlo code ARCHER. Methods: We have developed a nuclear medicine dosimetry module for the fast Monte Carlo code ARCHER. The coupled electron-photon Monte Carlo transport kernel included in ARCHER is built upon the Dose Planning Method code (DPM). The developed module manages the radioactive decay simulation by consecutively tracking several types of radiation on a per-disintegration basis using the statistical sampling method. Optimization techniques such as persistent threads and prefetching are studied and implemented. The developed module is verified against the VIDA code, which is based on the Geant4 toolkit and has previously been verified against OLINDA/EXM. A voxelized geometry is used in the preliminary test: a sphere made of ICRP soft tissue is surrounded by a box filled with water. A uniform activity distribution of I-131 is assumed in the sphere. Results: The self-absorption dose factors (mGy/MBq·s) of the sphere with varying diameters are calculated by ARCHER and VIDA respectively. ARCHER's results are in agreement with VIDA's, which were obtained from a previous publication. VIDA takes hours of CPU time to finish the computation, while ARCHER takes 4.31 seconds for the 12.4-cm uniform activity sphere case. For a fairer CPU-GPU comparison, more effort will be made to eliminate the algorithmic differences. Conclusion: The coupled electron-photon Monte Carlo code ARCHER has been extended to radioactive decay simulation for nuclear medicine dosimetry. The developed code exhibits good performance in our preliminary test. The GPU-based Monte Carlo code is developed with grant support from the National Institute of Biomedical Imaging and Bioengineering through an R01 grant (R01EB015478).

  18. Propagation of uncertainty in nasal spray in vitro performance models using Monte Carlo simulation: Part II. Error propagation during product performance modeling.

    PubMed

    Guo, Changning; Doub, William H; Kauffman, John F

    2010-08-01

    Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption, and extended the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
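
    A minimal sketch of the two-sided error propagation described above, assuming a linear DOE model and Gaussian noise on both the design matrix and the responses; the function name and noise levels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def coef_uncertainty(X, y, x_std, y_std, n_sim=5000):
    """Refit a linear DOE model to data perturbed by both input and
    response measurement noise; the spread of the refitted coefficients
    estimates their uncertainty."""
    coefs = []
    for _ in range(n_sim):
        Xp = X + rng.normal(0.0, x_std, size=X.shape)   # input variation
        yp = y + rng.normal(0.0, y_std, size=y.shape)   # response variation
        coefs.append(np.linalg.lstsq(Xp, yp, rcond=None)[0])
    coefs = np.array(coefs)
    return coefs.mean(axis=0), coefs.std(axis=0)
```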

  19. Parallelization and implementation of approximate root isolation for nonlinear system by Monte Carlo

    NASA Astrophysics Data System (ADS)

    Khosravi, Ebrahim

    1998-12-01

    This dissertation solves a fundamental problem of isolating the real roots of nonlinear systems of equations using a Monte Carlo method published by Bush Jones. This algorithm requires only function values and can be applied readily to complicated systems of transcendental functions. The implementation of this sequential algorithm provides scientists with the means to utilize function analysis in mathematics or other fields of science. The algorithm, however, is so computationally intensive that it is limited to a very small set of variables, making it infeasible for large systems of equations. A computational technique was also needed to investigate a methodology for preventing the algorithm from converging to the same root along different paths of computation. The research provides techniques for improving the efficiency and correctness of the algorithm. The sequential algorithm for this technique was corrected and a parallel algorithm is presented. This parallel method has been formally analyzed and is compared with other known methods of root isolation. The effectiveness, efficiency, and enhanced overall performance of the parallel processing of the program in comparison to sequential processing are discussed. The message-passing model was used for this parallel processing, and it is presented and implemented on the Intel/860 MIMD architecture. The parallel processing proposed in this research has been implemented in an ongoing high energy physics experiment: this algorithm has been used to track neutrinos in the Super-K detector. This experiment is located in Japan, and data can be processed on-line or off-line, locally or remotely.

  20. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    NASA Astrophysics Data System (ADS)

    Ritsch, E.; Atlas Collaboration

    2014-06-01

    The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production currently takes the largest share of the computing resources in use by ATLAS. In this document we describe the plans to overcome the computing resource limitations for large scale Monte Carlo production in the ATLAS Experiment for Run 2, and beyond. A number of fast detector simulation, digitization and reconstruction techniques are discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

  1. GE781: a Monte Carlo package for fixed target experiments

    NASA Astrophysics Data System (ADS)

    Davidenko, G.; Funk, M. A.; Kim, V.; Kuropatkin, N.; Kurshetsov, V.; Molchanov, V.; Rud, S.; Stutte, L.; Verebryusov, V.; Zukanovich Funchal, R.

    The Monte Carlo package for the fixed target experiment B781 at Fermilab, a third generation charmed baryon experiment, is described. This package is based on GEANT 3.21, ADAMO database and DAFT input/output routines.

  2. SU-E-T-481: Dosimetric Effects of Tissue Heterogeneity in Proton Therapy: Monte Carlo Simulation and Experimental Study Using Animal Tissue Phantoms.

    PubMed

    Liu, Y; Zheng, Y

    2012-06-01

    Accurate determination of proton dosimetric effect for tissue heterogeneity is critical in proton therapy. Proton beams have finite range and consequently tissue heterogeneity plays a more critical role in proton therapy. The purpose of this study is to investigate the tissue heterogeneity effect in proton dosimetry based on anatomical-based Monte Carlo simulation using animal tissues. Animal tissues including a pig head and beef bulk were used in this study. Both pig head and beef were scanned using a GE CT scanner with 1.25 mm slice thickness. A treatment plan was created using the CMS XiO treatment planning system (TPS) with a single proton spread-out Bragg peak (SOBP) beam. Radiochromic films were placed at the distal falloff region. Image guidance was used to align the phantom before proton beams were delivered according to the treatment plan. The same two CT sets were converted into Monte Carlo simulation models. The Monte Carlo simulated dose calculations with/without tissue composition were compared to TPS calculations and measurements. Based on the preliminary comparison, at the center of the SOBP plane, the Monte Carlo simulation dose without tissue composition agreed generally well with the TPS calculation. In the distal falloff region, the dose difference was large, and an isodose line shift of about 2 mm was observed when tissue composition was taken into account. The detailed comparison of dose distributions between Monte Carlo simulation, TPS calculations and measurements is underway. Accurate proton dose calculations are challenging in proton treatment planning for heterogeneous tissues. Tissue heterogeneity and tissue composition may lead to isodose line shifts of up to a few millimeters in the distal falloff region. By simulating detailed particle transport and energy deposition, Monte Carlo simulations provide a verification method in proton dose calculation where inhomogeneous tissues are present. © 2012 American Association of Physicists in Medicine.

  3. Monte Carlo verification of radiotherapy treatments with CloudMC.

    PubMed

    Miras, Hector; Jiménez, Rubén; Perales, Álvaro; Terrón, José Antonio; Bertolet, Alejandro; Ortiz, Antonio; Macías, José

    2018-06-27

    A new implementation has been made on CloudMC, a cloud-based platform presented in a previous work, in order to provide services for radiotherapy treatment verification by means of Monte Carlo in a fast, easy and economical way. A description of the architecture of the application and the new developments implemented is presented together with the results of the tests carried out to validate its performance. CloudMC has been developed over the Microsoft Azure cloud. It is based on a map/reduce implementation for distributing Monte Carlo calculations over a dynamic cluster of virtual machines in order to reduce calculation time. CloudMC has been updated with new methods to read and process the information related to radiotherapy treatment verification: CT image set, treatment plan, structures and dose distribution files in DICOM format. Some tests have been designed in order to determine, for the different tasks, the most suitable type of virtual machine from those available in Azure. Finally, the performance of Monte Carlo verification in CloudMC is studied through three real cases that involve different treatment techniques, linac models and Monte Carlo codes. Considering computational and economic factors, D1_v2 and G1 virtual machines were selected as the default type for the Worker Roles and the Reducer Role respectively. Calculation times up to 33 min and costs of 16 € were achieved for the verification cases presented when a statistical uncertainty below 2% (2σ) was required. The costs were reduced to 3-6 € when uncertainty requirements were relaxed to 4%. Advantages like high computational power, scalability, easy access and the pay-per-usage model make Monte Carlo cloud-based solutions, like the one presented in this work, an important step forward in solving the long-standing problem of truly introducing Monte Carlo algorithms into the daily routine of the radiotherapy planning process.
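
    The map/reduce pattern itself is easy to sketch outside Azure: independent seeded batches of histories are simulated in parallel (map) and their tallies are combined (reduce). The toy voxel dose tally below stands in for a real Monte Carlo engine; all names are hypothetical.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def run_histories(args):
    """Map step: one seeded batch of histories returning a dose tally.
    The toy tally below stands in for a real Monte Carlo dose engine."""
    seed, n_hist = args
    rng = np.random.default_rng(seed)
    dose = np.zeros((10, 10))
    ix = rng.integers(0, 10, size=(n_hist, 2))          # random voxel hits
    np.add.at(dose, (ix[:, 0], ix[:, 1]), rng.exponential(1.0, n_hist))
    return dose

if __name__ == "__main__":
    batches = [(seed, 250_000) for seed in range(8)]    # independent seeds
    with ProcessPoolExecutor() as pool:
        tallies = list(pool.map(run_histories, batches))
    mean_dose = sum(tallies) / len(tallies)             # reduce step
    print(mean_dose.sum())
```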

  4. Collision of Physics and Software in the Monte Carlo Application Toolkit (MCATK)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sweezy, Jeremy Ed

    2016-01-21

    The topic is presented in a series of slides organized as follows: MCATK overview, development strategy, available algorithms, problem modeling (sources, geometry, data, tallies), parallelism, miscellaneous tools/features, example MCATK application, recent areas of research, and summary and future work. MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library with continuous energy neutron and photon transport. Designed to build specialized applications and to provide new functionality in existing general-purpose Monte Carlo codes like MCNP, it reads ACE formatted nuclear data generated by NJOY. The motivation behind MCATK was to reduce costs. MCATK physics involves continuous energy neutron and gamma transport with multi-temperature treatment, static eigenvalue (k_eff and α) algorithms, a time-dependent algorithm, and fission chain algorithms. MCATK geometry includes mesh geometries and solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo application development, and numerous tools such as geometry and cross section plotters.

  5. Asteroid mass estimation with Markov-chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Siltala, L.; Granvik, M.

    2017-09-01

    We have developed a new Markov-chain Monte Carlo-based algorithm for asteroid mass estimation based on mutual encounters and tested it for several different asteroids. Our results are in line with previous literature values but suggest that uncertainties of prior estimates may be misleading as a consequence of using linearized methods.

  6. QUANTIFYING AGGREGATE CHLORPYRIFOS EXPOSURE AND DOSE TO CHILDREN USING A PHYSICALLY-BASED TWO-STAGE MONTE CARLO PROBABILISTIC MODEL

    EPA Science Inventory

    To help address the Food Quality Protection Act of 1996, a physically-based, two-stage Monte Carlo probabilistic model has been developed to quantify and analyze aggregate exposure and dose to pesticides via multiple routes and pathways. To illustrate model capabilities and ide...

  7. Hybrid dose calculation: a dose calculation algorithm for microbeam radiation therapy

    NASA Astrophysics Data System (ADS)

    Donzelli, Mattia; Bräuer-Krisch, Elke; Oelfke, Uwe; Wilkens, Jan J.; Bartzsch, Stefan

    2018-02-01

    Microbeam radiation therapy (MRT) is still a preclinical approach in radiation oncology that uses planar micrometre wide beamlets with extremely high peak doses, separated by a few hundred micrometre wide low dose regions. Abundant preclinical evidence demonstrates that MRT spares normal tissue more effectively than conventional radiation therapy, at equivalent tumour control. In order to launch first clinical trials, accurate and efficient dose calculation methods are an essential prerequisite. In this work a hybrid dose calculation approach is presented that is based on a combination of Monte Carlo and kernel based dose calculation. In various examples the performance of the algorithm is compared to purely Monte Carlo and purely kernel based dose calculations. The accuracy of the developed algorithm is comparable to conventional pure Monte Carlo calculations. In particular for inhomogeneous materials the hybrid dose calculation algorithm outperforms purely convolution based dose calculation approaches. It is demonstrated that the hybrid algorithm can efficiently calculate even complicated pencil beam and cross firing beam geometries. The required calculation times are substantially lower than for pure Monte Carlo calculations.

  8. Accurate quantification of fluorescent targets within turbid media based on a decoupled fluorescence Monte Carlo model.

    PubMed

    Deng, Yong; Luo, Zhaoyang; Jiang, Xu; Xie, Wenhao; Luo, Qingming

    2015-07-01

    We propose a method based on a decoupled fluorescence Monte Carlo model for constructing fluorescence Jacobians to enable accurate quantification of fluorescence targets within turbid media. The effectiveness of the proposed method is validated using two cylindrical phantoms enclosing fluorescent targets within homogeneous and heterogeneous background media. The results demonstrate that our method can recover relative concentrations of the fluorescent targets with higher accuracy than the perturbation fluorescence Monte Carlo method. This suggests that our method is suitable for quantitative fluorescence diffuse optical tomography, especially for in vivo imaging of fluorophore targets for diagnosis of different diseases and abnormalities.

  9. Model-Based Localization and Tracking Using Bluetooth Low-Energy Beacons

    PubMed Central

    Cemgil, Ali Taylan

    2017-01-01

    We introduce a high precision localization and tracking method that makes use of cheap Bluetooth low-energy (BLE) beacons only. We track the position of a moving sensor by integrating highly unreliable and noisy BLE observations streaming from multiple locations. A novel aspect of our approach is the development of an observation model, specifically tailored for received signal strength indicator (RSSI) fingerprints: a combination based on the optimal transport model of Wasserstein distance. The tracking results of the entire system are compared with alternative baseline estimation methods, such as nearest neighboring fingerprints and an artificial neural network. Our results show that highly accurate estimation from noisy Bluetooth data is practically feasible with an observation model based on Wasserstein distance interpolation combined with the sequential Monte Carlo (SMC) method for tracking. PMID:29109375
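
    The observation model can be sketched with SciPy's one-dimensional Wasserstein distance; mapping the distance to a log-likelihood through a scale parameter, as below, is an illustrative assumption rather than the authors' exact model.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def rssi_log_likelihood(observed_rssi, fingerprint_rssi, scale=2.0):
    """Score a candidate position by the 1-D Wasserstein (optimal
    transport) distance between the live RSSI sample and the stored
    fingerprint, mapped to a log-likelihood for SMC weighting."""
    d = wasserstein_distance(observed_rssi, fingerprint_rssi)
    return -d / scale      # smaller transport distance -> larger weight
```

    In an SMC tracker, each particle's importance weight would then be multiplied by exp(rssi_log_likelihood(...)) evaluated at its candidate position's fingerprint.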

  10. Model-Based Localization and Tracking Using Bluetooth Low-Energy Beacons.

    PubMed

    Daniş, F Serhan; Cemgil, Ali Taylan

    2017-10-29

    We introduce a high precision localization and tracking method that makes use of cheap Bluetooth low-energy (BLE) beacons only. We track the position of a moving sensor by integrating highly unreliable and noisy BLE observations streaming from multiple locations. A novel aspect of our approach is the development of an observation model, specifically tailored for received signal strength indicator (RSSI) fingerprints: a combination based on the optimal transport model of Wasserstein distance. The tracking results of the entire system are compared with alternative baseline estimation methods, such as nearest neighboring fingerprints and an artificial neural network. Our results show that highly accurate estimation from noisy Bluetooth data is practically feasible with an observation model based on Wasserstein distance interpolation combined with the sequential Monte Carlo (SMC) method for tracking.

  11. Evaluation of an analytic linear Boltzmann transport equation solver for high-density inhomogeneities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lloyd, S. A. M.; Ansbacher, W.; Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6

    2013-01-15

    Purpose: Acuros external beam (Acuros XB) is a novel dose calculation algorithm implemented through the ECLIPSE treatment planning system. The algorithm finds a deterministic solution to the linear Boltzmann transport equation, the same equation commonly solved stochastically by Monte Carlo methods. This work is an evaluation of Acuros XB, by comparison with Monte Carlo, for dose calculation applications involving high-density materials. Existing non-Monte Carlo clinical dose calculation algorithms, such as the analytic anisotropic algorithm (AAA), do not accurately model dose perturbations due to increased electron scatter within high-density volumes. Methods: Acuros XB, AAA, and EGSnrc-based Monte Carlo are used to calculate dose distributions from 18 MV and 6 MV photon beams delivered to a cubic water phantom containing a rectangular high density (4.0-8.0 g/cm³) volume at its center. The algorithms are also used to recalculate a clinical prostate treatment plan involving a unilateral hip prosthesis, originally evaluated using AAA. These results are compared graphically and numerically using gamma-index analysis. Radiochromic film measurements are presented to augment Monte Carlo and Acuros XB dose perturbation data. Results: Using a 2% and 1 mm gamma-analysis, between 91.3% and 96.8% of Acuros XB dose voxels containing greater than 50% the normalized dose were in agreement with Monte Carlo data for virtual phantoms involving 18 MV and 6 MV photons, stainless steel and titanium alloy implants and for on-axis and oblique field delivery. A similar gamma-analysis of AAA against Monte Carlo data showed between 80.8% and 87.3% agreement. Comparing Acuros XB and AAA evaluations of a clinical prostate patient plan involving a unilateral hip prosthesis, Acuros XB showed good overall agreement with Monte Carlo while AAA underestimated dose on the upstream medial surface of the prosthesis due to electron scatter from the high-density material. Film measurements support the dose perturbations demonstrated by Monte Carlo and Acuros XB data. Conclusions: Acuros XB is shown to perform as well as Monte Carlo methods and better than existing clinical algorithms for dose calculations involving high-density volumes.
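
    The gamma-index criterion used in this comparison can be sketched in one dimension: each reference point searches for the evaluated point minimizing the combined dose-difference / distance-to-agreement penalty. The 2%/1 mm tolerances mirror the abstract; the brute-force single-axis search is an illustrative simplification of the full 3-D analysis.

```python
import numpy as np

def gamma_index(dose_eval, dose_ref, coords, dd=0.02, dta=1.0):
    """Global 1-D gamma analysis: for each reference point, find the
    minimum combined dose-difference / distance-to-agreement penalty
    over all evaluated points (dd = 2%, dta = 1 mm by default)."""
    norm = dose_ref.max()
    gamma = np.empty_like(dose_ref, dtype=float)
    for i, (x_r, d_r) in enumerate(zip(coords, dose_ref)):
        dist2 = ((coords - x_r) / dta) ** 2
        dose2 = ((dose_eval - d_r) / (dd * norm)) ** 2
        gamma[i] = np.sqrt(np.min(dist2 + dose2))
    return gamma           # pass rate: np.mean(gamma <= 1.0)
```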

  12. The X-43A Six Degree of Freedom Monte Carlo Analysis

    NASA Technical Reports Server (NTRS)

    Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger

    2008-01-01

    This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A inflight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.

  13. The X-43A Six Degree of Freedom Monte Carlo Analysis

    NASA Technical Reports Server (NTRS)

    Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger; Richard, Michael

    2007-01-01

    This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A in-flight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.

  14. Landslide Failure Likelihoods Estimated Through Analysis of Suspended Sediment and Streamflow Time Series Data

    NASA Astrophysics Data System (ADS)

    Stark, C. P.; Rudd, S.; Lall, U.; Hovius, N.; Dadson, S.; Chen, M.-C.

    Off-Axis DOAS measurements with non-artificial scattered light, based upon the renowned DOAS technique, allow one to optimize the sensitivity of the technique for the trace gas profile in question by strongly increasing the light's path through the relevant atmospheric layers. Multi-Axis (MAX) DOAS instruments probe several directions simultaneously or sequentially to increase the spatial resolution. Several devices (ground-based, airborne and ship-based) are operated by our group in the framework of the SCIAMACHY validation. Radiative transfer models are an essential requirement for the interpretation of these measurements and their conversion into detailed profile data. Apart from some existing Monte Carlo models, most codes use analytical algorithms to solve the radiative transfer equation for given atmospheric conditions. For specific circumstances, e.g. photon scattering within clouds, these approaches are not efficient enough to provide sufficient accuracy. Horizontal gradients in atmospheric parameters also have to be taken into account. To meet the needs of measurement situations for all kinds of scattered-light DOAS platforms, a three-dimensional fully spherical Monte Carlo model was devised. Here we present air mass factors (AMF) to calculate vertical column densities (VCD) from measured slant column densities (SCD). Sensitivity studies on the influence of the wavelength and telescope direction used, of the altitude of profile layers, albedo, refraction and basic aerosols are shown. Modelled intensity series are also compared with radiometer data.

  15. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badal, A; Zbijewski, W; Bolch, W

    Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation rates on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments.
This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work to replicate the simulation in detail. To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and “sparse sampling” will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments. Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.

  16. Monte Carlo simulation: Its status and future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murtha, J.A.

    1997-04-01

    Monte Carlo simulation is a statistics-based analysis tool that yields probability-vs.-value relationships for key parameters, including oil and gas reserves, capital exposure, and various economic yardsticks, such as net present value (NPV) and return on investment (ROI). Monte Carlo simulation is a part of risk analysis and is sometimes performed in conjunction with or as an alternative to decision tree analysis. The objectives are (1) to define Monte Carlo simulation in a more general context of risk and decision analysis; (2) to provide some specific applications, which can be interrelated; (3) to respond to some of the criticisms; (4) to offer some cautions about abuses of the method and recommend how to avoid the pitfalls; and (5) to predict what the future has in store.

  17. Monte Carlo Simulations of Microchannel Plate Based, Fast-Gated X-Ray Imagers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, M.; Kruschwitz, C.

    2011-02-01

    This is a chapter in the book Applications of Monte Carlo Method in Science and Engineering, edited by Shaul Mordechai (ISBN 978-953-307-691-1, hardcover, 950 pages, InTech, February 2011).

  18. Monte Carlo calculations of the impact of a hip prosthesis on the dose distribution

    NASA Astrophysics Data System (ADS)

    Buffard, Edwige; Gschwind, Régine; Makovicka, Libor; David, Céline

    2006-09-01

    Because of the ageing of the population, an increasing number of patients with hip prostheses are undergoing pelvic irradiation. Treatment planning systems (TPS) currently available are not always able to accurately predict the dose distribution around such implants. In fact, only Monte Carlo simulation has the ability to precisely calculate the impact of a hip prosthesis during radiotherapeutic treatment. Monte Carlo phantoms were developed to evaluate the dose perturbations during pelvic irradiation. A first model, constructed with the DOSXYZnrc usercode, was elaborated to determine the dose increase at the tissue-metal interface as well as the impact of the material coating the prosthesis. Next, CT-based phantoms were prepared, using the usercode CTCreate, to estimate the influence of the geometry and the composition of such implants on the beam attenuation. Thanks to a program that we developed, the study was carried out with CT-based phantoms containing a hip prosthesis without metal artefacts. Therefore, anthropomorphic phantoms allowed better definition of both patient anatomy and the hip prosthesis in order to better reproduce the clinical conditions of pelvic irradiation. The Monte Carlo results revealed the impact of certain coatings such as PMMA on dose enhancement at the tissue-metal interface. Monte Carlo calculations in CT-based phantoms highlighted the marked influence of the implant's composition, its geometry as well as its position within the beam on dose distribution.

  19. Proton Upset Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  20. SABRINA - An interactive geometry modeler for MCNP (Monte Carlo Neutron Photon)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, J.T.; Murphy, J.

    SABRINA is an interactive three-dimensional geometry modeler developed to produce complicated models for the Los Alamos Monte Carlo Neutron Photon program MCNP. SABRINA produces line drawings and color-shaded drawings for a wide variety of interactive graphics terminals. It is used as a geometry preprocessor in model development and as a Monte Carlo particle-track postprocessor in the visualization of complicated particle transport problems. SABRINA is written in Fortran 77 and is based on the Los Alamos Common Graphics System, CGS. 5 refs., 2 figs.

  1. A Dasymetric-Based Monte Carlo Simulation Approach to the Probabilistic Analysis of Spatial Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morton, April M; Piburn, Jesse O; McManamay, Ryan A

    2017-01-01

    Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.

  2. Metis: A Pure Metropolis Markov Chain Monte Carlo Bayesian Inference Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bates, Cameron Russell; Mckigney, Edward Allen

    The use of Bayesian inference in data analysis has become the standard for large scientific experiments [1, 2]. The Monte Carlo Codes Group (XCP-3) at Los Alamos has developed a simple set of algorithms currently implemented in C++ and Python to easily perform flat-prior Markov chain Monte Carlo Bayesian inference with pure Metropolis sampling. These implementations are designed to be user friendly and extensible for customization based on specific application requirements. This document describes the algorithmic choices made and presents two use cases.
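
    The pure Metropolis kernel at the heart of such a library fits in a few lines; this sketch assumes a symmetric Gaussian random-walk proposal and a user-supplied log-posterior (with a flat prior, simply the log-likelihood), and is not the Metis API.

```python
import numpy as np

def metropolis(log_post, x0, n_iter=50_000, step=0.5, seed=0):
    """Pure random-walk Metropolis with a symmetric Gaussian proposal.
    With a flat prior, log_post is simply the log-likelihood."""
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    lp = log_post(x)
    chain = np.empty((n_iter, x.size))
    for i in range(n_iter):
        prop = x + step * rng.normal(size=x.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# e.g. chain = metropolis(lambda x: -0.5 * np.sum(x**2), x0=[0.0, 0.0])
```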

  3. Radial-based tail methods for Monte Carlo simulations of cylindrical interfaces

    NASA Astrophysics Data System (ADS)

    Goujon, Florent; Bêche, Bruno; Malfreyt, Patrice; Ghoufi, Aziz

    2018-03-01

    In this work, we implement for the first time the radial-based tail methods for Monte Carlo simulations of cylindrical interfaces. The efficiency of this method is then evaluated through the calculation of surface tension and coexisting properties. We show that the inclusion of tail corrections during the course of the Monte Carlo simulation impacts the coexisting and the interfacial properties. We establish that the long-range corrections to the surface tension are of the same order of magnitude as those obtained for a planar interface. We show that the slab-based tail method does not alter the location of the Gibbs equimolar dividing surface. Additionally, a non-monotonic behavior of the surface tension is exhibited as a function of the radius of the equimolar dividing surface.

  4. Sequential Monte Carlo for Maximum Weight Subgraphs with Application to Solving Image Jigsaw Puzzles.

    PubMed

    Adluru, Nagesh; Yang, Xingwei; Latecki, Longin Jan

    2015-05-01

    We consider a problem of finding maximum weight subgraphs (MWS) that satisfy hard constraints in a weighted graph. The constraints specify the graph nodes that must belong to the solution as well as mutual exclusions of graph nodes, i.e., pairs of nodes that cannot belong to the same solution. Our main contribution is a novel inference approach for solving this problem in a sequential Monte Carlo (SMC) sampling framework. Usually in an SMC framework there is a natural ordering of the states of the samples. The order typically depends on observations about the states or on the annealing setup used. In many applications (e.g., image jigsaw puzzle problems), all observations (e.g., puzzle pieces) are given at once and it is hard to define a natural ordering. Therefore, we relax the assumption of having ordered observations about states and propose a novel SMC algorithm for obtaining maximum a posteriori estimate of a high-dimensional posterior distribution. This is achieved by exploring different orders of states and selecting the most informative permutations in each step of the sampling. Our experimental results demonstrate that the proposed inference framework significantly outperforms loopy belief propagation in solving the image jigsaw puzzle problem. In particular, our inference quadruples the accuracy of the puzzle assembly compared to that of loopy belief propagation.

  5. Sequential Monte Carlo for Maximum Weight Subgraphs with Application to Solving Image Jigsaw Puzzles

    PubMed Central

    Adluru, Nagesh; Yang, Xingwei; Latecki, Longin Jan

    2015-01-01

    We consider a problem of finding maximum weight subgraphs (MWS) that satisfy hard constraints in a weighted graph. The constraints specify the graph nodes that must belong to the solution as well as mutual exclusions of graph nodes, i.e., pairs of nodes that cannot belong to the same solution. Our main contribution is a novel inference approach for solving this problem in a sequential Monte Carlo (SMC) sampling framework. Usually in an SMC framework there is a natural ordering of the states of the samples. The order typically depends on observations about the states or on the annealing setup used. In many applications (e.g., image jigsaw puzzle problems), all observations (e.g., puzzle pieces) are given at once and it is hard to define a natural ordering. Therefore, we relax the assumption of having ordered observations about states and propose a novel SMC algorithm for obtaining maximum a posteriori estimate of a high-dimensional posterior distribution. This is achieved by exploring different orders of states and selecting the most informative permutations in each step of the sampling. Our experimental results demonstrate that the proposed inference framework significantly outperforms loopy belief propagation in solving the image jigsaw puzzle problem. In particular, our inference quadruples the accuracy of the puzzle assembly compared to that of loopy belief propagation. PMID:26052182

  6. Shielding analyses of an AB-BNCT facility using Monte Carlo simulations and simplified methods

    NASA Astrophysics Data System (ADS)

    Lai, Bo-Lun; Sheu, Rong-Jiun

    2017-09-01

    Accurate Monte Carlo simulations and simplified methods were used to investigate the shielding requirements of a hypothetical accelerator-based boron neutron capture therapy (AB-BNCT) facility that included an accelerator room and a patient treatment room. The epithermal neutron beam for BNCT purposes was generated by coupling a neutron production target with a specially designed beam shaping assembly (BSA), which was embedded in the partition wall between the two rooms. Neutrons were produced from a beryllium target bombarded by 1-mA 30-MeV protons. The MCNP6-generated surface sources around all the exterior surfaces of the BSA were established to facilitate repeated Monte Carlo shielding calculations. In addition, three simplified models based on a point-source line-of-sight approximation were developed and their predictions were compared with the reference Monte Carlo results. The comparison determined which model resulted in better dose estimation, forming the basis of future design activities for the first AB-BNCT facility in Taiwan.
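
    A point-source line-of-sight model of the kind referenced here reduces to inverse-square geometry multiplied by exponential attenuation through each shield layer; the function name, buildup factor, and units below are illustrative assumptions, not one of the paper's three models.

```python
import numpy as np

def dose_rate_los(source_strength, distance_cm, mu_cm, thickness_cm, buildup=1.0):
    """Point-source line-of-sight estimate: inverse-square geometry,
    exponentially attenuated through each shield layer (mu_cm are
    linear attenuation coefficients in 1/cm, thickness_cm in cm)."""
    attenuation = np.exp(-np.sum(np.asarray(mu_cm) * np.asarray(thickness_cm)))
    return buildup * source_strength * attenuation / (4.0 * np.pi * distance_cm**2)
```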

  7. ME(SSY)**2: Monte Carlo Code for Star Cluster Simulations

    NASA Astrophysics Data System (ADS)

    Freitag, Marc Dewi

    2013-02-01

    ME(SSY)**2 stands for “Monte-carlo Experiments with Spherically SYmmetric Stellar SYstems.” This code simulates the long-term evolution of spherical clusters of stars; it was devised specifically to treat dense galactic nuclei. It is based on the pioneering Monte Carlo scheme proposed by Hénon in the 1970s and includes all relevant physical ingredients (2-body relaxation, stellar mass spectrum, collisions, tidal disruption, ...). It is basically a Monte Carlo resolution of the Fokker-Planck equation. It can cope with any stellar mass spectrum or velocity distribution. Being a particle-based method, it also allows one to take stellar collisions into account in a very realistic way. This unique code, featuring the most important physical processes, allows million-particle simulations spanning a Hubble time in a few CPU days on standard personal computers, and provides a wealth of data rivaled only by N-body simulations. The current version of the software requires the use of routines from the "Numerical Recipes in Fortran 77" (http://www.nrbook.com/a/bookfpdf.php).

  8. Monte-Carlo-based phase retardation estimator for polarization sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Duan, Lian; Makita, Shuichi; Yamanari, Masahiro; Lim, Yiheng; Yasuno, Yoshiaki

    2011-08-01

    A Monte-Carlo-based phase retardation estimator is developed to correct the systematic error in phase retardation measurement by polarization sensitive optical coherence tomography (PS-OCT). Recent research has revealed that the phase retardation measured by PS-OCT has a distribution that is neither symmetric nor centered at the true value. Hence, a standard mean estimator gives us erroneous estimations of phase retardation, and it degrades the performance of PS-OCT for quantitative assessment. In this paper, the noise property in phase retardation is investigated in detail by Monte-Carlo simulation and experiments. A distribution transform function is designed to eliminate the systematic error by using the result of the Monte-Carlo simulation. This distribution transformation is followed by a mean estimator. This process provides a significantly better estimation of phase retardation than a standard mean estimator. This method is validated both by numerical simulations and experiments. The application of this method to in vitro and in vivo biological samples is also demonstrated.
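
    As a rough stand-in for the estimator described above (the paper's actual noise model and distribution transform are more elaborate), the Python sketch below simulates a biased measurement process by Monte Carlo, tabulates the mean measured retardation as a function of the true value, and then inverts that calibration curve to correct new measurements. The reflected-Gaussian noise model and the SNR value are assumptions made for the example.

      import numpy as np

      rng = np.random.default_rng(1)

      def measure(delta_true, snr, n):
          """Toy noise model: estimates are reflected back into [0, pi/2],
          which biases the sample mean near the edges of the domain."""
          noisy = delta_true + rng.normal(0.0, 1.0 / snr, size=n)
          noisy = np.abs(noisy)                             # reflect at 0
          return np.where(noisy > np.pi / 2, np.pi - noisy, noisy)

      # 1) Monte Carlo calibration: true retardation -> mean measured value.
      true_grid = np.linspace(0.0, np.pi / 2, 91)
      mean_measured = np.array([measure(d, snr=5.0, n=20000).mean()
                                for d in true_grid])

      # 2) Corrected estimator: invert the (monotonic) calibration curve.
      def corrected_mean(samples):
          return np.interp(samples.mean(), mean_measured, true_grid)

      samples = measure(0.1, snr=5.0, n=500)                # near-edge case
      print("naive:", samples.mean(), "corrected:", corrected_mean(samples))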

  9. Calculation of absorbed fractions to human skeletal tissues due to alpha particles using the Monte Carlo and 3-D chord-based transport techniques.

    PubMed

    Hunt, J G; Watchman, C J; Bolch, W E

    2007-01-01

    Absorbed fraction (AF) calculations to the human skeletal tissues due to alpha particles are of interest to the internal dosimetry of occupationally exposed workers and members of the public. The transport of alpha particles through the skeletal tissue is complicated by the detailed and complex microscopic histology of the skeleton. In this study, both Monte Carlo and chord-based techniques were applied to the transport of alpha particles through 3-D microCT images of the skeletal microstructure of trabecular spongiosa. The Monte Carlo program used was 'Visual Monte Carlo--VMC'. VMC simulates the emission of the alpha particles and their subsequent energy deposition track. The second method applied to alpha transport is the chord-based technique, which randomly generates chord lengths across bone trabeculae and the marrow cavities via alternate and uniform sampling of their cumulative density functions. This paper compares the AF of energy to two radiosensitive skeletal tissues, active marrow and shallow active marrow, obtained with these two techniques.
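
    The chord-based technique lends itself to a compact illustration. The Python sketch below alternates exponentially distributed bone and marrow chords (simple stand-ins for the tabulated chord-length CDFs), walks an alpha particle down its residual range with a constant-stopping-power rule, and tallies the fraction of its energy deposited in marrow. Every number (mean chord lengths, range, the bone-to-marrow stopping ratio, the starting medium) is an invented placeholder, not data from the study.

      import numpy as np

      rng = np.random.default_rng(2)

      MEAN_BONE_UM, MEAN_MARROW_UM = 280.0, 900.0   # assumed mean chords
      RANGE_MARROW_UM, E_MEV = 60.0, 6.0            # assumed alpha range/energy
      STOP_RATIO = 1.8                              # assumed bone/marrow stopping

      def marrow_absorbed_fraction(n_hist=20_000):
          edep = 0.0
          for _ in range(n_hist):
              remaining = RANGE_MARROW_UM           # residual range (marrow units)
              in_bone = rng.random() < 0.5          # arbitrary starting medium
              while remaining > 0.0:
                  mean = MEAN_BONE_UM if in_bone else MEAN_MARROW_UM
                  chord = rng.exponential(mean)
                  if in_bone:
                      remaining -= STOP_RATIO * chord
                  else:
                      edep += E_MEV * min(chord, remaining) / RANGE_MARROW_UM
                      remaining -= chord
                  in_bone = not in_bone
          return edep / (n_hist * E_MEV)

      print("AF(marrow) ~", marrow_absorbed_fraction())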

  10. A collision history-based approach to Sensitivity/Perturbation calculations in the continuous energy Monte Carlo code SERPENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giuseppe Palmiotti

    In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbation on several response functions: the effective multiplication factor, reaction rate ratios and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.

  11. Particle-Based Simulations of Microscopic Thermal Properties of Confined Systems

    DTIC Science & Technology

    2014-11-01

    [Figure and keyword residue from the original report: drift velocity versus electric field in gallium arsenide (GaAs) computed with the original CMC table structure (squares) at temperature T = 150 K, and with a new structure (caption truncated). Keywords: technology computer-aided design, Cellular Monte Carlo, Ensemble Monte Carlo, gallium arsenide, Heat Transport Equation, DARPA (Defense Advanced Research Projects Agency).]

  12. Physical Principle for Generation of Randomness

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2009-01-01

    A physical principle (more precisely, a principle that incorporates mathematical models used in physics) has been conceived as the basis of a method of generating randomness in Monte Carlo simulations. The principle eliminates the need for conventional random-number generators. The Monte Carlo simulation method is among the most powerful computational methods for solving high-dimensional problems in physics, chemistry, economics, and information processing. The Monte Carlo simulation method is especially effective for solving problems in which computational complexity increases exponentially with dimensionality. The main advantage of the Monte Carlo simulation method over other methods is that the demand on computational resources becomes independent of dimensionality. As augmented by the present principle, the Monte Carlo simulation method becomes an even more powerful computational method that is especially useful for solving problems associated with dynamics of fluids, planning, scheduling, and combinatorial optimization. The present principle is based on coupling of dynamical equations with the corresponding Liouville equation. The randomness is generated by non-Lipschitz instability of dynamics triggered and controlled by feedback from the Liouville equation. (In non-Lipschitz dynamics, the derivatives of solutions of the dynamical equations are not required to be bounded.)

  13. A High-Order Low-Order Algorithm with Exponentially Convergent Monte Carlo for Thermal Radiative Transfer

    DOE PAGES

    Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.

    2016-10-21

    In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.

  14. Monte Carlo capabilities of the SCALE code system

    DOE PAGES

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  15. Path integral Monte Carlo ground state approach: formalism, implementation, and applications

    NASA Astrophysics Data System (ADS)

    Yan, Yangqian; Blume, D.

    2017-11-01

    Monte Carlo techniques have played an important role in understanding strongly correlated systems across many areas of physics, covering a wide range of energy and length scales. Among the many Monte Carlo methods applicable to quantum mechanical systems, the path integral Monte Carlo approach with its variants has been employed widely. Since semi-classical or classical approaches will not be discussed in this review, path-integral-based approaches can for our purposes be divided into two categories: approaches applicable to quantum mechanical systems at zero temperature and approaches applicable to quantum mechanical systems at finite temperature. While these two approaches are related to each other, the underlying formulation and aspects of the algorithm differ. This paper reviews the path integral Monte Carlo ground state (PIGS) approach, which solves the time-independent Schrödinger equation. Specifically, the PIGS approach allows for the determination of expectation values with respect to eigenstates of the few- or many-body Schrödinger equation provided the system Hamiltonian is known. The theoretical framework behind the PIGS algorithm, implementation details, and sample applications for fermionic systems are presented.

  16. Monte Carlo treatment planning with modulated electron radiotherapy: framework development and application

    NASA Astrophysics Data System (ADS)

    Alexander, Andrew William

    Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP), provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. A developing treatment modality called energy and intensity modulated electron radiotherapy (MERT) is a promising modality, which has the fundamental capabilities to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP. The structure of the MERT planning toolkit software and optimization algorithms are demonstrated. We investigated the clinical significance of MERT on spinal irradiation, breast boost irradiation, and a head and neck sarcoma cancer site using several parameters to analyze the treatment plans. Finally, we investigated the idea of mixed beam photon and electron treatment planning. Photon optimization treatment planning tools were included within the MERT planning toolkit for the purpose of mixed beam optimization. In conclusion, this thesis work has resulted in the development of an advanced framework for photon and electron Monte Carlo treatment planning studies and the development of an inverse planning system for photon, electron or mixed beam radiotherapy (MBRT). The justification and validation of this work is found within the results of the planning studies, which have demonstrated dosimetric advantages to using MERT or MBRT in comparison to clinical treatment alternatives.

  17. SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations

    NASA Astrophysics Data System (ADS)

    Baes, M.; Camps, P.

    2015-09-01

    The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. On the contrary, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks to more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
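
    The decorator idea maps naturally onto code. In the Python sketch below, a building-block density can be wrapped by a decorator that perturbs it (here, a toy clumpiness modulation), and decorated models chain freely; a generic rejection sampler supplies the random positions that any component could override with a customised generator. The class names, the Plummer block, and the clump modulation are illustrative inventions, not SKIRT's actual API.

      import numpy as np

      rng = np.random.default_rng(3)

      class Model:
          def density(self, xyz):                   # xyz: (N, 3) array
              raise NotImplementedError
          def sample(self, n, box=5.0):
              """Generic rejection sampler for random positions; a component
              with known structure would override this with a faster scheme."""
              probe = rng.uniform(-box, box, (100_000, 3))
              dmax = self.density(probe).max()      # crude density bound
              out = []
              while len(out) < n:
                  p = rng.uniform(-box, box, (n, 3))
                  keep = rng.uniform(0, dmax, n) < self.density(p)
                  out.extend(p[keep])
              return np.array(out[:n])

      class Plummer(Model):                         # analytical building block
          def density(self, xyz):
              return (1.0 + (xyz**2).sum(axis=1)) ** -2.5

      class Clumpy(Model):                          # decorator: wraps any model
          def __init__(self, base, amp=0.8, k=3.0):
              self.base, self.amp, self.k = base, amp, k
          def density(self, xyz):
              mod = 1.0 + self.amp * np.sin(self.k * xyz).prod(axis=1)
              return self.base.density(xyz) * mod

      positions = Clumpy(Plummer()).sample(1000)    # decorators chain freely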

  18. Validation of a personalized dosimetric evaluation tool (Oedipe) for targeted radiotherapy based on the Monte Carlo MCNPX code

    NASA Astrophysics Data System (ADS)

    Chiavassa, S.; Aubineau-Lanièce, I.; Bitar, A.; Lisbona, A.; Barbet, J.; Franck, D.; Jourdain, J. R.; Bardiès, M.

    2006-02-01

    Dosimetric studies are necessary for all patients treated with targeted radiotherapy. In order to attain the precision required, we have developed Oedipe, a dosimetric tool based on the MCNPX Monte Carlo code. The anatomy of each patient is considered in the form of a voxel-based geometry created using computed tomography (CT) images or magnetic resonance imaging (MRI). Oedipe enables dosimetry studies to be carried out at the voxel scale. Validation of the results obtained by comparison with existing methods is complex because there are multiple sources of variation: calculation methods (different Monte Carlo codes, point kernel), patient representations (model or specific) and geometry definitions (mathematical or voxel-based). In this paper, we validate Oedipe by taking each of these parameters into account independently. Monte Carlo methodology requires long calculation times, particularly in the case of voxel-based geometries, and this is one of the limits of personalized dosimetric methods. However, our results show that the use of voxel-based geometry as opposed to a mathematically defined geometry decreases the calculation time two-fold, due to an optimization of the MCNPX2.5e code. It is therefore possible to envisage the use of Oedipe for personalized dosimetry in the clinical context of targeted radiotherapy.

  19. Calculating Remote Sensing Reflectance Uncertainties Using an Instrument Model Propagated Through Atmospheric Correction via Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Karakoylu, E.; Franz, B.

    2016-01-01

    This is a first attempt at quantifying uncertainties in ocean remote sensing reflectance satellite measurements, based on 1000 Monte Carlo iterations. The data source is a SeaWiFS 4-day composite from 2003. The uncertainty is reported for remote sensing reflectance (Rrs) at 443 nm.

  20. Free energy computations by minimization of Kullback-Leibler divergence: An efficient adaptive biasing potential method for sparse representations

    NASA Astrophysics Data System (ADS)

    Bilionis, I.; Koutsourelakis, P. S.

    2012-05-01

    The present paper proposes an adaptive biasing potential technique for the computation of free energy landscapes. It is motivated by statistical learning arguments and unifies the tasks of biasing the molecular dynamics to escape free energy wells and estimating the free energy function, under the same objective of minimizing the Kullback-Leibler divergence between appropriately selected densities. It offers rigorous convergence diagnostics even though history dependent, non-Markovian dynamics are employed. It makes use of a greedy optimization scheme in order to obtain sparse representations of the free energy function which can be particularly useful in multidimensional cases. It employs embarrassingly parallelizable sampling schemes that are based on adaptive Sequential Monte Carlo and can be readily coupled with legacy molecular dynamics simulators. The sequential nature of the learning and sampling scheme enables the efficient calculation of free energy functions parametrized by the temperature. The characteristics and capabilities of the proposed method are demonstrated in three numerical examples.

  1. MC3: Multi-core Markov-chain Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Harrington, Joseph; Lust, Nate; Foster, AJ; Stemm, Madison; Loredo, Tom; Stevenson, Kevin; Campo, Chris; Hardin, Matt; Hardy, Ryan

    2016-10-01

    MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can share the same value among multiple parameters and fix the value of parameters to constant values, and offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.

  2. The many-body Wigner Monte Carlo method for time-dependent ab-initio quantum simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sellier, J.M., E-mail: jeanmichel.sellier@parallel.bas.bg; Dimov, I.

    2014-09-15

    The aim of ab-initio approaches is the simulation of many-body quantum systems from the first principles of quantum mechanics. These methods are traditionally based on the many-body Schrödinger equation which represents an incredible mathematical challenge. In this paper, we introduce the many-body Wigner Monte Carlo method in the context of distinguishable particles and in the absence of spin-dependent effects. Despite these restrictions, the method has several advantages. First of all, the Wigner formalism is intuitive, as it is based on the concept of a quasi-distribution function. Secondly, the Monte Carlo numerical approach allows scalability on parallel machines that is practically unachievable by means of other techniques based on finite difference or finite element methods. Finally, this method allows time-dependent ab-initio simulations of strongly correlated quantum systems. In order to validate our many-body Wigner Monte Carlo method, as a case study we simulate a relatively simple system consisting of two particles in several different situations. We first start from two non-interacting free Gaussian wave packets. We, then, proceed with the inclusion of an external potential barrier, and we conclude by simulating two entangled (i.e. correlated) particles. The results show how, in the case of negligible spin-dependent effects, the many-body Wigner Monte Carlo method provides an efficient and reliable tool to study the time-dependent evolution of quantum systems composed of distinguishable particles.

  3. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    The meaningful investigation of many problems in statistics can be solved through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
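
    A concrete example of the kind of study the record describes: the coverage of the usual 95% t-interval under a skewed population has no tidy closed form at small n, but a Monte Carlo loop answers it in a few lines. The population choice and sample size below are arbitrary.

      import numpy as np

      rng = np.random.default_rng(4)

      n, reps, t_crit = 10, 100_000, 2.262        # t_{0.975} with 9 df
      covered = 0
      for _ in range(reps):
          x = rng.exponential(1.0, n)             # skewed population, mean 1
          half = t_crit * x.std(ddof=1) / np.sqrt(n)
          covered += (x.mean() - half) <= 1.0 <= (x.mean() + half)
      print("empirical coverage:", covered / reps)   # noticeably below 0.95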

  4. (U) Introduction to Monte Carlo Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hungerford, Aimee L.

    2017-03-20

    Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.
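
    In the same cook-book spirit, the Python sketch below handles the terms of a one-speed, 1-D slab transport problem: the streaming term becomes an exponentially sampled flight distance, and the collision term becomes a scatter-or-absorb coin flip; leakage and absorption are tallied over many histories. Cross sections and slab width are made-up values.

      import numpy as np

      rng = np.random.default_rng(5)

      SIG_T, SIG_S, WIDTH = 1.0, 0.6, 4.0       # total, scatter (/cm); slab (cm)

      def history():
          x, mu = 0.0, 1.0                      # normally incident at x = 0
          while True:
              x += mu * rng.exponential(1.0 / SIG_T)    # sampled flight
              if x < 0.0:
                  return "reflect"
              if x > WIDTH:
                  return "transmit"
              if rng.random() < SIG_S / SIG_T:          # survive the collision
                  mu = 2.0 * rng.random() - 1.0         # isotropic re-direction
              else:
                  return "absorb"

      tally = {"reflect": 0, "transmit": 0, "absorb": 0}
      for _ in range(100_000):
          tally[history()] += 1
      print({k: v / 100_000 for k, v in tally.items()})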

  5. PRELIMINARY COUPLING OF THE MONTE CARLO CODE OPENMC AND THE MULTIPHYSICS OBJECT-ORIENTED SIMULATION ENVIRONMENT (MOOSE) FOR ANALYZING DOPPLER FEEDBACK IN MONTE CARLO SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Ellis; Derek Gaston; Benoit Forget

    In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that for a simplified model the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.

  6. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
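
    A minimal sketch of the surrogate idea, under invented stand-ins: an "expensive" forward function is replaced by a small scikit-learn network, the surrogate's modelling error is quantified on held-out runs, and that error is folded into the likelihood of a Metropolis sampler by inflating the data variance. The forward function, prior, and all noise levels are placeholders, and a real application would quantify the modelling error more carefully than a single pooled standard deviation.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(6)

      def forward_slow(m):                      # stand-in for the costly solver
          return np.sin(3 * m[:, 0]) + m[:, 1]**2

      # 1) Train a fast surrogate on pre-computed solver runs.
      M = rng.uniform(-1, 1, (2000, 2))
      net = MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=3000)
      net.fit(M, forward_slow(M))

      # 2) Quantify the modelling error on held-out runs ...
      Mv = rng.uniform(-1, 1, (500, 2))
      sig_model = np.std(forward_slow(Mv) - net.predict(Mv))

      # 3) ... and account for it probabilistically during sampling.
      d_obs, sig_obs = 0.30, 0.05
      sig2 = sig_obs**2 + sig_model**2          # inflated by surrogate error
      def loglik(m):
          return -0.5 * (net.predict(m[None, :])[0] - d_obs)**2 / sig2

      m, chain = np.zeros(2), []
      ll = loglik(m)
      for _ in range(5000):                     # Metropolis with uniform prior
          prop = m + rng.normal(0, 0.2, 2)
          if np.all(np.abs(prop) <= 1):         # stay inside the prior support
              llp = loglik(prop)
              if np.log(rng.random()) < llp - ll:
                  m, ll = prop, llp
          chain.append(m.copy())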

  7. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the graphics processing unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches.

  8. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four of the RNGs used in previous FPGA-based MC studies and newly proposed FPGA implementations for two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
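
    For reference, an additive lagged Fibonacci generator is only a few lines of code; its cheap add-and-wrap update is what makes it attractive for hardware. The Python sketch below uses the well-known lag pair (127, 97) and seeds several independently initialised streams as a software analogue of per-lane FPGA parallelism; the lag pair, seeding scheme, and stream count are illustrative choices, not the paper's design.

      import random

      class ALFG:
          """x[n] = (x[n-r] + x[n-s]) mod 2**32 with lags (r, s) = (127, 97)."""
          R, S, MOD = 127, 97, 2**32

          def __init__(self, seed):
              rnd = random.Random(seed)
              # at least one odd word is needed to reach the full period
              self.state = [rnd.getrandbits(32) | 1 for _ in range(self.R)]
              self.i = 0

          def next(self):
              # state[i] holds x[n-r]; x[n-s] sits (r - s) slots ahead of it
              j = (self.i + self.R - self.S) % self.R
              v = (self.state[self.i] + self.state[j]) % self.MOD
              self.state[self.i] = v
              self.i = (self.i + 1) % self.R
              return v

      lanes = [ALFG(seed) for seed in (11, 22, 33, 44)]   # "parallel" streams
      print([g.next() / 2**32 for g in lanes])            # one uniform per lane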

  9. Parameter Uncertainty Analysis Using Monte Carlo Simulations for a Regional-Scale Groundwater Model

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Pohlmann, K.

    2016-12-01

    Regional-scale grid-based groundwater models for flow and transport often contain multiple types of parameters that can intensify the challenge of parameter uncertainty analysis. We propose a Monte Carlo approach to systematically quantify the influence of various types of model parameters on groundwater flux and contaminant travel times. The Monte Carlo simulations were conducted based on the steady-state conversion of the original transient model, which was then combined with the PEST sensitivity analysis tool SENSAN and particle tracking software MODPATH. Results identified hydrogeologic units whose hydraulic conductivity can significantly affect groundwater flux, and thirteen out of 173 model parameters that can cause large variation in travel times for contaminant particles originating from given source zones.

  10. Off-diagonal expansion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  11. Light propagation along the pericardium meridian at human wrist as evidenced by the optical experiment and Monte Carlo method.

    PubMed

    Jiang, Yi-fan; Chen, Chang-shui; Liu, Xiao-mei; Liu, Rong-ting; Liu, Song-hao

    2015-04-01

    To explore the characteristics of light propagation along the Pericardium Meridian and its surrounding areas at the human wrist by using an optical experiment and the Monte Carlo method. An experiment was carried out to obtain the distribution of diffuse light on the Pericardium Meridian line and its surrounding areas at the wrist, and a simplified model based on the anatomical structure was then proposed to simulate light transport within the same area using the Monte Carlo method. The experimental results were in strong accordance with the Monte Carlo simulation: light propagation along the Pericardium Meridian had an advantage over its surrounding areas at the wrist. This advantage of light transport along the Pericardium Meridian line was related to the components and structure of the tissue, as well as the anatomical structure of the area that the Pericardium Meridian line runs through.

  12. Monte Carlo derivation of filtered tungsten anode X-ray spectra for dose computation in digital mammography.

    PubMed

    Paixão, Lucas; Oliveira, Bruno Beraldo; Viloria, Carolina; de Oliveira, Marcio Alves; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro

    2015-01-01

    Derive filtered tungsten X-ray spectra used in digital mammography systems by means of Monte Carlo simulations. Filtered spectra for a rhodium filter were obtained for tube potentials between 26 and 32 kV. The half-value layers (HVLs) of the simulated filtered spectra were compared with those obtained experimentally with a solid-state detector (Unfors model 8202031-H Xi R/F & MAM Detector Platinum and 8201023-C Xi Base unit Platinum Plus w mAs) in a Hologic Selenia Dimensions system using a direct radiography mode. Calculated HVL values showed good agreement with those obtained experimentally. The greatest relative difference between the Monte Carlo calculated and experimental HVL values was 4%. The results show that the filtered tungsten anode X-ray spectra and the EGSnrc Monte Carlo code can be used for mean glandular dose determination in mammography.

  13. Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Dufek, Jan

    2014-06-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among which is a nearly linear increase per iteration of the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods were also tested and it is concluded that the presented iteration method is near optimal.
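
    The core of such a scheme fits in a few lines: iteration i runs the Monte Carlo solver with a growing history count N_i and relaxes the power distribution with a shrinking factor alpha_i, so early noisy solutions are progressively averaged out. The toy below uses N_i = i * N_0 and alpha_i = 1/i (the classic stochastic-approximation choice) against a fake noisy solver; the true shape, noise level, and the absent thermal-hydraulics feedback are all simplifications, not the paper's exact prescription.

      import numpy as np

      rng = np.random.default_rng(7)

      true_power = np.linspace(0.8, 1.2, 20)      # stand-in axial power shape

      def monte_carlo_power(n_hist):
          """Fake MC solver: unbiased, with noise shrinking as 1/sqrt(n_hist)."""
          return true_power + rng.normal(0, 3.0 / np.sqrt(n_hist), true_power.size)

      p, n0 = np.ones(20), 10_000
      for i in range(1, 11):
          n_i = i * n0                            # growing history count
          alpha = 1.0 / i                         # shrinking relaxation factor
          p = (1 - alpha) * p + alpha * monte_carlo_power(n_i)
          # (a thermal-hydraulics update driven by `p` would go here)
      print("max error:", np.abs(p - true_power).max())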

  14. Markov chain Monte Carlo techniques applied to parton distribution functions determination: Proof of concept

    NASA Astrophysics Data System (ADS)

    Gbedo, Yémalin Gabin; Mangin-Brinet, Mariane

    2017-07-01

    We present a new procedure to determine parton distribution functions (PDFs), based on Markov chain Monte Carlo (MCMC) methods. The aim of this paper is to show that we can replace the standard χ2 minimization by procedures grounded on statistical methods, and on Bayesian inference in particular, thus offering additional insight into the rich field of PDF determination. After a basic introduction to these techniques, we introduce the algorithm we have chosen to implement, namely Hybrid (or Hamiltonian) Monte Carlo. This algorithm, initially developed for Lattice QCD, turns out to be very interesting when applied to PDF determination by global analyses; we show that it allows us to circumvent the difficulties due to the high dimensionality of the problem, in particular concerning the acceptance. A first feasibility study is performed and presented, which indicates that Markov chain Monte Carlo can successfully be applied to the extraction of PDFs and of their uncertainties.
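
    To make the algorithm concrete, here is a bare-bones Hybrid/Hamiltonian Monte Carlo sampler on a stand-in target: a strongly correlated Gaussian playing the role of the chi-square surface over PDF parameters. The leapfrog step size, trajectory length, and target are placeholder choices; real PDF fits involve far higher dimensions and an expensive likelihood.

      import numpy as np

      rng = np.random.default_rng(8)

      C = np.array([[1.0, 0.95], [0.95, 1.0]])    # target covariance
      Cinv = np.linalg.inv(C)
      U  = lambda q: 0.5 * q @ Cinv @ q           # potential = -log target
      dU = lambda q: Cinv @ q

      def hmc_step(q, eps=0.15, L=20):
          p = rng.normal(size=q.size)             # fresh momentum
          qn, pn = q.copy(), p - 0.5 * eps * dU(q)
          for _ in range(L):                      # leapfrog integration
              qn = qn + eps * pn
              pn = pn - eps * dU(qn)
          pn = pn + 0.5 * eps * dU(qn)            # make the last kick a half kick
          dh = (U(q) + 0.5 * p @ p) - (U(qn) + 0.5 * pn @ pn)
          return qn if np.log(rng.random()) < dh else q   # Metropolis test

      q, draws = np.zeros(2), []
      for _ in range(5000):
          q = hmc_step(q)
          draws.append(q)
      print(np.cov(np.array(draws).T))            # should approach C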

  15. Monte Carlo tests of the ELIPGRID-PC algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
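
    The Monte Carlo side of such a validation is straightforward to reproduce in miniature: drop an elliptical hot spot with random offset and orientation into one cell of a square sampling grid and count how often at least one grid node lands inside it. The grid spacing, hot-spot size, and shape below are arbitrary test values, not any of the report's 320 cases.

      import numpy as np

      rng = np.random.default_rng(9)

      def detection_probability(grid=10.0, semi_major=6.0, shape=0.5,
                                n_mc=50_000):
          a, b = semi_major, semi_major * shape
          hits = 0
          for _ in range(n_mc):
              cx, cy = rng.uniform(0, grid, 2)    # random centre within a cell
              t = rng.uniform(0, np.pi)           # random orientation
              ct, st = np.cos(t), np.sin(t)
              found = False
              for gx in (-grid, 0.0, grid, 2 * grid):   # nearby nodes suffice
                  for gy in (-grid, 0.0, grid, 2 * grid):
                      x, y = gx - cx, gy - cy
                      u, v = ct * x + st * y, -st * x + ct * y
                      if (u / a)**2 + (v / b)**2 <= 1.0:
                          found = True
                          break
                  if found:
                      break
              hits += found
          return hits / n_mc

      print("P(detect) ~", detection_probability())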

  16. Off-diagonal expansion quantum Monte Carlo.

    PubMed

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  17. Monte Carlo derivation of filtered tungsten anode X-ray spectra for dose computation in digital mammography*

    PubMed Central

    Paixão, Lucas; Oliveira, Bruno Beraldo; Viloria, Carolina; de Oliveira, Marcio Alves; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro

    2015-01-01

    Objective: Derive filtered tungsten X-ray spectra used in digital mammography systems by means of Monte Carlo simulations. Materials and Methods: Filtered spectra for a rhodium filter were obtained for tube potentials between 26 and 32 kV. The half-value layers (HVLs) of the simulated filtered spectra were compared with those obtained experimentally with a solid-state detector (Unfors model 8202031-H Xi R/F & MAM Detector Platinum and 8201023-C Xi Base unit Platinum Plus w mAs) in a Hologic Selenia Dimensions system using a direct radiography mode. Results: Calculated HVL values showed good agreement with those obtained experimentally. The greatest relative difference between the Monte Carlo calculated and experimental HVL values was 4%. Conclusion: The results show that the filtered tungsten anode X-ray spectra and the EGSnrc Monte Carlo code can be used for mean glandular dose determination in mammography. PMID:26811553

  18. Two proposed convergence criteria for Monte Carlo solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Pederson, S.P.; Booth, T.E.

    1992-01-01

    The central limit theorem (CLT) can be applied to a Monte Carlo solution if two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these two conditions are satisfied, a confidence interval (CI) based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the Monte Carlo tally being used. The Monte Carlo practitioner has a limited number of marginal methods to assess the fulfillment of the second requirement, such as statistical error reduction proportional to 1/√N with error magnitude guidelines. Two proposed methods are discussed in this paper to assist in deciding if N is large enough: estimating the relative variance of the variance (VOV) and examining the empirical history score probability density function (pdf).
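
    The VOV check is easy to state in code: it is the relative variance of the variance estimator, computed from the fourth and second central moments of the individual history scores. The sketch below applies a standard form of that estimator to a deliberately heavy-tailed score distribution, where the relative error can look acceptable long before the VOV settles; the Pareto score model and the 0.1 guideline threshold are illustrative choices.

      import numpy as np

      rng = np.random.default_rng(10)

      def vov(scores):
          """VOV = sum((x - xbar)^4) / (sum((x - xbar)^2))^2 - 1/N."""
          d = scores - scores.mean()
          return (d**4).sum() / (d**2).sum()**2 - 1.0 / scores.size

      # Heavy-tailed history scores: finite mean and variance, infinite
      # fourth moment, so the VOV flags trouble that R alone may hide.
      n = 100_000
      scores = rng.pareto(2.5, n)
      rel_err = scores.std(ddof=1) / (scores.mean() * np.sqrt(n))
      print("R =", rel_err, "VOV =", vov(scores))   # e.g. compare VOV to 0.1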

  19. Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians

    NASA Astrophysics Data System (ADS)

    Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan

    2018-02-01

    Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and for which destructive interference is thus not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as for solving MAX-k-SAT problems, use k-local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n-body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.

  20. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods on these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs as well as (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast to noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose-calibrator-derived measurement was found to be <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo-based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
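
    For orientation, the TEW correction mentioned above estimates the scatter inside the photopeak window as a trapezoid whose heights come from two narrow sub-windows flanking the peak. The sketch below implements that standard formula; the window widths and per-pixel counts are invented numbers, and real use would apply this pixel by pixel to projection data.

      import numpy as np

      def tew_primaries(c_peak, c_low, c_high,
                        w_peak=0.20 * 364.0, w_sub=0.02 * 364.0):
          """Triple-energy-window estimate for the I-131 364 keV photopeak:
          scatter = (c_low / w_sub + c_high / w_sub) * w_peak / 2; the
          assumed widths are 20% (peak) and 2% (sub-windows) of 364 keV."""
          scatter = (c_low / w_sub + c_high / w_sub) * w_peak / 2.0
          return c_peak - np.minimum(c_peak, scatter)   # clamped primary counts

      # Hypothetical counts in the peak, lower, and upper windows of one pixel.
      print(tew_primaries(900.0, 70.0, 40.0))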

  1. Introduction of Parallel GPGPU Acceleration Algorithms for the Solution of Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Godoy, William F.; Liu, Xu

    2011-01-01

    General-purpose computing on graphics processing units (GPGPU) is a recent technique that allows the parallel graphics processing unit (GPU) to accelerate calculations performed sequentially by the central processing unit (CPU). To introduce GPGPU to radiative transfer, the Gauss-Seidel solution of the well-known expressions for 1-D and 3-D homogeneous, isotropic media is selected as a test case. Different algorithms are introduced to balance memory and GPU-CPU communication, critical aspects of GPGPU. Results show that speed-ups of one to two orders of magnitude are obtained when compared to sequential solutions. The underlying value of GPGPU is its potential extension in radiative solvers (e.g., Monte Carlo, discrete ordinates) at a minimal learning curve.

  2. Performance evaluation of an asynchronous multisensor track fusion filter

    NASA Astrophysics Data System (ADS)

    Alouani, Ali T.; Gray, John E.; McCabe, D. H.

    2003-08-01

    Recently the authors developed a new filter that uses data generated by asynchronous sensors to produce a state estimate that is optimal in the minimum mean square sense. The solution accounts for communication delays between the sensor platforms and the fusion center. It also deals with out-of-sequence data as well as latent data by processing the information in a batch-like manner. This paper compares, using simulated targets and Monte Carlo simulations, the performance of the filter to the optimal sequential processing approach. It was found that the performance of the new asynchronous multisensor track fusion filter (AMSTFF) is identical to that of the extended sequential Kalman filter (SEKF), while the new filter updates its track at a much lower rate than the SEKF.

  3. Monte Carlo analysis for the determination of the conic constant of an aspheric micro lens based on a scanning white light interferometric measurement

    NASA Astrophysics Data System (ADS)

    Gugsa, Solomon A.; Davies, Angela

    2005-08-01

    Characterizing an aspheric micro lens is critical for understanding the performance and providing feedback to the manufacturing. We describe a method to find the best-fit conic of an aspheric micro lens using a least squares minimization and Monte Carlo analysis. Our analysis is based on scanning white light interferometry measurements, and we compare the standard rapid technique, where a single measurement is taken of the apex of the lens, to the more time-consuming stitching technique, where more surface area is measured. Both are corrected for tip/tilt based on a planar fit to the substrate. Four major parameters and their uncertainties are estimated from the measurement and a chi-square minimization is carried out to determine the best-fit conic constant. The four parameters are the base radius of curvature, the aperture of the lens, the lens center, and the sag of the lens. A probability distribution is chosen for each of the four parameters based on the measurement uncertainties and a Monte Carlo process is used to iterate the minimization process. Eleven measurements were taken, and data were also chosen randomly from the group during the Monte Carlo simulation to capture the measurement repeatability. A distribution of best-fit conic constants results, where the mean is a good estimate of the best-fit conic and the distribution width represents the combined measurement uncertainty. We also compare the Monte Carlo process for the stitched and unstitched data. Our analysis allows us to analyze the residual surface error in terms of Zernike polynomials and determine uncertainty estimates for each coefficient.

  4. A Monte Carlo Approach for Adaptive Testing with Content Constraints

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander

    2008-01-01

    This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…

  5. Estimating Uncertainty in N2O Emissions from US Cropland Soils

    USDA-ARS?s Scientific Manuscript database

    A Monte Carlo analysis was combined with an empirically-based approach to quantify uncertainties in soil N2O emissions from US croplands estimated with the DAYCENT simulation model. Only a subset of croplands was simulated in the Monte Carlo analysis which was used to infer uncertainties across the ...

  6. SU-E-T-188: Film Dosimetry Verification of Monte Carlo Generated Electron Treatment Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enright, S; Asprinio, A; Lu, L

    2014-06-01

    Purpose: The purpose of this study was to compare dose distributions from film measurements to Monte Carlo generated electron treatment plans. Irradiation with electrons offers the advantages of dose uniformity in the target volume and of minimizing the dose to deeper healthy tissue. Using the Monte Carlo algorithm will improve dose accuracy in regions with heterogeneities and irregular surfaces. Methods: Dose distributions from GafChromic™ EBT3 films were compared to dose distributions from the Electron Monte Carlo algorithm in the Eclipse™ radiotherapy treatment planning system. These measurements were obtained for 6 MeV, 9 MeV, and 12 MeV electrons at two depths. All phantoms studied were imported into Eclipse by CT scan. A 1 cm thick solid water template with holes for bone-like and lung-like plugs was used. Different configurations were used with the different plugs inserted into the holes. Configurations with solid-water plugs stacked on top of one another were also used to create an irregular surface. Results: The dose distributions measured from the film agreed with those from the Electron Monte Carlo treatment plan. Accuracy of the Electron Monte Carlo algorithm was also compared to that of Pencil Beam. Dose distributions from Monte Carlo had much higher pass rates than distributions from Pencil Beam when compared to the film. The pass rate for Monte Carlo was in the 80%–99% range, whereas the pass rate for Pencil Beam was as low as 10.76%. Conclusion: The dose distribution from Monte Carlo agreed with the measured dose from the film. When compared to the Pencil Beam algorithm, pass rates for Monte Carlo were much higher. Monte Carlo should be used over Pencil Beam for regions with heterogeneities and irregular surfaces.

  7. Experimental and Monte Carlo evaluation of Eclipse treatment planning system for effects on dose distribution of the hip prostheses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Çatlı, Serap, E-mail: serapcatli@hotmail.com; Tanır, Güneş

    2013-10-01

    The present study aimed to investigate the effects of titanium, titanium alloy, and stainless steel hip prostheses on dose distribution based on the Monte Carlo simulation method, as well as the accuracy of the Eclipse treatment planning system (TPS) at 6 and 18 MV photon energies. In the present study the pencil beam convolution (PBC) method implemented in the Eclipse TPS was compared to the Monte Carlo method and ionization chamber measurements. The present findings show that if a high-Z material is used in a prosthesis, large dose changes can occur due to scattering. The variance in dose observed in the present study was dependent on material type, density, and atomic number, as well as photon energy; as photon energy increased, backscattering decreased. The dose perturbation effect of hip prostheses was significant and could not be predicted accurately by the PBC method. The findings show that for accurate dose calculation the Monte Carlo-based TPS should be used in patients with hip prostheses.

  8. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit.

    PubMed

    Badal, Andreu; Badano, Aldo

    2009-11-01

    It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA™ programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single-core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  9. A New Monte Carlo Method for Estimating Marginal Likelihoods.

    PubMed

    Wang, Yu-Bo; Chen, Ming-Hui; Kuo, Lynn; Lewis, Paul O

    2018-06-01

    Evaluating the marginal likelihood in Bayesian analysis is essential for model selection. Estimators based on a single Markov chain Monte Carlo sample from the posterior distribution include the harmonic mean estimator and the inflated density ratio estimator. We propose a new class of Monte Carlo estimators based on this single Markov chain Monte Carlo sample. This class can be thought of as a generalization of the harmonic mean and inflated density ratio estimators using a partition weighted kernel (likelihood times prior). We show that our estimator is consistent and has better theoretical properties than the harmonic mean and inflated density ratio estimators. In addition, we provide guidelines on choosing optimal weights. Simulation studies were conducted to examine the empirical performance of the proposed estimator. We further demonstrate the desirable features of the proposed estimator with two real data sets: one is from a prostate cancer study using an ordinal probit regression model with latent variables; the other is for the power prior construction from two Eastern Cooperative Oncology Group phase III clinical trials using the cure rate survival model with similar objectives.
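
    For context, the harmonic mean estimator that the proposed partition weighted kernel generalises can be written in a handful of lines. The toy below uses a conjugate Normal-Normal model so that the exact marginal likelihood is available for comparison; the data value and variances are arbitrary, and the estimator's notorious instability (its weights can have infinite variance) is exactly what motivates better-behaved alternatives such as the one in this paper.

      import numpy as np

      rng = np.random.default_rng(11)

      y, s, tau = 1.3, 1.0, 2.0                   # datum, likelihood sd, prior sd

      # Exact posterior for the conjugate model, used to mimic an MCMC sample.
      post_var = 1.0 / (1.0 / s**2 + 1.0 / tau**2)
      post_mean = post_var * y / s**2
      theta = rng.normal(post_mean, np.sqrt(post_var), 200_000)

      loglik = -0.5 * np.log(2 * np.pi * s**2) - 0.5 * (y - theta)**2 / s**2
      harmonic = 1.0 / np.mean(np.exp(-loglik))   # harmonic mean of likelihoods

      exact = np.exp(-0.5 * np.log(2 * np.pi * (s**2 + tau**2))
                     - 0.5 * y**2 / (s**2 + tau**2))
      print("harmonic mean:", harmonic, "exact:", exact)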

  10. Towards predicting the encoding capability of MR fingerprinting sequences.

    PubMed

    Sommer, K; Amthor, T; Doneva, M; Koken, P; Meineke, J; Börnert, P

    2017-09-01

    Sequence optimization and appropriate sequence selection is still an unmet need in magnetic resonance fingerprinting (MRF). The main challenge in MRF sequence design is the lack of an appropriate measure of the sequence's encoding capability. To find such a measure, three different candidates for judging the encoding capability have been investigated: local and global dot-product-based measures judging dictionary entry similarity as well as a Monte Carlo method that evaluates the noise propagation properties of an MRF sequence. Consistency of these measures for different sequence lengths as well as the capability to predict actual sequence performance in both phantom and in vivo measurements was analyzed. While the dot-product-based measures yielded inconsistent results for different sequence lengths, the Monte Carlo method was in a good agreement with phantom experiments. In particular, the Monte Carlo method could accurately predict the performance of different flip angle patterns in actual measurements. The proposed Monte Carlo method provides an appropriate measure of MRF sequence encoding capability and may be used for sequence optimization. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    PubMed

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate the uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that the uncertainties of the input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method is explained and a mitigation strategy is proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
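
    A sketch of the bootstrap procedure under stated assumptions: synthetic per-history scores with a lognormal tail standing in for the few high-weight photons, and equal computing times so that the gain reduces to a variance ratio. Everything below is illustrative, not the study's data.

        import numpy as np

        rng = np.random.default_rng(2)

        # stand-in per-history scores; the lognormal tail mimics the few
        # high-weight photons that dominate the correlated-sampling variance
        x_conv = rng.normal(1.0, 0.5, size=5000)            # conventional MC
        x_corr = 1.0 + 0.05 * rng.lognormal(0, 1.5, 5000)   # correlated sampling

        def gain(a, b):
            # efficiency gain ~ variance ratio when computing times are equal
            return a.var(ddof=1) / b.var(ddof=1)

        boot = np.array([
            gain(rng.choice(x_conv, x_conv.size), rng.choice(x_corr, x_corr.size))
            for _ in range(4000)
        ])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"gain = {gain(x_conv, x_corr):.1f}, 95% CI = ({lo:.1f}, {hi:.1f})")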

  12. How to improve an un-alterable model forecast? A sequential data assimilation based error updating approach

    NASA Astrophysics Data System (ADS)

    Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K. T.

    2012-12-01

    Accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and significantly influences the operation of hydropower reservoirs. Improving hourly reservoir inflow forecasts over a 24-hour lead-time is considered with the day-ahead (Elspot) market of the Nordic exchange market in perspective. The procedure presented comprises an error model added on top of an un-alterable constant-parameter conceptual model, and a sequential data assimilation routine. The structure of the error model was investigated using freely available software for detecting mathematical relationships in a given dataset (EUREQA) and was kept at minimum complexity for computational reasons. As new streamflow data become available, the extra information manifested in the discrepancies between measurements and conceptual model outputs is extracted and assimilated into the forecasting system recursively using a sequential Monte Carlo technique. Besides improving forecast skill significantly, the probabilistic inflow forecasts provided by the present approach carry suitable information for reducing uncertainty in decision-making processes related to hydropower system operation. The potential of the current procedure for improving the accuracy of inflow forecasts at lead-times of up to 24 hours and its reliability in different seasons of the year will be illustrated and discussed thoroughly.
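
    A generic sequential importance resampling step of the kind such a routine relies on; the AR(1) error model and every parameter below are illustrative assumptions, not the EUREQA-derived structure.

        import numpy as np

        rng = np.random.default_rng(3)

        def sir_step(particles, new_error_obs, sigma_prop=0.2, sigma_obs=0.5):
            # One sequential-importance-resampling step for a scalar
            # AR(1)-style error model (illustrative only).
            # propagate each error-state particle through the assumed model
            particles = 0.8 * particles + sigma_prop * rng.standard_normal(particles.size)
            # weight by the likelihood of the newly observed discrepancy
            w = np.exp(-0.5 * ((new_error_obs - particles) / sigma_obs) ** 2)
            w /= w.sum()
            # multinomial resampling
            idx = rng.choice(particles.size, particles.size, p=w)
            return particles[idx]

        p = rng.standard_normal(1000)
        for err in [0.4, 0.7, 0.6]:       # discrepancies as new data arrive
            p = sir_step(p, err)
        print(p.mean())                    # corrected error forecast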

  13. Molecular Monte Carlo Simulations Using Graphics Processing Units: To Waste Recycle or Not?

    PubMed

    Kim, Jihan; Rodgers, Jocelyn M; Athènes, Manuel; Smit, Berend

    2011-10-11

    In the waste recycling Monte Carlo (WRMC) algorithm, (1) multiple trial states may be simultaneously generated and utilized during Monte Carlo moves to improve the statistical accuracy of the simulations, suggesting that such an algorithm may be well suited for implementation in parallel on graphics processing units (GPUs). In this paper, we implement two waste recycling Monte Carlo algorithms in CUDA (Compute Unified Device Architecture) using uniformly distributed random trial states and trial states based on displacement random-walk steps, and we test the methods on a methane-zeolite MFI framework system to evaluate their utility. We discuss the specific implementation details of the waste recycling GPU algorithm and compare the methods to other parallel algorithms optimized for the framework system. We analyze the relationship between the statistical accuracy of our simulations and the CUDA block size to determine the efficient allocation of the GPU hardware resources. We make comparisons between the GPU and the serial CPU Monte Carlo implementations to assess speedup over conventional microprocessors. Finally, we apply our optimized GPU algorithms to the important problem of determining free energy landscapes, in this case for molecular motion through the zeolite LTA.
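
    A serial toy of the waste-recycling idea with uniform trial states, under stated assumptions (a 1-D double-well Boltzmann target; candidate selection proportional to the Boltzmann weights): all trial states enter the estimator with their normalized weights instead of only the accepted one. This is an illustration of the principle, not the paper's GPU implementation.

        import numpy as np

        rng = np.random.default_rng(16)

        beta = 2.0
        U = lambda x: (x ** 2 - 1.0) ** 2    # 1-D double-well potential

        def wrmc_mean(n_steps=20000, K=16):
            # Each step draws K uniform trial states on [-2, 2]; all of
            # them (plus the current state) contribute to the estimate of
            # <x^2> with normalized Boltzmann weights ("waste recycling"),
            # and the next state is selected among them with those weights.
            x = 0.0
            num = den = 0.0
            for _ in range(n_steps):
                cand = np.append(rng.uniform(-2, 2, K), x)
                w = np.exp(-beta * U(cand))
                p = w / w.sum()
                num += (p * cand ** 2).sum()   # waste-recycled accumulation
                den += 1.0
                x = rng.choice(cand, p=p)      # move to one candidate
            return num / den

        print(wrmc_mean())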

  14. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    PubMed Central

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2018-01-01

    The goal of this study is to develop a generalized source model (GSM) for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and of the dose profiles along the lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology. PMID:28079526
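
    A toy unfolding sketch in the spirit of the spectrum derivation, with non-negative least squares standing in for the Levenberg-Marquardt fit and a hypothetical exponential depth-dose basis in place of Monte Carlo-generated mono-energetic curves; all numbers are illustrative.

        import numpy as np
        from scipy.optimize import nnls

        # toy mono-energetic depth-dose basis: exponential attenuation in water
        depth = np.linspace(0.5, 15.0, 30)                 # cm
        mu = np.array([0.25, 0.21, 0.18, 0.15, 0.12])      # 1/cm per energy bin
        basis = np.exp(-np.outer(mu, depth))               # (n_bins, n_depths)

        # synthetic "measured" PDD from assumed spectrum weights, plus noise
        true_w = np.array([0.1, 0.3, 0.35, 0.2, 0.05])
        pdd_meas = true_w @ basis
        pdd_meas *= 1 + 0.005 * np.random.default_rng(12).standard_normal(depth.size)

        # non-negative least squares recovers the spectrum weights
        w_fit, _ = nnls(basis.T, pdd_meas)
        print(np.round(w_fit / w_fit.sum(), 3))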

  15. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy.

    PubMed

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-07

    The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implant in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm³ calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that makes it possible to envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.

  16. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy

    NASA Astrophysics Data System (ADS)

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-01

    The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implant in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm³ calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that makes it possible to envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.

  17. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    NASA Astrophysics Data System (ADS)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and of the dose profiles along the lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.

  18. Geodesic Monte Carlo on Embedded Manifolds

    PubMed Central

    Byrne, Simon; Girolami, Mark

    2013-01-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024
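
    A minimal sketch of a geodesic proposal on the unit hypersphere, where great circles are the geodesics; the step size and dimension are illustrative, and the full samplers in the paper of course add the target-dependent accept/reject machinery.

        import numpy as np

        rng = np.random.default_rng(4)

        def geodesic_step(x, step=0.3):
            # One geodesic move on the unit hypersphere: draw a tangent
            # direction at x and follow the great circle for arc length `step`.
            v = rng.standard_normal(x.size)
            v -= (v @ x) * x                  # project into the tangent space at x
            v /= np.linalg.norm(v)
            x_new = np.cos(step) * x + np.sin(step) * v
            return x_new / np.linalg.norm(x_new)   # guard against round-off drift

        x = np.array([0.0, 0.0, 1.0])
        for _ in range(5):
            x = geodesic_step(x)
        print(x, np.linalg.norm(x))           # stays on the sphere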

  19. Neoclassical toroidal viscosity calculations in tokamaks using a δf Monte Carlo simulation and their verifications.

    PubMed

    Satake, S; Park, J-K; Sugama, H; Kanno, R

    2011-07-29

    Neoclassical toroidal viscosities (NTVs) in tokamaks are investigated using a δf Monte Carlo simulation, and are successfully verified against a combined analytic theory over a wide range of collisionality. A Monte Carlo simulation has been required in the study of NTV since the complexities in the guiding-center orbits of particles and their collisions cannot be fully investigated by analytic theories alone. The results yielded the details of the complex NTV dependency on particle precessions and collisions, which were predicted only roughly by the combined analytic theory. Both numerical and analytic methods can be utilized and extended based on these successful verifications.

  20. Application of Multi-Hypothesis Sequential Monte Carlo for Breakup Analysis

    NASA Astrophysics Data System (ADS)

    Faber, W. R.; Zaidi, W.; Hussein, I. I.; Roscoe, C. W. T.; Wilkins, M. P.; Schumacher, P. W., Jr.

    As more objects are launched into space, the potential for breakup events and space object collisions is ever increasing. These events create large clouds of debris that are extremely hazardous to space operations. Providing timely, accurate, and statistically meaningful Space Situational Awareness (SSA) data is crucial in order to protect assets and operations in space. The space object tracking problem, in general, is nonlinear in both state dynamics and observations, making it ill-suited to linear filtering techniques such as the Kalman filter. Additionally, given the multi-object, multi-scenario nature of the problem, space situational awareness requires multi-hypothesis tracking and management that is combinatorially challenging in nature. In practice, it is often seen that assumptions of underlying linearity and/or Gaussianity are used to provide tractable solutions to the multiple space object tracking problem. However, these assumptions are, at times, detrimental to tracking data and provide statistically inconsistent solutions. This paper details a tractable solution to the multiple space object tracking problem applicable to space object breakup events. Within this solution, simplifying assumptions of the underlying probability density function are relaxed and heuristic methods for hypothesis management are avoided. This is done by implementing Sequential Monte Carlo (SMC) methods for both nonlinear filtering as well as hypothesis management. The goal of this paper is to detail the solution and use it as a platform to discuss computational limitations that hinder proper analysis of large breakup events.

  1. Multilevel Sequential² Monte Carlo for Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Latz, Jonas; Papaioannou, Iason; Ullmann, Elisabeth

    2018-09-01

    The identification of parameters in mathematical models using noisy observations is a common task in uncertainty quantification. We employ the framework of Bayesian inversion: we combine monitoring and observational data with prior information to estimate the posterior distribution of a parameter. Specifically, we are interested in the distribution of a diffusion coefficient of an elliptic PDE. In this setting, the sample space is high-dimensional, and each sample of the PDE solution is expensive. To address these issues we propose and analyse a novel Sequential Monte Carlo (SMC) sampler for the approximation of the posterior distribution. Classical, single-level SMC constructs a sequence of measures, starting with the prior distribution, and finishing with the posterior distribution. The intermediate measures arise from a tempering of the likelihood, or, equivalently, a rescaling of the noise. The resolution of the PDE discretisation is fixed. In contrast, our estimator employs a hierarchy of PDE discretisations to decrease the computational cost. We construct a sequence of intermediate measures by decreasing the temperature or by increasing the discretisation level at the same time. This idea builds on and generalises the multi-resolution sampler proposed in P.S. Koutsourelakis (2009) [33] where a bridging scheme is used to transfer samples from coarse to fine discretisation levels. Importantly, our choice between tempering and bridging is fully adaptive. We present numerical experiments in 2D space, comparing our estimator to single-level SMC and the multi-resolution sampler.
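
    A sketch of the adaptive tempering choice, assuming the common rule of picking the next inverse temperature so that the relative effective sample size of the incremental weights hits a target; the paper's adaptivity also chooses between tempering and bridging, which is not shown here.

        import numpy as np

        def next_temperature(log_lik, beta, ess_target=0.5):
            # Find beta' > beta so that the incremental weights
            # w_i = exp((beta' - beta) * log_lik_i) have relative ESS equal
            # to ess_target; bisection on delta = beta' - beta.
            n = log_lik.size

            def rel_ess(delta):
                logw = delta * log_lik
                logw -= logw.max()
                w = np.exp(logw)
                return (w.sum() ** 2) / (n * (w ** 2).sum())

            lo_d, hi_d = 0.0, 1.0 - beta
            if rel_ess(hi_d) >= ess_target:    # can jump straight to beta = 1
                return 1.0
            for _ in range(50):
                mid = 0.5 * (lo_d + hi_d)
                if rel_ess(mid) > ess_target:
                    lo_d = mid
                else:
                    hi_d = mid
            return beta + lo_d

        rng = np.random.default_rng(5)
        print(next_temperature(rng.normal(-50, 5, size=1000), beta=0.0))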

  2. Depletion Calculations Based on Perturbations. Application to the Study of a Rep-Like Assembly at Beginning of Cycle with TRIPOLI-4®.

    NASA Astrophysics Data System (ADS)

    Dieudonne, Cyril; Dumonteil, Eric; Malvagi, Fausto; M'Backé Diop, Cheikh

    2014-06-01

    For several years, Monte Carlo burnup/depletion codes have appeared which couple Monte Carlo codes, to simulate the neutron transport, with deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way makes it possible to track fine 3-dimensional effects and to avoid the multi-group approximations made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver called at each time step. In this paper we present a methodology to avoid the repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: indeed, the different burnup steps may be seen as perturbations of the isotopic concentration of an initial Monte Carlo simulation. First, we present this method and provide details on the perturbative technique used, namely correlated sampling. Then, the implementation of this method in the TRIPOLI-4® code is discussed, as well as the precise calculation scheme able to bring an important speed-up of the depletion calculation. Finally, this technique is used to calculate the depletion of a REP-like assembly, studied at the beginning of its cycle. After validating the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude.
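
    A one-dimensional illustration of correlated sampling, reusing flight paths sampled under one total cross-section to estimate a quantity under a perturbed cross-section via likelihood-ratio weights; the toy numbers are illustrative and unrelated to TRIPOLI-4®.

        import numpy as np

        rng = np.random.default_rng(14)

        # reuse one set of flight paths, reweight for a perturbed total
        # cross-section (e.g. a density change between burnup steps)
        sigma0, sigma1 = 1.0, 1.1    # original / perturbed macroscopic XS
        s = rng.exponential(1 / sigma0, size=100000)    # paths sampled at sigma0
        # likelihood ratio pdf1/pdf0 for an exponential flight length
        w = (sigma1 / sigma0) * np.exp(-(sigma1 - sigma0) * s)

        # probability of a flight longer than depth d, under both cross-sections,
        # estimated from the *same* sample (analytic values for comparison)
        d = 0.5
        p0 = (s > d).mean()
        p1 = ((s > d) * w).mean()
        print(p0, np.exp(-sigma0 * d), p1, np.exp(-sigma1 * d))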

  3. Optimization of the Monte Carlo code for modeling of photon migration in tissue.

    PubMed

    Zołek, Norbert S; Liebert, Adam; Maniewski, Roman

    2006-10-01

    The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, allowing the analysis of complicated geometrical structures. Monte Carlo simulations are, however, time consuming because of the necessity to track the paths of individual photons. The time-consuming computation is mainly associated with the calculation of the logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximation of the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of the Monte Carlo simulations obtained with an exact computation of the logarithmic and trigonometric functions as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing for further acceleration.
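
    An illustrative stand-in for such an approximation: the photon step length s = -ln(ξ)/μ_t computed with a cheap mantissa-exponent logarithm whose error is well below the 1% quoted above. This is not the authors' exact fit, only a sketch of the idea.

        import math
        import random

        LN2 = 0.6931471805599453

        def fast_log(x):
            # Cheap log(x) for x in (0, 1]: split off the binary exponent,
            # then a short odd polynomial (atanh form) for the mantissa.
            # Relative error on the mantissa term stays well under 0.1%.
            m, e = math.frexp(x)              # x = m * 2**e, m in [0.5, 1)
            t = (m - 1.0) / (m + 1.0)
            t2 = t * t
            log_m = 2.0 * t * (1.0 + t2 * (1.0 / 3.0 + t2 / 5.0))
            return log_m + e * LN2

        # photon free-path sampling: s = -log(xi) / mu_t
        mu_t = 10.0                           # 1/mm, typical for tissue in NIR
        xi = 1.0 - random.random()            # uniform on (0, 1]
        print(-fast_log(xi) / mu_t, -math.log(xi) / mu_t)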

  4. Study the sensitivity of dose calculation in prism treatment planning system using Monte Carlo simulation of 6 MeV electron beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardiansyah, D.; Haryanto, F.; Male, S.

    2014-09-30

    Prism is a non-commercial radiotherapy treatment planning system (RTPS) developed by Ira J. Kalet from Washington University. An inhomogeneity factor is included in the Prism TPS dose calculation. The aim of this study is to investigate the sensitivity of the dose calculation in Prism using Monte Carlo simulation. A phase-space source from the head of the linear accelerator (LINAC) is implemented for the Monte Carlo simulation. To achieve this aim, the Prism dose calculation is compared with an EGSnrc Monte Carlo simulation. Percentage depth dose (PDD) and R50 from both calculations are observed. BEAMnrc simulated the electron transport in the LINAC head and produced a phase-space file. This file is used as DOSXYZnrc input to simulate electron transport in the phantom. The study started with a commissioning process in a water phantom, in which the Monte Carlo simulation was adjusted to match the Prism RTPS. The commissioning result was then used for the study of inhomogeneous phantoms. The physical parameters of the inhomogeneous phantom varied in this study are: density, location and thickness of the tissue. The commissioning results show that the optimum energy of the Monte Carlo simulation for the 6 MeV electron beam is 6.8 MeV. This commissioning used R50 and PDD with practical range (R_p) as references. From the inhomogeneity study, the average deviation for all cases in the region of interest is below 5%. Based on ICRU recommendations, Prism has a good ability to calculate the radiation dose in inhomogeneous tissue.

  5. EDITORIAL: Special section: Selected papers from the Third European Workshop on Monte Carlo Treatment Planning (MCTP2012) Special section: Selected papers from the Third European Workshop on Monte Carlo Treatment Planning (MCTP2012)

    NASA Astrophysics Data System (ADS)

    Spezi, Emiliano; Leal, Antonio

    2013-04-01

    The Third European Workshop on Monte Carlo Treatment Planning (MCTP2012) was held from 15-18 May, 2012 in Seville, Spain. The event was organized by the Universidad de Sevilla with the support of the European Workgroup on Monte Carlo Treatment Planning (EWG-MCTP). MCTP2012 followed two successful meetings, one held in Ghent (Belgium) in 2006 (Reynaert 2007) and one in Cardiff (UK) in 2009 (Spezi 2010). The recurrence of these workshops, together with successful events held in parallel by McGill University in Montreal (Seuntjens et al 2012), shows consolidated interest from the scientific community in Monte Carlo (MC) treatment planning. The workshop was attended by a total of 90 participants, mainly coming from a medical physics background. A total of 48 oral presentations and 15 posters were delivered in specific scientific sessions including dosimetry, code development, imaging, modelling of photon and electron radiation transport, external beam radiation therapy, nuclear medicine, brachytherapy and hadrontherapy. A copy of the programme is available on the workshop's website (www.mctp2012.com). In this special section of Physics in Medicine and Biology we report six papers that were selected following the journal's rigorous peer review procedure. These papers provide a good cross section of the areas of application of MC in treatment planning that were discussed at MCTP2012. Czarnecki and Zink (2013) and Wagner et al (2013) present the results of their work in small field dosimetry. Czarnecki and Zink (2013) studied field size and detector dependent correction factors for diodes and ion chambers within a clinical 6 MV photon beam generated by a Siemens linear accelerator. Their modelling work based on the BEAMnrc/EGSnrc codes and experimental measurements revealed that unshielded diodes were the best choice for small field dosimetry because of their independence from the electron beam spot size and a correction factor close to unity. Wagner et al (2013) investigated the recombination effect on liquid ionization chambers for stereotactic radiotherapy, a field of increasing importance in external beam radiotherapy. They modelled both radiation source (Cyberknife unit) and detector with the BEAMnrc/EGSnrc codes and quantified the dependence of the response of this type of detector on factors such as the volume effect and the electrode. They also recommended that these dependences be accounted for in measurements involving small fields. In the field of external beam radiotherapy, Chakarova et al (2013) showed how total body irradiation (TBI) could be improved by simulating patient treatments with MC. In particular, BEAMnrc/EGSnrc based simulations highlighted the importance of optimizing individual compensators for TBI treatments. In the same area of application, Mairani et al (2013) reported on a new tool for treatment planning in proton therapy based on the FLUKA MC code. The software, used to model both proton therapy beam and patient anatomy, supports single-field and multiple-field optimization and can be used to optimize physical and relative biological effectiveness (RBE)-weighted dose distribution, using both constant and variable RBE models. In the field of nuclear medicine Marcatili et al (2013) presented RAYDOSE, a Geant4-based code specifically developed for applications in molecular radiotherapy (MRT). 
RAYDOSE has been designed to work in MRT trials using sequential positron emission tomography (PET) or single-photon emission computed tomography (SPECT) imaging to model patient specific time-dependent metabolic uptake and to calculate the total 3D dose distribution. The code was validated through experimental measurements in homogeneous and heterogeneous phantoms. Finally, in the field of code development Miras et al (2013) reported on CloudMC, a Windows Azure-based application for the parallelization of MC calculations in a dynamic cluster environment. Although the performance of CloudMC has been tested with the PENELOPE MC code, the authors report that the software has been designed in a way that it should be independent of the type of MC code, provided that the simulation meets a number of operational criteria. We wish to thank Elekta/CMS Inc., the University of Seville, the Junta of Andalusia and the European Regional Development Fund for their financial support. We would like also to acknowledge the members of EWG-MCTP for their help in peer-reviewing all the abstracts, and all the invited speakers who kindly agreed to deliver keynote presentations in their area of expertise. A final word of thanks to our colleagues who worked on the reviewing process of the papers selected for this special section and to the IOP Publishing staff who made it possible. MCTP2012 was accredited by the European Federation of Organisations for Medical Physics as a CPD event for medical physicists. Emiliano Spezi and Antonio Leal Guest Editors References Chakarova R, Müntzing K, Krantz M, Hedin E and Hertzman S 2013 Monte Carlo optimization of total body irradiation in a phantom and patient geometry Phys. Med. Biol. 58 2461-69 Czarnecki D and Zink K 2013 Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields Phys. Med. Biol. 58 2431-44 Mairani A, Böhlen T T, Schiavi A, Tessonnier T, Molinelli S, Brons S, Battistoni G, Parodi K and Patera V 2013 A Monte Carlo-based treatment planning tool for proton therapy Phys. Med. Biol. 58 2471-90 Marcatili S, Pettinato C, Daniels S, Lewis G, Edwards P, Fanti S and Spezi E 2013 Development and validation of RAYDOSE: a Geant4 based application for molecular radiotherapy Phys. Med. Biol. 58 2491-508 Miras H, Jiménez R, Miras C and Gomà C 2013 CloudMC: A cloud computing application for Monte Carlo simulation Phys. Med. Biol. 58 N125-33 Reynaert N 2007 First European Workshop on Monte Carlo Treatment Planning J. Phys.: Conf. Ser. 74 011001 Seuntjens J, Beaulieu L, El Naqa I and Després P 2012 Special section: Selected papers from the Fourth International Workshop on Recent Advances in Monte Carlo Techniques for Radiation Therapy Phys. Med. Biol. 57 (11) E01 Spezi E 2010 Special section: Selected papers from the Second European Workshop on Monte Carlo Treatment Planning (MCTP2009) Phys. Med. Biol. 55 (16) E01 Wagner A, Crop F, Lacornerie T, Vandevelde F and Reynaert N 2013 Use of a liquid ionization chamber for stereotactic radiotherapy dosimetry Phys. Med. Biol. 58 2445-59

  6. Recent advances and future prospects for Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B

    2010-01-01

    The history of Monte Carlo methods is closely linked to that of computers: the first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.

  7. PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iandola, F N; O'Brien, M J; Procassini, R J

    2010-11-29

    Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.

  8. APS undulator and wiggler sources: Monte-Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, S.L.; Lai, B.; Viccaro, P.J.

    1992-02-01

    Standard insertion devices will be provided to each sector by the Advanced Photon Source. It is important to define the radiation characteristics of these general-purpose devices. In this document, results of Monte Carlo simulation are presented. These results, based on the SHADOW program, include the APS Undulator A (UA), Wiggler A (WA), and Wiggler B (WB).

  9. FluorWPS: A Monte Carlo ray-tracing model to compute sun-induced chlorophyll fluorescence of three-dimensional canopy

    USDA-ARS?s Scientific Manuscript database

    A model to simulate radiative transfer (RT) of sun-induced chlorophyll fluorescence (SIF) of three-dimensional (3-D) canopy, FluorWPS, was proposed and evaluated. The inclusion of fluorescence excitation was implemented with the ‘weight reduction’ and ‘photon spread’ concepts based on Monte Carlo ra...

  10. Vertical drying of a suspension of sticks: Monte Carlo simulation for continuous two-dimensional problem

    NASA Astrophysics Data System (ADS)

    Lebovka, Nikolai I.; Tarasevich, Yuri Yu.; Vygornitskii, Nikolai V.

    2018-02-01

    The vertical drying of a two-dimensional colloidal film containing zero-thickness sticks (lines) was studied by means of kinetic Monte Carlo (MC) simulations. The continuous two-dimensional problem for both the positions and orientations was considered. The initial state before drying was produced using a model of random sequential adsorption with isotropic orientations of the sticks. During the evaporation, an upper interface falls with a linear velocity in the vertical direction, and the sticks undergo translational and rotational Brownian motions. The MC simulations were run at different initial number concentrations (the numbers of sticks per unit area), p_i, and solvent evaporation rates, u. For completely dried films, the spatial distributions of the sticks, the order parameters, and the electrical conductivities of the films in both the horizontal, x, and vertical, y, directions were examined. Significant evaporation-driven self-assembly and stratification of the sticks in the vertical direction was observed. The extent of stratification increased with increasing values of u. The anisotropy of the electrical conductivity of the film can be finely regulated by changes in the values of p_i and u.
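
    A minimal sketch of the initial-state generation by random sequential adsorption of zero-thickness sticks, under stated assumptions: unit square, isotropic orientations, intersecting placements rejected, boundary effects ignored.

        import numpy as np

        rng = np.random.default_rng(13)

        def segments_cross(p1, p2, q1, q2):
            # proper intersection test for segments p1-p2 and q1-q2
            def orient(a, b, c):
                return np.sign((b[0] - a[0]) * (c[1] - a[1])
                               - (b[1] - a[1]) * (c[0] - a[0]))
            return (orient(p1, p2, q1) != orient(p1, p2, q2) and
                    orient(q1, q2, p1) != orient(q1, q2, p2))

        def rsa_sticks(n_target, length=0.1, max_tries=200000):
            # random sequential adsorption: place sticks one by one,
            # rejecting any placement that intersects an existing stick
            sticks = []
            tries = 0
            while len(sticks) < n_target and tries < max_tries:
                tries += 1
                c = rng.random(2)
                ang = rng.uniform(0, np.pi)
                h = 0.5 * length * np.array([np.cos(ang), np.sin(ang)])
                a, b = c - h, c + h
                if all(not segments_cross(a, b, s[0], s[1]) for s in sticks):
                    sticks.append((a, b))
            return sticks

        print(len(rsa_sticks(300)))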

  11. Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods.

    PubMed

    Hoak, Anthony; Medeiros, Henry; Povinelli, Richard J

    2017-03-03

    We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter.

  12. Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods

    PubMed Central

    Hoak, Anthony; Medeiros, Henry; Povinelli, Richard J.

    2017-01-01

    We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter. PMID:28273796

  13. A Monte Carlo simulation study for the gamma-ray/neutron dual-particle imager using rotational modulation collimator (RMC).

    PubMed

    Kim, Hyun Suk; Choi, Hong Yeop; Lee, Gyemin; Ye, Sung-Joon; Smith, Martin B; Kim, Geehyun

    2018-03-01

    The aim of this work is to develop a gamma-ray/neutron dual-particle imager, based on rotational modulation collimators (RMCs) and pulse shape discrimination (PSD)-capable scintillators, for possible applications in radioactivity monitoring as well as nuclear security and safeguards. A Monte Carlo simulation study was performed to design an RMC system for dual-particle imaging, and modulation patterns were obtained for gamma-ray and neutron sources in various configurations. We applied an image reconstruction algorithm utilizing the maximum-likelihood expectation-maximization method, based on analytical modeling of the source-detector configurations, to the Monte Carlo simulation results. Both gamma-ray and neutron source distributions were reconstructed and evaluated in terms of signal-to-noise ratio, showing the viability of developing an RMC-based gamma-ray/neutron dual-particle imager using PSD-capable scintillators.
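
    A toy version of the maximum-likelihood expectation-maximization (MLEM) update used for such reconstructions, with a random matrix standing in for the RMC modulation response; the geometry and counts are illustrative.

        import numpy as np

        rng = np.random.default_rng(6)

        # toy system matrix (rows: time bins of the modulation pattern,
        # cols: sky pixels) standing in for the analytical RMC model
        A = rng.random((64, 25))
        truth = np.zeros(25); truth[7] = 100.0; truth[18] = 60.0
        y = rng.poisson(A @ truth)            # measured, modulated counts

        lam = np.ones(25)                     # flat initial source image
        sens = A.sum(axis=0)                  # sensitivity A^T 1
        for _ in range(200):                  # MLEM fixed-point iterations
            lam *= (A.T @ (y / np.maximum(A @ lam, 1e-12))) / sens
        print(np.round(lam).astype(int))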

  14. Monte Carlo Modeling of the Initial Radiation Emitted by a Nuclear Device in the National Capital Region

    DTIC Science & Technology

    2013-07-01

    Data was derived from calculations using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle). MCNP is a general-purpose code designed to simulate neutron and photon transport.

  15. Fixed-node quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Anderson, James B.

    Quantum Monte Carlo methods cannot at present provide exact solutions of the Schrödinger equation for systems with more than a few electrons. But, quantum Monte Carlo calculations can provide very low energy, highly accurate solutions for many systems ranging up to several hundred electrons. These systems include atoms such as Be and Fe, molecules such as H2O, CH4, and HF, and condensed materials such as solid N2 and solid silicon. The quantum Monte Carlo predictions of their energies and structures may not be `exact', but they are the best available. Most of the Monte Carlo calculations for these systems have been carried out using approximately correct fixed nodal hypersurfaces and they have come to be known as `fixed-node quantum Monte Carlo' calculations. In this paper we review these `fixed node' calculations and the accuracies they yield.

  16. Vectorized Monte Carlo methods for reactor lattice analysis

    NASA Technical Reports Server (NTRS)

    Brown, F. B.

    1984-01-01

    Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

  17. Data decomposition of Monte Carlo particle transport simulations via tally servers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit

    An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.

  18. Multi-fidelity methods for uncertainty quantification in transport problems

    NASA Astrophysics Data System (ADS)

    Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.

    2016-12-01

    We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, a re-scaled Multi Level Monte Carlo (rMLMC) method. The rMLMC is based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods and discuss advantages of each approach.
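
    A compact sketch of the plain MLMC telescoping estimator that such methods build on, E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}], with a synthetic level-dependent quantity of interest standing in for a flow solve; the levels and sample sizes are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)

        def q(theta, h):
            # toy level-dependent quantity of interest: a limit value plus
            # an O(h) discretisation bias (stands in for a PDE solve)
            return np.sin(theta) + h * np.cos(3 * theta)

        def mlmc(levels=(0.2, 0.1, 0.05), n=(40000, 10000, 2500)):
            # coarsest level: plain Monte Carlo of Q_0
            th = rng.standard_normal(n[0])
            est = q(th, levels[0]).mean()
            # corrections: E[Q_l - Q_{l-1}] with coupled samples per level
            for l in range(1, len(levels)):
                th = rng.standard_normal(n[l])
                est += (q(th, levels[l]) - q(th, levels[l - 1])).mean()
            return est

        print(mlmc())   # approximates E[sin(theta)] up to the finest-level bias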

  19. Prompt Radiation Protection Factors

    DTIC Science & Technology

    2018-02-01

    Simulation of the prompt radiation was performed using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle), together with the evaluation of protection factors (the ratio of dose in the open to the dose received in a protected location). Concerns about the detonation of a nuclear device have placed renewed emphasis on evaluation of the consequences of such an event.

  20. The Equity Indexed Annuity: A Monte Carlo Forensic Investigation into a Controversial Financial Product

    ERIC Educational Resources Information Center

    Carver, Andrew B.

    2013-01-01

    Equity Indexed Annuities (EIAs) are controversial financial products because the payoffs to investors are based on formulas that are supposedly too complex for average investors to understand. This brief describes how Monte Carlo simulation can provide insight into the true risk and return of an EIA. This approach can be used as a project…
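
    A sketch of how such a simulation might look, assuming a common annual point-to-point crediting formula (participation rate, cap, floor) and a geometric Brownian motion index; all contract terms and market parameters below are illustrative assumptions, not data from the brief.

        import numpy as np

        rng = np.random.default_rng(8)

        def eia_return(years=7, mu=0.07, sigma=0.18,
                       participation=0.8, cap=0.10, floor=0.0, n_paths=100000):
            # annual point-to-point EIA credit:
            #   credit = clip(participation * index return, floor, cap)
            z = rng.standard_normal((n_paths, years))
            idx_ret = np.exp((mu - 0.5 * sigma ** 2) + sigma * z) - 1.0
            credits = np.clip(participation * idx_ret, floor, cap)
            growth = np.prod(1.0 + credits, axis=1)
            return growth.mean(), growth.std()

        mean, sd = eia_return()
        print(f"mean 7-yr growth {mean:.3f}, sd {sd:.3f}")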

  1. Exact and Monte Carlo resampling procedures for the Wilcoxon-Mann-Whitney and Kruskal-Wallis tests.

    PubMed

    Berry, K J; Mielke, P W

    2000-12-01

    Exact and Monte Carlo resampling FORTRAN programs are described for the Wilcoxon-Mann-Whitney rank sum test and the Kruskal-Wallis one-way analysis of variance for ranks test. The program algorithms compensate for tied values and do not depend on asymptotic approximations for probability values, unlike most algorithms contained in PC-based statistical software packages.
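
    A Python stand-in for the Monte Carlo resampling version of the rank-sum test: midranks compensate for ties and no asymptotic approximation is used. This illustrates the principle only; the FORTRAN programs themselves are not reproduced.

        import numpy as np
        from scipy.stats import rankdata

        rng = np.random.default_rng(9)

        def mc_rank_sum_p(x, y, n_resamples=100000):
            # two-sided Monte Carlo permutation p-value for the
            # Wilcoxon-Mann-Whitney rank-sum statistic
            pooled = np.concatenate([x, y])
            ranks = rankdata(pooled)          # midranks for tied values
            n_x = len(x)
            obs = ranks[:n_x].sum()
            mu = ranks.mean() * n_x           # permutation-null mean
            count = 0
            for _ in range(n_resamples):
                perm = rng.permutation(ranks)
                if abs(perm[:n_x].sum() - mu) >= abs(obs - mu):
                    count += 1
            return (count + 1) / (n_resamples + 1)

        x = np.array([1.1, 2.4, 2.4, 3.0, 5.5])
        y = np.array([0.6, 1.0, 1.9, 2.4, 2.8, 3.2])
        print(mc_rank_sum_p(x, y, 20000))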

  2. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badal, Andreu; Badano, Aldo

    Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  3. Bayesian approach to inverse statistical mechanics.

    PubMed

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  4. Bayesian approach to inverse statistical mechanics

    NASA Astrophysics Data System (ADS)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  5. Error in telemetry studies: Effects of animal movement on triangulation

    USGS Publications Warehouse

    Schmutz, Joel A.; White, Gary C.

    1990-01-01

    We used Monte Carlo simulations to investigate the effects of animal movement on error of estimated animal locations derived from radio-telemetry triangulation of sequentially obtained bearings. Simulated movements of 0-534 m resulted in up to 10-fold increases in average location error but <10% decreases in location precision when observer-to-animal distances were <1,000 m. Location error and precision were minimally affected by censorship of poor locations with Chi-square goodness-of-fit tests. Location error caused by animal movement can only be eliminated by taking simultaneous bearings.
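
    A minimal sketch of the simulation idea: the animal moves between two sequentially taken (here error-free) bearings, and the triangulated fix is compared with the true location. The geometry, distances, and observer placement below are illustrative, not the study's design.

        import numpy as np

        rng = np.random.default_rng(10)

        def intersect(p1, d1, p2, d2):
            # intersection of lines p1 + t*d1 and p2 + s*d2 (2-D)
            A = np.column_stack([d1, -d2])
            t, _ = np.linalg.solve(A, p2 - p1)
            return p1 + t * d1

        def location_error(move=200.0, obs_dist=800.0, n=5000):
            # average triangulation error when the animal moves `move`
            # metres between the two sequentially taken bearings
            errs = np.empty(n)
            for k in range(n):
                animal1 = np.array([0.0, 0.0])
                step = rng.uniform(0, 2 * np.pi)
                animal2 = animal1 + move * np.array([np.cos(step), np.sin(step)])
                o1 = np.array([-obs_dist, -obs_dist / 2])   # observer stations
                o2 = np.array([obs_dist, -obs_dist / 2])
                b1 = (animal1 - o1) / np.linalg.norm(animal1 - o1)  # 1st bearing
                b2 = (animal2 - o2) / np.linalg.norm(animal2 - o2)  # 2nd bearing
                est = intersect(o1, b1, o2, b2)
                errs[k] = np.linalg.norm(est - animal2)
            return errs.mean()

        print(location_error())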

  6. Validation of the Monte Carlo simulator GATE for indium-111 imaging.

    PubMed

    Assié, K; Gardin, I; Véra, P; Buvat, I

    2005-07-07

    Monte Carlo simulations are useful for optimizing and assessing single photon emission computed tomography (SPECT) protocols, especially when aiming at measuring quantitative parameters from SPECT images. Before Monte Carlo simulated data can be trusted, the simulation model must be validated. The purpose of this work was to validate the use of GATE, a new Monte Carlo simulation platform based on GEANT4, for modelling indium-111 SPECT data, the quantification of which is of foremost importance for dosimetric studies. To that end, acquisitions of (111)In line sources in air and in water and of a cylindrical phantom were performed, together with the corresponding simulations. The simulation model included Monte Carlo modelling of the camera collimator and of a back-compartment accounting for photomultiplier tubes and associated electronics. Energy spectra, spatial resolution, sensitivity values, images and count profiles obtained for experimental and simulated data were compared. An excellent agreement was found between experimental and simulated energy spectra. For source-to-collimator distances varying from 0 to 20 cm, simulated and experimental spatial resolution differed by less than 2% in air, while the simulated sensitivity values were within 4% of the experimental values. The simulation of the cylindrical phantom closely reproduced the experimental data. These results suggest that GATE enables accurate simulation of (111)In SPECT acquisitions.

  7. Monte Carlo modeling of spatial coherence: free-space diffraction

    PubMed Central

    Fischer, David G.; Prahl, Scott A.; Duncan, Donald D.

    2008-01-01

    We present a Monte Carlo method for propagating partially coherent fields through complex deterministic optical systems. A Gaussian copula is used to synthesize a random source with an arbitrary spatial coherence function. Physical optics and Monte Carlo predictions of the first- and second-order statistics of the field are shown for coherent and partially coherent sources for free-space propagation, imaging using a binary Fresnel zone plate, and propagation through a limiting aperture. Excellent agreement between the physical optics and Monte Carlo predictions is demonstrated in all cases. Convergence criteria are presented for judging the quality of the Monte Carlo predictions. PMID:18830335
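
    A sketch of the synthesis idea for a one-dimensional aperture, under stated assumptions: draw correlated circular-Gaussian realizations whose ensemble average reproduces a target Gaussian coherence function via a Cholesky construction. The paper's Gaussian copula additionally handles arbitrary marginals, which is not shown here.

        import numpy as np

        rng = np.random.default_rng(15)

        # target spatial coherence: Gaussian, width lc, over a 1-D aperture
        x = np.linspace(-1, 1, 128)
        lc = 0.3
        J = np.exp(-np.subtract.outer(x, x) ** 2 / (2 * lc ** 2))

        # correlated circular-Gaussian field realizations with coherence J
        L = np.linalg.cholesky(J + 1e-8 * np.eye(x.size))
        def realization():
            z = (rng.standard_normal(x.size)
                 + 1j * rng.standard_normal(x.size)) / np.sqrt(2)
            return L @ z

        # Monte Carlo estimate of the mutual coherence recovers the target
        fields = np.array([realization() for _ in range(4000)])
        J_mc = (fields.conj().T @ fields) / fields.shape[0]
        print(np.abs(J_mc[64, 70]), J[64, 70])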

  8. Visual improvement for bad handwriting based on Monte-Carlo method

    NASA Astrophysics Data System (ADS)

    Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua

    2014-03-01

    A visual improvement algorithm based on Monte Carlo simulation is proposed in this paper, in order to enhance the visual effect of bad handwriting. The whole improvement process uses a well-designed typeface to optimize the bad handwriting image. In this process, a series of linear operators for image transformation are defined to transform the typeface image so that it approaches the handwriting image, and the specific parameters of the linear operators are estimated by the Monte Carlo method. Visual improvement experiments illustrate that the proposed algorithm can effectively enhance the visual effect of a handwriting image while maintaining the original handwriting features, such as tilt, stroke order and drawing direction. The proposed visual improvement algorithm has great potential to be applied in tablet computers and the mobile Internet, in order to improve the user experience of handwriting.

  9. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human livelihoods in financial, environmental and security terms. The data of annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research has shown that the MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov chain Monte Carlo (MCMC) based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference method that estimates parameters through the posterior distribution given by Bayes' theorem. The Metropolis-Hastings algorithm is used to cope with the high-dimensional state space faced by plain Monte Carlo methods. This approach also accounts for more of the uncertainty in parameter estimation, which yields better predictions of maximum river flow in Sabah.
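
    A minimal random-walk Metropolis-Hastings sketch for GEV parameters with flat priors on (shape, location, log scale); the data are synthetic, the proposal scales are illustrative, and scipy's shape parameter c is the negative of the usual GEV shape xi.

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(11)

        data = genextreme.rvs(c=-0.1, loc=100.0, scale=20.0, size=50,
                              random_state=rng)   # synthetic annual maxima

        def log_post(theta):
            # flat-prior log posterior; theta = (c, loc, log_scale)
            c, loc, log_scale = theta
            return genextreme.logpdf(data, c, loc, np.exp(log_scale)).sum()

        theta = np.array([0.0, data.mean(), np.log(data.std())])
        lp = log_post(theta)
        chain = []
        step = np.array([0.05, 2.0, 0.05])        # random-walk proposal scales
        for it in range(20000):
            prop = theta + step * rng.standard_normal(3)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:   # accept/reject
                theta, lp = prop, lp_prop
            chain.append(theta.copy())
        chain = np.array(chain[5000:])            # drop burn-in
        print(chain.mean(axis=0))                 # posterior means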

  10. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Crevillén-García, D.; Power, H.

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.

  11. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media.

    PubMed

    Crevillén-García, D; Power, H

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.

  12. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media

    PubMed Central

    Power, H.

    2017-01-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen–Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error. PMID:28878974

  13. Monte Carlo based method for fluorescence tomographic imaging with lifetime multiplexing using time gates

    PubMed Central

    Chen, Jin; Venugopal, Vivek; Intes, Xavier

    2011-01-01

    Time-resolved fluorescence optical tomography allows 3-dimensional localization of multiple fluorophores based on lifetime contrast while providing a unique data set for improved resolution. However, to employ the full fluorescence time measurements, a light propagation model that accurately simulates weakly diffused and multiply scattered photons is required. In this article, we derive a computationally efficient Monte Carlo based method to compute time-gated fluorescence Jacobians for the simultaneous imaging of two fluorophores with lifetime contrast. The Monte Carlo based formulation is validated on a synthetic murine model simulating the uptake in the kidneys of two distinct fluorophores with lifetime contrast. Experimentally, the method is validated using capillaries filled with 2.5 nmol of ICG and IRDye™800CW, respectively, embedded in a diffusive medium mimicking the average optical properties of mice. Combining multiple time gates in one inverse problem allows the simultaneous reconstruction of multiple fluorophores with increased resolution and minimal crosstalk using the proposed formulation. PMID:21483610

  14. A Monte Carlo method using octree structure in photon and electron transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogawa, K.; Maeda, S.

    Most early Monte Carlo calculations in medical physics were used to calculate absorbed dose distributions and detector responses and efficiencies. Recently, data acquisition in Single Photon Emission CT (SPECT) has been simulated by Monte Carlo methods to evaluate scatter photons generated in the human body and the collimator. Monte Carlo simulations of SPECT data acquisition are generally based on the transport of photons only, because the simulated photons have low energy and bremsstrahlung production by the generated electrons is therefore negligible. Since the transport calculation of photons without electrons is much simpler than that with electrons, high-speed simulation is possible for a simple object consisting of one medium. Efficient photon and/or electron transport with a Monte Carlo method depends on how the object is described. The authors propose a new description method using an octree representation of the object. Even when the boundaries of each medium are represented accurately, photon transport can be calculated quickly because the number of cells is much smaller than in the voxel-based approach, which represents an object as a union of equal-sized voxels. This Monte Carlo code using the octree representation first establishes the simulation geometry by reading an octree string, which is produced by forming an octree structure from a set of serial sections of the object before the simulation; it then transports photons in that geometry. With this code, users need only prepare a set of serial sections of the object in which they want to simulate photon trajectories; the simulation then runs automatically on the suboptimal geometry simplified by the octree representation, without constructing an optimal geometry by hand.
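
    A minimal sketch of the octree idea follows: homogeneous sub-volumes collapse into single leaf cells, so photon tracking traverses far fewer cells than a uniform voxel grid. This is illustrative only and assumes a cubic medium-ID volume whose side length is a power of two.

      # Octree over a cubic array of medium IDs; homogeneous regions become
      # single leaves. Illustrative only, not the authors' code.
      import numpy as np

      class OctreeNode:
          def __init__(self, volume):          # volume: cubic array of medium IDs
              media = np.unique(volume)
              if media.size == 1 or volume.shape[0] == 1:
                  self.medium, self.children = media[0], None   # homogeneous leaf
              else:
                  h = volume.shape[0] // 2
                  self.medium = None
                  self.children = [OctreeNode(volume[i:i+h, j:j+h, k:k+h])
                                   for i in (0, h) for j in (0, h) for k in (0, h)]

          def medium_at(self, x, y, z, size):
              """Medium ID at integer voxel coordinates (x, y, z)."""
              if self.children is None:
                  return self.medium
              h = size // 2
              idx = 4 * (x >= h) + 2 * (y >= h) + (z >= h)   # matches child order
              return self.children[idx].medium_at(x % h, y % h, z % h, h)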

  15. SU-E-T-222: Computational Optimization of Monte Carlo Simulation On 4D Treatment Planning Using the Cloud Computing Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, J

    Purpose: This study evaluated the efficiency of 4D lung radiation treatment planning using Monte Carlo simulation on the cloud. The EGSnrc Monte Carlo code was used for dose calculation on the 4D-CT image set. Methods: The 4D lung radiation treatment plan was created with DOSCTP linked to the cloud, based on the Amazon Elastic Compute Cloud platform. Dose calculation was carried out by Monte Carlo simulation on the 4D-CT image set on the cloud, and the results were sent to the FFD4D image deformation program for dose reconstruction. The dependence of the computing time on the number of compute nodes was examined while varying the number of CT image sets in the breathing cycle and the dose reconstruction time of the FFD4D. Results: The dependence of computing time on the number of compute nodes was affected by the diminishing returns of adding nodes to the Monte Carlo simulation. The performance of 4D treatment planning could be optimized by using fewer than 10 compute nodes on the cloud. The effects of the number of image sets and the dose reconstruction time on this dependence were not significant when more than 15 compute nodes were used in the Monte Carlo simulations. Conclusion: The issue of long computing times in 4D treatment planning, which requires Monte Carlo dose calculations for all CT image sets in the breathing cycle, can be solved using cloud computing technology. It is concluded that the optimal number of compute nodes is between 5 and 15, as the dependence of computing time on the number of nodes is significant in this range.

  16. The unbiasedness of a generalized mirage boundary correction method for Monte Carlo integration estimators of volume

    Treesearch

    Thomas B. Lynch; Jeffrey H. Gove

    2014-01-01

    The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical...

  17. Quantum Gibbs ensemble Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fantoni, Riccardo, E-mail: rfantoni@ts.infn.it; Moroni, Saverio, E-mail: moroni@democritos.it

    We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.

  18. Systematic evaluation of a time-domain Monte Carlo fitting routine to estimate the adult brain optical properties

    NASA Astrophysics Data System (ADS)

    Selb, Juliette; Ogden, Tyler M.; Dubb, Jay; Fang, Qianqian; Boas, David A.

    2013-03-01

    Time-domain near-infrared spectroscopy (TD-NIRS) offers the ability to measure the absolute baseline optical properties of a tissue. Specifically, for brain imaging, the robust assessment of cerebral blood volume and oxygenation based on measurement of cerebral hemoglobin concentrations is essential for reliable cross-sectional and longitudinal studies. In adult heads, these baseline measurements are complicated by the presence of thick extra-cerebral tissue (scalp, skull, CSF). A simple semi-infinite homogeneous model of the head has proven to have limited use because of the large errors it introduces in the recovered brain absorption. Analytical solutions for layered media have shown improved performance on Monte-Carlo simulated data and layered phantom experiments, but their validity on real adult head data has never been demonstrated. With the advance of fast Monte Carlo approaches based on GPU computation, numerical methods to solve the radiative transfer equation become viable alternatives to analytical solutions of the diffusion equation. Monte Carlo approaches provide the additional advantage to be adaptable to any geometry, in particular more realistic head models. The goals of the present study were twofold: (1) to implement a fast and flexible Monte Carlo-based fitting routine to retrieve the brain optical properties; (2) to characterize the performances of this fitting method on realistic adult head data. We generated time-resolved data at various locations over the head, and fitted them with different models of light propagation: the homogeneous analytical model, and Monte Carlo simulations for three head models: a two-layer slab, the true subject's anatomy, and that of a generic atlas head. We found that the homogeneous model introduced a median 20 to 25% error on the recovered brain absorption, with large variations over the range of true optical properties. The two-layer slab model only improved moderately the results over the homogeneous one. On the other hand, using a generic atlas head registered to the subject's head surface decreased the error by a factor of 2. When the information is available, using the true subject anatomy offers the best performance.

  19. Radial secondary electron dose profiles and biological effects in light-ion beams based on analytical and Monte Carlo calculations using distorted wave cross sections.

    PubMed

    Wiklund, Kristin; Olivera, Gustavo H; Brahme, Anders; Lind, Bengt K

    2008-07-01

    To speed up dose calculation, an analytical pencil-beam method has been developed to calculate the mean radial dose distributions due to secondary electrons that are set in motion by light ions in water. For comparison, radial dose profiles calculated using a Monte Carlo technique have also been determined. An accurate comparison of the resulting radial dose profiles of the Bragg peak for ¹H⁺, ⁴He²⁺ and ⁶Li³⁺ ions has been performed. The double differential cross sections for secondary electron production were calculated using the continuum distorted wave-eikonal initial state (CDW-EIS) method. For the secondary electrons that are generated, the radial dose distribution in the analytical case is based on the generalized Gaussian pencil-beam method, and the central axis depth-dose distributions are calculated using the Monte Carlo code PENELOPE. In the Monte Carlo case, the PENELOPE code was used to calculate the whole radial dose profile based on CDW data. The present pencil-beam and Monte Carlo calculations agree well at all radii. A radial dose profile that is shallower at small radii and steeper at large radii than the conventional 1/r² is clearly seen with both the Monte Carlo and pencil-beam methods. As expected, since the projectile velocities are the same, the dose profiles of Bragg-peak ions of 0.5 MeV ¹H⁺, 2 MeV ⁴He²⁺ and 3 MeV ⁶Li³⁺ are almost the same, with about 30% more delta electrons in the sub-keV range from ⁴He²⁺ and ⁶Li³⁺ compared to ¹H⁺. A similar behavior is also seen for 1 MeV ¹H⁺, 4 MeV ⁴He²⁺ and 6 MeV ⁶Li³⁺, all classically expected to have the same secondary electron cross sections. The results are promising and indicate a fast and accurate way of calculating the mean radial dose profile.

  20. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear engineering review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I, II and III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures give an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state of the art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate-level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures and hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
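
    In the spirit of the closing suggestion that students write their own simple routines, here is a toy slab-transmission exercise combining random sampling with simplified particle transport; it is not part of the lecture materials.

      # Toy 1-D slab transmission: sample exponential path lengths, count
      # particles crossing the slab. Illustrative student-style exercise.
      import numpy as np

      def slab_transmission(sigma_t, sigma_s, thickness, n_particles=100_000, seed=1):
          rng = np.random.default_rng(seed)
          transmitted = 0
          for _ in range(n_particles):
              x, mu = 0.0, 1.0                               # enter left face, moving right
              while True:
                  x += mu * rng.exponential(1.0 / sigma_t)   # distance to next collision
                  if x >= thickness:
                      transmitted += 1                       # crossed the slab
                      break
                  if x < 0.0 or rng.random() >= sigma_s / sigma_t:
                      break                                  # leaked back or absorbed
                  mu = rng.uniform(-1.0, 1.0)                # isotropic scatter (slab)
          return transmitted / n_particles

      print(slab_transmission(sigma_t=1.0, sigma_s=0.5, thickness=2.0))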

  1. High-Throughput Computation and the Applicability of Monte Carlo Integration in Fatigue Load Estimation of Floating Offshore Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graf, Peter A.; Stewart, Gordon; Lackner, Matthew

    Long-term fatigue loads for floating offshore wind turbines are hard to estimate because they require evaluating the integral of a highly nonlinear function over a wide variety of wind and wave conditions. Current design standards involve scanning over a uniform rectangular grid of metocean inputs (e.g., wind speed and direction and wave height and period), which becomes intractable in high dimensions, as the number of required evaluations grows exponentially with dimension. Monte Carlo integration offers a potentially efficient alternative because its theoretical convergence is proportional to the inverse of the square root of the number of samples, independent of dimension. In this paper, we first report on the integration of the aeroelastic code FAST into NREL's systems engineering tool, WISDEM, and the development of a high-throughput pipeline capable of sampling from arbitrary distributions, running FAST on a large scale, and postprocessing the results into estimates of fatigue loads. Second, we use this tool to run a variety of studies comparing grid-based and Monte Carlo-based approaches to calculating long-term fatigue loads. We observe that for more than a few dimensions, the Monte Carlo approach can represent a large improvement in computational efficiency, but that as nonlinearity increases, the effectiveness of Monte Carlo is correspondingly reduced. The present work sets the stage for future research focusing on using advanced statistical methods for the analysis of wind turbine fatigue as well as extreme loads.
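
    The dimension-independent convergence rate mentioned above can be illustrated in a few lines; here damage is a hypothetical toy response over placeholder metocean inputs, not a FAST evaluation.

      # Monte Carlo integration: the statistical error shrinks like N**-0.5
      # regardless of dimension. `damage` is a toy nonlinear response.
      import numpy as np

      def damage(x):
          return np.sin(x).prod(axis=-1) ** 2      # toy nonlinear function

      rng = np.random.default_rng(0)
      dim = 6                                      # e.g. wind/wave input dimensions
      for n in (10**2, 10**3, 10**4, 10**5):
          samples = rng.uniform(0.0, np.pi, size=(n, dim))   # draws from input law
          print(f"N={n:>6d}  estimate={damage(samples).mean():.5f}")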

  2. A dental public health approach based on computational mathematics: Monte Carlo simulation of childhood dental decay.

    PubMed

    Tennant, Marc; Kruger, Estie

    2013-02-01

    This study developed a Monte Carlo simulation approach to examining the prevalence and incidence of dental decay, using Australian children as a test environment. Monte Carlo simulation has been used for half a century in particle physics (and elsewhere); put simply, known population-level outcome probabilities are used to randomly seed the production of individual-level data. A total of five runs of the simulation model for all 275,000 12-year-olds in Australia were completed based on 2005-2006 data. Measured on mean decayed/missing/filled teeth (DMFT) and the DMFT of the highest 10% of the sample (SiC10), the runs did not differ from each other by more than 2%, and the outcome was within 5% of the reported sampled population data. The simulations rested on the population probabilities that are known to be strongly linked to dental decay, namely socio-economic status and Indigenous heritage. Testing the simulated population found that the DMFT of all cases with DMFT ≠ 0 was 2.3 (n = 128,609) and the DMFT for Indigenous cases only was 1.9 (n = 13,749). In the simulated population the SiC25 was 3.3 (n = 68,750). Monte Carlo simulations were created in particle physics as a computational mathematical approach to unknown individual-level effects, resting a simulation on known population-level probabilities. In this study a Monte Carlo simulation approach to childhood dental decay was built, tested and validated. © 2013 FDI World Dental Federation.

  3. Incremental Bayesian Category Learning From Natural Language.

    PubMed

    Frermann, Lea; Lapata, Mirella

    2016-08-01

    Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words (e.g., chair is a member of the furniture category). We present a Bayesian model that, unlike previous work, learns both categories and their features in a single process. We model category induction as two interrelated subproblems: (a) the acquisition of features that discriminate among categories, and (b) the grouping of concepts into categories based on those features. Our model learns categories incrementally using particle filters, a sequential Monte Carlo method commonly used for approximate probabilistic inference that sequentially integrates newly observed data and can be viewed as a plausible mechanism for human learning. Experimental results show that our incremental learner obtains meaningful categories which yield a closer fit to behavioral data than related models, while at the same time acquiring features which characterize the learned categories. (An earlier version of this work was published in Frermann and Lapata.) Copyright © 2015 Cognitive Science Society, Inc.
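
    For readers unfamiliar with the machinery, a generic bootstrap particle filter loop is sketched below; transition and likelihood are hypothetical model callbacks, and this is not the published category-learning implementation.

      # Generic bootstrap particle filter with effective-sample-size resampling.
      # `transition(particles, n, rng)` propagates (drawing from the prior when
      # particles is None); `likelihood(y, particles)` returns per-particle weights.
      import numpy as np

      def particle_filter(observations, transition, likelihood,
                          n_particles=500, seed=0):
          rng = np.random.default_rng(seed)
          particles = transition(None, n_particles, rng)     # draw from the prior
          weights = np.full(n_particles, 1.0 / n_particles)
          for y in observations:
              particles = transition(particles, n_particles, rng)   # propagate
              weights *= likelihood(y, particles)                   # reweight
              weights /= weights.sum()
              if 1.0 / (weights ** 2).sum() < n_particles / 2:      # low ESS
                  idx = rng.choice(n_particles, n_particles, p=weights)
                  particles = particles[idx]                        # resample
                  weights = np.full(n_particles, 1.0 / n_particles)
          return particles, weights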

  4. Accurate anharmonic zero-point energies for some combustion-related species from diffusion Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, Lawrence B.; Georgievskii, Yuri; Klippenstein, Stephen J.

    Full-dimensional analytic potential energy surfaces based on CCSD(T)/cc-pVTZ calculations have been determined for 48 small combustion-related molecules. The analytic surfaces have been used in Diffusion Monte Carlo calculations of the anharmonic zero-point energies. Here, the resulting anharmonicity corrections are compared to vibrational perturbation theory results based both on the same level of electronic structure theory and on lower-level electronic structure methods (B3LYP and MP2).
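
    As a schematic of the technique (far simpler than the 48-molecule study), the following estimates the zero-point energy of a 1-D harmonic oscillator by diffusion Monte Carlo; the exact answer in these units is 0.5. The population-control constant is an illustrative choice.

      # Toy diffusion Monte Carlo: walkers diffuse, then branch according to
      # exp(-(V - E_ref) dt); E_ref converges to the zero-point energy (~0.5).
      import numpy as np

      def dmc_zpe(n_walkers=2000, n_steps=4000, dt=0.01, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.normal(0.0, 1.0, n_walkers)       # initial walker ensemble
          e_ref, energies = 0.5, []
          for step in range(n_steps):
              x = x + rng.normal(0.0, np.sqrt(dt), x.size)   # free diffusion
              v = 0.5 * x**2                                 # harmonic potential
              weights = np.exp(-(v - e_ref) * dt)            # branching weights
              n_copies = (weights + rng.random(x.size)).astype(int)
              x = np.repeat(x, n_copies)                     # stochastic birth/death
              e_ref = v.mean() + (1.0 - x.size / n_walkers) * 0.05 / dt
              if step > n_steps // 2:
                  energies.append(e_ref)                     # accumulate after burn-in
          return float(np.mean(energies))

      print(dmc_zpe())   # approaches 0.5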

  5. Accurate Anharmonic Zero-Point Energies for Some Combustion-Related Species from Diffusion Monte Carlo.

    PubMed

    Harding, Lawrence B; Georgievskii, Yuri; Klippenstein, Stephen J

    2017-06-08

    Full-dimensional analytic potential energy surfaces based on CCSD(T)/cc-pVTZ calculations have been determined for 48 small combustion-related molecules. The analytic surfaces have been used in Diffusion Monte Carlo calculations of the anharmonic zero-point energies. The resulting anharmonicity corrections are compared to vibrational perturbation theory results based both on the same level of electronic structure theory and on lower-level electronic structure methods (B3LYP and MP2).

  6. Accurate anharmonic zero-point energies for some combustion-related species from diffusion Monte Carlo

    DOE PAGES

    Harding, Lawrence B.; Georgievskii, Yuri; Klippenstein, Stephen J.

    2017-05-17

    Full-dimensional analytic potential energy surfaces based on CCSD(T)/cc-pVTZ calculations have been determined for 48 small combustion-related molecules. The analytic surfaces have been used in Diffusion Monte Carlo calculations of the anharmonic zero-point energies. Here, the resulting anharmonicity corrections are compared to vibrational perturbation theory results based both on the same level of electronic structure theory and on lower-level electronic structure methods (B3LYP and MP2).

  7. The Monte Carlo code MCPTV--Monte Carlo dose calculation in radiation therapy with carbon ions.

    PubMed

    Karg, Juergen; Speer, Stefan; Schmidt, Manfred; Mueller, Reinhold

    2010-07-07

    The Monte Carlo code MCPTV is presented. MCPTV is designed for dose calculation in treatment planning for radiation therapy with particles, especially carbon ions. MCPTV has a voxel-based concept and can perform fast calculation of the dose distribution on patient CT data. Material and density information from CT is taken into account. Electromagnetic and nuclear interactions are implemented. Furthermore, the algorithm provides information about the particle spectra and the energy deposition in each voxel, which can be used to calculate the relative biological effectiveness (RBE) for each voxel. Depth-dose distributions are compared to experimental data, giving good agreement. A clinical example is shown to demonstrate the capabilities of the MCPTV dose calculation.

  8. Split Orthogonal Group: A Guiding Principle for Sign-Problem-Free Fermionic Simulations

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Liu, Ye-Hua; Iazzi, Mauro; Troyer, Matthias; Harcos, Gergely

    2015-12-01

    We present a guiding principle for designing fermionic Hamiltonians and quantum Monte Carlo (QMC) methods that are free from the infamous sign problem by exploiting the Lie groups and Lie algebras that appear naturally in the Monte Carlo weight of fermionic QMC simulations. Specifically, rigorous mathematical constraints on the determinants involving matrices that lie in the split orthogonal group provide a guideline for sign-free simulations of fermionic models on bipartite lattices. This guiding principle not only unifies the recent solutions of the sign problem based on the continuous-time quantum Monte Carlo methods and the Majorana representation, but also suggests new efficient algorithms to simulate physical systems that were previously prohibitive because of the sign problem.

  9. Study of photo-oxidative reactivity of sunscreening agents based on photo-oxidation of uric acid by kinetic Monte Carlo simulation.

    PubMed

    Moradmand Jalali, Hamed; Bashiri, Hadis; Rasa, Hossein

    2015-05-01

    In the present study, the mechanism of free radical production by light-reflective agents in sunscreens (TiO2, ZnO and ZrO2) was obtained by applying kinetic Monte Carlo simulation. The values of the rate constants for each step of the suggested mechanism have been obtained by simulation. The effect of the initial concentrations of the mineral oxides and of uric acid on the rate of uric acid photo-oxidation under irradiation of several sun-care agents has been studied. The kinetic Monte Carlo simulation results agree qualitatively with the existing experimental data for the production of free radicals by sun-care agents. Copyright © 2015 Elsevier B.V. All rights reserved.
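
    The kinetic Monte Carlo loop underlying such simulations is, in outline, the Gillespie-type algorithm below; the stoichiometry, propensity function and rate constants are placeholders rather than the fitted mechanism from the paper.

      # Gillespie-style kinetic Monte Carlo: draw the time to the next reaction
      # from the total propensity, pick which reaction fires, update counts.
      import numpy as np

      def kmc(initial_counts, stoichiometry, rate_constants, propensity,
              t_end, seed=0):
          rng = np.random.default_rng(seed)
          t, counts, history = 0.0, np.array(initial_counts, float), []
          while t < t_end:
              a = propensity(counts, rate_constants)   # per-reaction propensities
              a_total = a.sum()
              if a_total <= 0.0:
                  break                                # no reaction can fire
              t += rng.exponential(1.0 / a_total)      # time to next event
              r = rng.choice(len(a), p=a / a_total)    # which reaction fires
              counts += stoichiometry[r]               # apply stoichiometry
              history.append((t, counts.copy()))
          return history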

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sudhyadhom, A; McGuinness, C; Descovich, M

    Purpose: To develop a methodology for validating a Monte-Carlo dose calculation model for robotic small-field SRS/SBRT deliveries. Methods: In a robotic treatment planning system, a Monte-Carlo model was iteratively optimized to match beam data. A two-part analysis was developed to verify this model. 1) The Monte-Carlo model was validated in a simulated water phantom against a Ray-Tracing calculation, collimator by collimator, for a single beam. 2) The Monte-Carlo model was validated in the most challenging situation, lung, by acquiring in-phantom measurements. A plan was created and delivered in a CIRS lung phantom with a film insert. Separately, plans were delivered in an in-house lung phantom with a PinPoint chamber insert within a lung-simulating material. For medium to large collimator sizes, a single beam was delivered to the phantom. For small collimators (10, 12.5, and 15 mm), a robotically delivered plan was created to generate a uniform dose field of irradiation over a 2×2 cm² area. Results: Dose differences in simulated water between Ray-Tracing and Monte-Carlo were all within 1% at dmax and deeper. Maximum dose differences occurred before dmax but were all within 3%. Film measurements in the lung phantom show high correspondence, with over 95% gamma passing at the 2%/2mm level for Monte-Carlo. Ion chamber measurements for collimator sizes of 12.5 mm and above were within 3% of Monte-Carlo calculated values. Uniform irradiation involving the 10 mm collimator resulted in a dose difference of ∼8% for both Monte-Carlo and Ray-Tracing, indicating that there may be limitations in the dose calculation. Conclusion: We have developed a methodology to validate a Monte-Carlo model by verifying that it matches in water and, separately, that it corresponds well in lung-simulating materials. The Monte-Carlo model and algorithm tested may have more limited accuracy for 10 mm fields and smaller.

  11. DEVELOPMENT OF A MULTIMODAL MONTE CARLO BASED TREATMENT PLANNING SYSTEM.

    PubMed

    Kumada, Hiroaki; Takada, Kenta; Sakurai, Yoshinori; Suzuki, Minoru; Takata, Takushi; Sakurai, Hideyuki; Matsumura, Akira; Sakae, Takeji

    2017-10-26

    To establish boron neutron capture therapy (BNCT), the University of Tsukuba is developing a treatment device and the peripheral devices required in BNCT, such as a treatment planning system. We are developing a new multimodal Monte Carlo based treatment planning system (development code name: Tsukuba Plan). Tsukuba Plan allows for dose estimation in proton therapy, X-ray therapy and heavy ion therapy in addition to BNCT, because the system employs PHITS as the Monte Carlo dose calculation engine. Regarding BNCT, several verifications of the system are being carried out for its practical usage. The verification results demonstrate that Tsukuba Plan allows for accurate estimation of thermal neutron flux and gamma-ray dose, the fundamental radiation components for dosimetry in BNCT. In addition to the practical use of Tsukuba Plan in BNCT, we are investigating its application to other radiation therapies. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. Intrinsic fluorescence of protein in turbid media using empirical relation based on Monte Carlo lookup table

    NASA Astrophysics Data System (ADS)

    Einstein, Gnanatheepam; Udayakumar, Kanniyappan; Aruna, Prakasarao; Ganesan, Singaravelu

    2017-03-01

    Protein fluorescence has been widely used in diagnostic oncology for characterizing cellular metabolism. However, the intensity of the fluorescence emission is affected by the absorbers and scatterers in tissue, which may lead to errors in estimating the exact protein content of tissue. Extraction of the intrinsic fluorescence from the measured fluorescence has been achieved by different methods, among which Monte Carlo based methods yield the highest accuracy. In this work, we have generated a lookup table for Monte Carlo simulation of fluorescence emission by protein, and we have fitted the generated lookup table with an empirical relation. The empirical relation between measured and intrinsic fluorescence is validated using tissue phantom experiments. The proposed relation can be used for estimating the intrinsic fluorescence of protein in real-time diagnostic applications, thereby improving the clinical interpretation of fluorescence spectroscopic data.

  13. Monte Carlo Simulation of THz Multipliers

    NASA Technical Reports Server (NTRS)

    East, J.; Blakey, P.

    1997-01-01

    Schottky barrier diode frequency multipliers are critical components in submillimeter and THz space-based Earth observation systems. As the operating frequency of these multipliers has increased, the agreement between design predictions and experimental results has become poorer. The multiplier design is usually based on a nonlinear model using a form of harmonic balance and a model for the Schottky barrier diode. Conventional voltage-dependent lumped-element models do a poor job of predicting THz-frequency performance. This paper describes a large-signal Monte Carlo simulation of Schottky barrier multipliers. The simulation is a time-dependent particle-field Monte Carlo simulation with ohmic and Schottky barrier boundary conditions included, combined with a fixed-point solution for the nonlinear circuit interaction. The results point out some important time constants in varactor operation and describe the effects of current saturation and nonlinear resistances on multiplier operation.

  14. Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.

    PubMed

    Beentjes, Casper H L; Baker, Ruth E

    2018-05-25

    Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from typical slow O(N^(-1/2)) convergence rates as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely τ-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
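
    The flavor of the comparison can be reproduced in a few lines using SciPy's scrambled Sobol sequences on a smooth toy integrand, deliberately much simpler than a chemical-kinetics sample path:

      # Plain Monte Carlo vs. scrambled-Sobol quasi-Monte Carlo on a smooth
      # integrand over [0,1]^5, whose exact integral is (e - 1)**5.
      import numpy as np
      from scipy.stats import qmc

      f = lambda u: np.exp(u.sum(axis=1))
      dim, m = 5, 12                          # 2**12 = 4096 points
      exact = (np.e - 1.0) ** dim

      rng = np.random.default_rng(0)
      mc_est = f(rng.random((2**m, dim))).mean()
      qmc_est = f(qmc.Sobol(d=dim, scramble=True, seed=0).random_base2(m)).mean()
      print(f"MC  error {abs(mc_est - exact):.2e}")
      print(f"QMC error {abs(qmc_est - exact):.2e}")   # typically much smaller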

  15. Ascertainment correction for Markov chain Monte Carlo segregation and linkage analysis of a quantitative trait.

    PubMed

    Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E

    2007-09-01

    Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not been previously studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait, in the spirit of the sequential sampling theory of Cannings and Thompson [1977, Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, when the ascertainment threshold was higher than or close to the true value of the highest genotypic mean, bias was found in the estimation of this parameter, in addition to the allele frequencies. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci. Copyright (c) 2007 Wiley-Liss, Inc.

  16. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    EPA Science Inventory

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...

  17. Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.

    PubMed

    Serebrinsky, Santiago A

    2011-03-01

    We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.

  18. Weighted Markov chains for forecasting and analysis in incidence of infectious diseases in Jiangsu Province, China

    PubMed Central

    Peng, Zhihang; Bao, Changjun; Zhao, Yang; Yi, Honggang; Xia, Letian; Yu, Hao; Shen, Hongbing; Chen, Feng

    2010-01-01

    This paper first applies the sequential cluster method to set up a classification standard for infectious disease incidence states, based on the fact that the incidence course has many uncertain characteristics. The paper then presents a weighted Markov chain, a method used to predict the future incidence state. This method takes the standardized autocorrelation coefficients as weights, based on the special characteristic that infectious disease incidence is a dependent stochastic variable. It also analyzes the characteristics of infectious disease incidence via the Markov chain Monte Carlo method to optimize the long-term benefit of decisions. Our method is successfully validated using existing incidence data for infectious diseases in Jiangsu Province. In summation, this paper proposes ways to improve the accuracy of the weighted Markov chain, specifically in the field of infection epidemiology. PMID:23554632

  19. Weighted Markov chains for forecasting and analysis in incidence of infectious diseases in Jiangsu Province, China.

    PubMed

    Peng, Zhihang; Bao, Changjun; Zhao, Yang; Yi, Honggang; Xia, Letian; Yu, Hao; Shen, Hongbing; Chen, Feng

    2010-05-01

    This paper first applies the sequential cluster method to set up a classification standard for infectious disease incidence states, based on the fact that the incidence course has many uncertain characteristics. The paper then presents a weighted Markov chain, a method used to predict the future incidence state. This method takes the standardized autocorrelation coefficients as weights, based on the special characteristic that infectious disease incidence is a dependent stochastic variable. It also analyzes the characteristics of infectious disease incidence via the Markov chain Monte Carlo method to optimize the long-term benefit of decisions. Our method is successfully validated using existing incidence data for infectious diseases in Jiangsu Province. In summation, this paper proposes ways to improve the accuracy of the weighted Markov chain, specifically in the field of infection epidemiology.

  20. Radiative interactions in multi-dimensional chemically reacting flows using Monte Carlo simulations

    NASA Technical Reports Server (NTRS)

    Liu, Jiwen; Tiwari, Surendra N.

    1994-01-01

    The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. The amount and transfer of the emitted radiative energy in a finite volume element within a medium are treated exactly. The spectral correlation between the transmittances of two different segments of the same path in a medium makes the statistical relationship different from the conventional one, which provides only non-correlated results for nongray methods; this distinction is discussed. The Monte Carlo formulations are validated by comparing the results of this method with other solutions. In order to further establish the validity of the MCM, a relatively simple problem of radiative interactions in laminar parallel-plate flows is considered. One-dimensional correlated Monte Carlo formulations are applied to investigate radiative heat transfer. Nongray Monte Carlo solutions are also obtained for the same problem, and they essentially match the available analytical solutions. The exact correlated and non-correlated Monte Carlo formulations are very complicated for multi-dimensional systems. However, by introducing the assumption of an infinitesimal volume element, approximate correlated and non-correlated formulations are obtained which are much simpler than the exact formulations. Consideration of different problems and comparison of different solutions reveal that the approximate and exact correlated solutions agree very well, as do the approximate and exact non-correlated solutions. However, the two non-correlated solutions have no physical meaning because they differ significantly from the correlated solutions. An accurate prediction of radiative heat transfer in any nongray, multi-dimensional system is possible using the approximate correlated formulations. Radiative interactions are investigated in chemically reacting compressible flows of premixed hydrogen and air in an expanding nozzle. The governing equations are based on the fully elliptic Navier-Stokes equations. Chemical reaction mechanisms are described by a finite-rate chemistry model. The correlated Monte Carlo method developed earlier is employed to simulate multi-dimensional radiative heat transfer. The results demonstrate that radiative effects on the flowfield are minimal but radiative effects on the wall heat transfer are significant. Extensive parametric studies are conducted to investigate the effects of equivalence ratio, wall temperature, inlet flow temperature, and nozzle size on the radiative and conductive wall fluxes.

  1. Monte Carlo Methods in Materials Science Based on FLUKA and ROOT

    NASA Technical Reports Server (NTRS)

    Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor

    2003-01-01

    A comprehensive understanding of mitigation measures for space radiation protection necessarily involves the relevant fields of nuclear physics and particle transport modeling. One method of modeling the interaction of radiation traversing matter is Monte Carlo analysis, a subject that has been evolving since the very advent of nuclear reactors and particle accelerators in experimental physics. Countermeasures for radiation protection from neutrons near nuclear reactors, for example, were an early application, and Monte Carlo methods were quickly adapted to this general field of investigation. The project discussed here is concerned with taking the latest tools and technology in Monte Carlo analysis and adapting them to space applications such as radiation shielding design for spacecraft, as well as investigating how next-generation Monte Carlos can complement the existing analytical methods currently used by NASA. We have chosen to employ the Monte Carlo program known as FLUKA (a legacy acronym based on the German for FLUctuating KAscade) to simulate all of the particle transport, together with the CERN-developed graphical-interface object-oriented analysis software called ROOT. One aspect of space radiation analysis for which Monte Carlos are particularly suited is the study of secondary radiation produced as albedos in the vicinity of the structural geometry involved. The broad goal of simulating space radiation transport through the relevant materials with the FLUKA code necessarily requires adding the capability to simulate all heavy-ion interactions from 10 MeV/A up to the highest conceivable energies. For all energies above 3 GeV/A the Dual Parton Model (DPM) is currently used, although a possible improvement of the DPMJET event generator for energies of 3-30 GeV/A is being considered. One of the major tasks still facing us is the provision for heavy-ion interactions below 3 GeV/A. The ROOT interface is being developed in conjunction with the CERN ALICE (A Large Ion Collider Experiment) software team through an adaptation of their existing AliROOT (ALICE Using ROOT) architecture. In order to check our progress against actual data, we have chosen to simulate the ATIC (Advanced Thin Ionization Calorimeter) cosmic-ray astrophysics balloon payload as well as neutron fluences in the Mir spacecraft. This paper contains a summary of the status of this project and a roadmap to its successful completion.

  2. Monte Carlo Transport for Electron Thermal Transport

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2015-11-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean-free-path regions with the accuracy of a transport method in long mean-free-path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.

  3. Energy dispersive X-ray fluorescence spectroscopy/Monte Carlo simulation approach for the non-destructive analysis of corrosion patina-bearing alloys in archaeological bronzes: The case of the bowl from the Fareleira 3 site (Vidigueira, South Portugal)

    NASA Astrophysics Data System (ADS)

    Bottaini, C.; Mirão, J.; Figuereido, M.; Candeias, A.; Brunetti, A.; Schiavon, N.

    2015-01-01

    Energy dispersive X-ray fluorescence (EDXRF) is a well-known technique for the non-destructive, in situ analysis of archaeological artifacts, in terms of both qualitative and quantitative elemental composition, valued for its rapidity. In this study, EDXRF and realistic Monte Carlo simulation using the X-ray Monte Carlo (XRMC) code package have been combined to characterize a Cu-based bowl from the Iron Age burial of Fareleira 3 (Southern Portugal). The artifact displays a multilayered structure made up of three distinct layers: a) the alloy substrate; b) a green oxidized corrosion patina; and c) a brownish carbonate soil-derived crust. To assess the reliability of Monte Carlo simulation in reproducing the composition of the bulk metal without resorting to potentially damaging removal of the patina and crust, portable EDXRF analysis was performed on cleaned and patina/crust-coated areas of the artifact. The patina has been characterized by micro X-ray diffractometry (μXRD) and back-scattered scanning electron microscopy with energy dispersive spectroscopy (BSEM + EDS). Results indicate that the EDXRF/Monte Carlo protocol is well suited when a two-layered model is considered, whereas in areas where the patina + crust surface coating is too thick, X-rays from the alloy substrate are not able to exit the sample.

  4. CloudMC: a cloud computing application for Monte Carlo simulation.

    PubMed

    Miras, H; Jiménez, R; Miras, C; Gomà, C

    2013-04-21

    This work presents CloudMC, a cloud computing application, developed in Windows Azure® (the platform of the Microsoft® cloud), for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based; the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with PENELOPE were performed on different instance (virtual machine) sizes and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with an increasing number of instances. A simulation that would have required 30 h of CPU time on a single instance was completed in 48.6 min when executed on 64 instances in parallel (a speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay-per-usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
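
    As a back-of-envelope check of the Amdahl's-law behavior reported above, the observed 37× speedup on 64 instances is consistent with a serial fraction of roughly one percent:

      # Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n) for serial fraction s.
      def amdahl_speedup(serial_fraction, n):
          return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

      for s in (0.005, 0.01, 0.02):
          print(f"serial={s:.3f}  speedup on 64 instances = "
                f"{amdahl_speedup(s, 64):.1f}x")
      # serial=0.010 gives ~39x, close to the observed 37x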

  5. Experimental verification of a CT-based Monte Carlo dose-calculation method in heterogeneous phantoms.

    PubMed

    Wang, L; Lovelock, M; Chui, C S

    1999-12-01

    To further validate the Monte Carlo dose-calculation method [Med. Phys. 25, 867-878 (1998)] developed at the Memorial Sloan-Kettering Cancer Center, we have performed experimental verification in various inhomogeneous phantoms. The phantom geometries included simple layered slabs, a simulated bone column, a simulated missing-tissue hemisphere, and an anthropomorphic head geometry (Alderson Rando phantom). The densities of the inhomogeneities range from 0.14 to 1.86 g/cm³, simulating both clinically relevant lung-like and bone-like materials. The data are reported as central-axis depth doses, dose profiles, and dose values at points of interest, such as points at the interface of two different media and in the "nasopharynx" region of the Rando head. The dosimeters used in the measurement included dosimetry film, TLD chips, and rods. The measured data were compared to Monte Carlo calculations for the same geometrical configurations. In the case of the Rando head phantom, a CT scan of the phantom was used to define the calculation geometry and to locate the points of interest. The agreement between calculation and measurement is generally within 2.5%. This work validates the accuracy of the Monte Carlo method. While Monte Carlo is, at present, still too slow for routine treatment planning, it can be used as a benchmark against which other dose calculation methods can be compared.

  6. Reconstruction of Human Monte Carlo Geometry from Segmented Images

    NASA Astrophysics Data System (ADS)

    Zhao, Kai; Cheng, Mengyun; Fan, Yanchang; Wang, Wen; Long, Pengcheng; Wu, Yican

    2014-06-01

    Human computational phantoms have been used extensively for scientific experimental analysis and simulation. This article presents a method for reconstructing human geometry from a series of segmented images of a Chinese visible human dataset. The phantom geometry describes the detailed structure of each organ and can be converted into input files for Monte Carlo codes for dose calculation. A whole-body computational phantom of a Chinese adult female, named Rad-HUMAN, has been established by the FDS Team; it comprises about 28.8 billion voxels. For convenient processing, different organs in the images were segmented with different RGB colors, and the voxels were assigned positions within the dataset. For refinement, the positions were first sampled. Although the large numbers of voxels inside an organ are three-dimensionally adjacent, no thorough merging method existed to reduce the number of cells needed to describe the organ. In this study, the voxels on the organ surface were taken into account during merging, which produces fewer cells per organ; at the same time, an index-based sorting algorithm was put forward to speed up the merging. Finally, Rad-HUMAN, which includes a total of 46 organs and tissues, was described by cuboids in the Monte Carlo geometry for the simulation. The Monte Carlo geometry was constructed directly from the segmented images and the voxels were merged exhaustively. Each organ geometry model was constructed without ambiguity or self-intersection, and its geometry information represents the accurate appearance and precise interior structure of the organs. The constructed geometry, which largely retains the original shape of the organs, can easily be written into the input files of different Monte Carlo codes such as MCNP. Its universality and high performance were experimentally verified.

  7. Dosimetric verification of IMRT treatment planning using Monte Carlo simulations for prostate cancer

    NASA Astrophysics Data System (ADS)

    Yang, J.; Li, J.; Chen, L.; Price, R.; McNeeley, S.; Qin, L.; Wang, L.; Xiong, W.; Ma, C.-M.

    2005-03-01

    The purpose of this work is to investigate the accuracy of dose calculation of a commercial treatment planning system (Corvus, Normos Corp., Sewickley, PA). In this study, 30 prostate intensity-modulated radiotherapy (IMRT) treatment plans from the commercial treatment planning system were recalculated using the Monte Carlo method. Dose-volume histograms and isodose distributions were compared. Other quantities such as minimum dose to the target (Dmin), the dose received by 98% of the target volume (D98), dose at the isocentre (Diso), mean target dose (Dmean) and the maximum critical structure dose (Dmax) were also evaluated based on our clinical criteria. For coplanar plans, the dose differences between Monte Carlo and the commercial treatment planning system with and without heterogeneity correction were not significant. The differences in the isocentre dose between the commercial treatment planning system and Monte Carlo simulations were less than 3% for all coplanar cases. The differences on D98 were less than 2% on average. The differences in the mean dose to the target between the commercial system and Monte Carlo results were within 3%. The differences in the maximum bladder dose were within 3% for most cases. The maximum dose differences for the rectum were less than 4% for all the cases. For non-coplanar plans, the difference in the minimum target dose between the treatment planning system and Monte Carlo calculations was up to 9% if the heterogeneity correction was not applied in Corvus. This was caused by the excessive attenuation of the non-coplanar beams by the femurs. When the heterogeneity correction was applied in Corvus, the differences were reduced significantly. These results suggest that heterogeneity correction should be used in dose calculation for prostate cancer with non-coplanar beam arrangements.

  8. Advanced Computational Methods for Monte Carlo Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.

  9. Brownian dynamics and dynamic Monte Carlo simulations of isotropic and liquid crystal phases of anisotropic colloidal particles: a comparative study.

    PubMed

    Patti, Alessandro; Cuetos, Alejandro

    2012-07-01

    We report on the diffusion of purely repulsive, freely rotating colloidal rods in the isotropic, nematic, and smectic liquid crystal phases to probe the agreement between Brownian and Monte Carlo dynamics under the most general conditions. By properly rescaling the Monte Carlo time step, which is related to any elementary move via the corresponding self-diffusion coefficient, with the acceptance rate of simultaneous trial displacements and rotations, we demonstrate the existence of a unique Monte Carlo time scale that allows for a direct comparison between Monte Carlo and Brownian dynamics simulations. To assess the validity of our theoretical approach, we compare the mean square displacement of the rods, their orientational autocorrelation function, and the self-intermediate scattering function, as obtained from Brownian dynamics and Monte Carlo simulations. The agreement between the results of these two approaches, even under the condition of heterogeneous dynamics generally observed in liquid crystalline phases, is excellent.
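
    In rough terms, such rescalings map one Monte Carlo cycle onto a physical time proportional to the acceptance rate times the mean-square trial displacement divided by 6D (in three dimensions). The helper below is an illustrative reading of that idea, with placeholder numbers; it is not the paper's exact calibration.

      # Physical time represented by one MC cycle, assuming
      # delta_t ~ A * <dr^2> / (6 D) in 3-D; prefactor is illustrative.
      def mc_timestep(acceptance_rate, mean_sq_trial_displacement, diffusion_coeff):
          return acceptance_rate * mean_sq_trial_displacement / (6.0 * diffusion_coeff)

      # e.g. 30% acceptance, <dr^2> = 0.01 sigma^2, D = 0.05 sigma^2/tau:
      print(mc_timestep(0.30, 0.01, 0.05))   # ~0.01 tau per MC cycle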

  10. Kinetic Activation-Relaxation Technique and Self-Evolving Atomistic Kinetic Monte Carlo: Comparison of on-the-fly kinetic Monte Carlo algorithms

    DOE PAGES

    Beland, Laurent Karim; Osetskiy, Yury N.; Stoller, Roger E.; ...

    2015-02-07

    Here, we present a comparison of the Kinetic Activation-Relaxation Technique (k-ART) and the Self-Evolving Atomistic Kinetic Monte Carlo (SEAKMC), two off-lattice, on-the-fly kinetic Monte Carlo (KMC) techniques that were recently used to solve several materials science problems. We show that if the initial displacements are localized, the dimer method and the Activation-Relaxation Technique nouveau provide similar performance. We also show that k-ART and SEAKMC, although based on different approximations, are in agreement with each other, as demonstrated by the examples of 50 vacancies in a 1950-atom Fe box and of interstitial loops in 16,000-atom boxes. Generally speaking, k-ART's treatment of geometry and flickers is more flexible (it can, e.g., handle amorphous systems) and more rigorous than SEAKMC's, while the latter's concept of active volumes permits a significant speedup of simulations for the systems under consideration and therefore allows investigations of processes requiring large systems that are not accessible without localizing the calculations.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.

    In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity of a fixed-source, pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.

  12. A Modified Monte Carlo Method for Carrier Transport in Germanium, Free of Isotropic Rates

    NASA Astrophysics Data System (ADS)

    Sundqvist, Kyle

    2010-03-01

    We present a new method for carrier transport simulation, relevant for high-purity germanium <100> at a temperature of 40 mK. In this system, the scattering of electrons and holes is dominated by spontaneous phonon emission. Free carriers are always out of equilibrium with the lattice. We must also properly account for directional effects due to band structure, but there are many cautions in the literature about treating germanium in particular. These objections arise because the germanium electron system is anisotropic to an extreme degree, while standard Monte Carlo algorithms maintain a reliance on isotropic, integrated rates. We re-examine Fermi's Golden Rule to produce a Monte Carlo method free of isotropic rates. Traditional Monte Carlo codes implement particle scattering based on an isotropically averaged rate, followed by a separate selection of the particle's final state via a momentum-dependent probability. In our method, the kernel of Fermi's Golden Rule produces analytical, bivariate rates which allow for the simultaneous choice of scatter and final state selection. Energy and momentum are automatically conserved. We compare our results to experimental data.

  13. Gray: a ray tracing-based Monte Carlo simulator for PET

    NASA Astrophysics Data System (ADS)

    Freese, David L.; Olcott, Peter D.; Buss, Samuel R.; Levin, Craig S.

    2018-05-01

    Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within % when accounting for differences in peak NECR. We also estimate the peak NECR to be kcps, or within % of published experimental data. The activity concentration of the peak is also estimated within 1.3%.

  14. Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo

    NASA Astrophysics Data System (ADS)

    Qin, Junsong; Liu, Bingyi; Niu, Dongxiao

    By analyzing the factors that influence the investment capacity of a power grid, an investment capacity analysis model is built with depreciation cost, sales price and sales quantity, net profit, financing, and the GDP of the secondary industry as the model variables. After carrying out Kolmogorov-Smirnov tests, the probability distribution of each influence factor is obtained. Finally, the uncertainty analysis results for grid investment capacity are obtained by Monte Carlo simulation.
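
    A minimal sketch of this workflow in Python, assuming SciPy is available: fit a distribution to historical data for one factor, check the fit with a Kolmogorov-Smirnov test, then propagate samples through a placeholder capacity model. The `capacity` function and all distribution parameters are invented for illustration and are not the paper's model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical historical observations of one influence factor.
net_profit = rng.normal(100.0, 15.0, size=200)

# Fit a candidate distribution and check it with a Kolmogorov-Smirnov test.
mu, sigma = stats.norm.fit(net_profit)
ks_stat, p_value = stats.kstest(net_profit, "norm", args=(mu, sigma))

# Placeholder investment-capacity model; the paper's model relates
# capacity to several factors, whose fitted distributions would all
# be sampled here in the same way.
def capacity(profit, depreciation, financing):
    return profit + financing - depreciation

samples = capacity(
    rng.normal(mu, sigma, size=100_000),
    rng.normal(20.0, 3.0, size=100_000),
    rng.normal(50.0, 10.0, size=100_000),
)
print(f"KS p = {p_value:.3f}; capacity 5th-95th percentiles:",
      np.percentile(samples, [5, 95]))
```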

  15. Monte Carlo calculation of dynamical properties of the two-dimensional Hubbard model

    NASA Technical Reports Server (NTRS)

    White, S. R.; Scalapino, D. J.; Sugar, R. L.; Bickers, N. E.

    1989-01-01

    A new method is introduced for analytically continuing imaginary-time data from quantum Monte Carlo calculations to the real-frequency axis. The method is based on a least-squares-fitting procedure with constraints of positivity and smoothness on the real-frequency quantities. Results are shown for the single-particle spectral-weight function and density of states for the half-filled, two-dimensional Hubbard model.
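
    The core numerical step can be illustrated with a small constrained least-squares sketch: recover a non-negative spectrum A(ω) from synthetic imaginary-time data G(τ) = ∫ dω K(τ, ω) A(ω), with positivity enforced by non-negative least squares and smoothness by second-difference regularization. The Laplace-type kernel and all data below are simplified stand-ins for the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic setup: a simple decaying kernel and a Gaussian "true" spectrum.
beta = 10.0
tau = np.linspace(0.0, beta, 40)
w = np.linspace(0.01, 8.0, 80)
dw = w[1] - w[0]
K = np.exp(-tau[:, None] * w[None, :]) * dw      # simplified Laplace kernel

A_true = np.exp(-0.5 * ((w - 2.0) / 0.5) ** 2)   # synthetic spectrum
G = K @ A_true + 1e-4 * np.random.default_rng(9).normal(size=tau.size)

# Stack a scaled second-difference operator below the kernel so nnls
# minimizes data misfit and roughness at once, subject to A >= 0.
lam = 1e-3
D2 = np.diff(np.eye(w.size), n=2, axis=0)
A_fit, _ = nnls(np.vstack([K, lam * D2]),
                np.concatenate([G, np.zeros(D2.shape[0])]))
print(np.abs(A_fit - A_true).max())
```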

  16. Multistep Lattice-Voxel method utilizing lattice function for Monte-Carlo treatment planning with pixel based voxel model.

    PubMed

    Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K

    2011-12-01

    Treatment planning for boron neutron capture therapy generally utilizes Monte-Carlo methods for calculation of the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte-Carlo code PHITS to calculate the dose distribution. JCDS-FX makes it possible to build a precise voxel model consisting of pixel-based voxel cells on the scale of 0.4×0.4×2.0 mm³ per voxel in order to perform high-accuracy dose estimation, e.g. for the purpose of calculating the dose distribution in a human body. However, the miniaturization of the voxel size increases calculation time considerably. The aim of this study is to investigate sophisticated modeling methods which can perform Monte-Carlo calculations for human geometry efficiently. Thus, we devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by repeatedly utilizing the lattice function. To verify the performance of the calculation with the modeling method, several calculations for human geometry were carried out. The results demonstrated that the Multistep Lattice-Voxel method enabled the precise voxel model to reduce calculation time substantially while keeping the high accuracy of dose estimation.

  17. Monte Carlo replica-exchange based ensemble docking of protein conformations.

    PubMed

    Zhang, Zhe; Ehmann, Uwe; Zacharias, Martin

    2017-05-01

    A replica-exchange Monte Carlo (REMC) ensemble docking approach has been developed that allows efficient exploration of protein-protein docking geometries. In addition to Monte Carlo steps in translation and orientation of binding partners, possible conformational changes upon binding are included based on Monte Carlo selection of protein conformations stored as ordered pregenerated conformational ensembles. The conformational ensembles of each binding partner protein were generated by three different approaches starting from the unbound partner protein structure, with a range spanning a root mean square deviation of 1-2.5 Å with respect to the unbound structure. Because MC sampling is performed to select appropriate partner conformations on the fly, the approach is not limited by the number of conformations in the ensemble, in contrast to ensemble cross docking, which docks each conformer pair separately. Although only a fraction of generated conformers was in closer agreement with the bound structure, the REMC ensemble docking approach achieved improved docking results compared to REMC docking with only the unbound partner structures or using docking energy minimization methods. The approach has significant potential for further improvement in combination with more realistic structural ensembles and better docking scoring functions. Proteins 2017; 85:924-937.

  18. Optimization of beam shaping assembly based on D-T neutron generator and dose evaluation for BNCT

    NASA Astrophysics Data System (ADS)

    Naeem, Hamza; Chen, Chaobin; Zheng, Huaqing; Song, Jing

    2017-04-01

    This study investigates the feasibility of developing an epithermal neutron beam for a boron neutron capture therapy (BNCT) facility based on a high intensity D-T fusion neutron generator (HINEG), using the Monte Carlo code SuperMC (Super Monte Carlo simulation program for nuclear and radiation process). The Monte Carlo code SuperMC is used to determine and optimize the final configuration of the beam shaping assembly (BSA). The optimal BSA design is a cylindrical geometry consisting of a natural uranium sphere (14 cm) as a neutron multiplier, AlF3 and TiF3 as moderators (20 cm each), Cd (1 mm) as a thermal neutron filter, Bi (5 cm) as a gamma shield, and Pb as a reflector and collimator to guide neutrons towards the exit window. The epithermal neutron beam flux of the proposed model is 5.73 × 10⁹ n/cm²s, and the other dosimetric parameters for BNCT reported in IAEA-TECDOC-1223 have been verified. The phantom dose analysis shows that the designed BSA is accurate, efficient and suitable for BNCT applications. Thus, the Monte Carlo code SuperMC is capable of simulating the BSA and the dose calculation for BNCT, and a high epithermal flux can be achieved using the proposed BSA.

  19. Inverse Monte Carlo method in a multilayered tissue model for diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas

    2012-04-01

    Model based data analysis of diffuse reflectance spectroscopy data enables the estimation of optical and structural tissue parameters. The aim of this study was to present an inverse Monte Carlo method based on spectra from two source-detector distances (0.4 and 1.2 mm), using a multilayered tissue model. The tissue model variables include geometrical properties, light scattering properties, tissue chromophores such as melanin and hemoglobin, oxygen saturation and average vessel diameter. The method utilizes a small set of presimulated Monte Carlo data for combinations of different levels of epidermal thickness and tissue scattering. The path length distributions in the different layers are stored and the effect of the other parameters is added in the post-processing. The accuracy of the method was evaluated using Monte Carlo simulations of tissue-like models containing discrete blood vessels, evaluating blood tissue fraction and oxygenation. It was also compared to a homogeneous model. The multilayer model performed better than the homogeneous model and all tissue parameters significantly improved spectral fitting. Recorded in vivo spectra were fitted well at both distances, which we previously found was not possible with a homogeneous model. No absolute intensity calibration is needed and the algorithm is fast enough for real-time processing.

  20. Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: a path for the optimization of low-energy many-body bases.

    PubMed

    Reboredo, Fernando A; Kim, Jeongnim

    2014-02-21

    A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.

  1. Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: A path for the optimization of low-energy many-body bases

    NASA Astrophysics Data System (ADS)

    Reboredo, Fernando A.; Kim, Jeongnim

    2014-02-01

    A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.

  2. Identification of Thyroid Receptor Ant/Agonists in Water Sources Using Mass Balance Analysis and Monte Carlo Simulation

    PubMed Central

    Shi, Wei; Wei, Si; Hu, Xin-xin; Hu, Guan-jiu; Chen, Cu-lan; Wang, Xin-ru; Giesy, John P.; Yu, Hong-xia

    2013-01-01

    Some synthetic chemicals, which have been shown to disrupt thyroid hormone (TH) function, have been detected in surface waters, and people have the potential to be exposed through drinking water. Here, the presence of thyroid-active chemicals and their toxic potential in drinking water sources in the Yangtze River Delta were investigated by use of instrumental analysis combined with a cell-based reporter gene assay. A novel approach using Monte Carlo simulation was developed to evaluate the potential risks of measured concentrations of TH agonists and antagonists and to determine the major contributors to the observed thyroid receptor (TR) antagonist potency. None of the extracts exhibited TR agonist potency, while 12 of 14 water samples exhibited TR antagonistic potency. The most probable observed antagonist equivalents ranged from 1.4 to 5.6 µg di-n-butyl phthalate (DNBP)/L, which posed a potential risk in water sources. Based on Monte Carlo simulation related mass balance analysis, DNBP accounted for 64.4% of the entire observed antagonist toxic unit in water sources, while diisobutyl phthalate (DIBP), di-n-octyl phthalate (DNOP) and di-2-ethylhexyl phthalate (DEHP) also contributed. The most probable observed equivalent and most probable relative potency (REP) derived from Monte Carlo simulation are useful for potency comparison and screening of the responsible chemicals. PMID:24204563

  3. Static and dynamic behavior of a Si/Si0.8Ge0.2/Si heterojunction bipolar transistor using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Galdin, Sylvie; Dollfus, Philippe; Hesto, Patrice

    1994-03-01

    A theoretical study of a Si/Si1-xGex/Si heterojunction bipolar transistor using Monte Carlo simulations is reported. The geometry and composition of the emitter-base junction are optimized using one-dimensional simulations with a view to improving electron transport in the base. It is proposed to introduce a thin Si-P spacer layer, between the Si-N emitter and the SiGe-P base, which allows launching hot electrons into the base despite the lack of a natural conduction-band discontinuity between Si and strained SiGe. The high-frequency behavior of the complete transistor is then studied using 2D modeling. A method of microwave analysis using small signal Monte Carlo simulations, which consists of expanding the terminal currents in Fourier series, is presented. A cutoff frequency fT of 68 GHz has been extracted. Finally, the occurrence of a parasitic electron barrier at the collector-base junction is responsible for the fT fall-off at high collector current density. This parasitic barrier is lowered through the influence of the collector potential.

  4. Conditional Monte Carlo randomization tests for regression models.

    PubMed

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
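
    The design-based Monte Carlo idea can be sketched as follows: re-randomize the treatment labels many times under the trial's randomization procedure and compare the observed statistic to its re-randomization distribution. The sketch below uses a simple mean-difference statistic and label permutation (equivalent to random allocation with fixed group sizes); permuted blocks or biased coins would replace the re-randomization step, and residual-based statistics would replace the mean difference.

```python
import numpy as np

rng = np.random.default_rng(2)

def randomization_p_value(outcomes, assignments, n_monte_carlo=10_000):
    """Design-based Monte Carlo randomization test (two-sided)."""
    def statistic(assign):
        return outcomes[assign == 1].mean() - outcomes[assign == 0].mean()

    observed = statistic(assignments)
    count = 0
    for _ in range(n_monte_carlo):
        # Re-randomization step: permuting labels matches random
        # allocation with fixed group sizes; other designs would
        # regenerate the sequence under their own rules instead.
        resampled = rng.permutation(assignments)
        if abs(statistic(resampled)) >= abs(observed):
            count += 1
    return (count + 1) / (n_monte_carlo + 1)

# Toy data: 20 subjects, half assigned to each arm.
y = rng.normal(size=20)
arm = np.array([0, 1] * 10)
print(randomization_p_value(y, arm))
```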

  5. Summarizing Monte Carlo Results in Methodological Research.

    ERIC Educational Resources Information Center

    Harwell, Michael R.

    Monte Carlo studies of statistical tests are prominently featured in the methodological research literature. Unfortunately, the information from these studies does not appear to have significantly influenced methodological practice in educational and psychological research. One reason is that Monte Carlo studies lack an overarching theory to guide…

  6. Soybean Physiology Calibration in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Drewniak, B. A.; Bilionis, I.; Constantinescu, E. M.

    2014-12-01

    With the large influence of agricultural land use on biophysical and biogeochemical cycles, integrating cultivation into Earth System Models (ESMs) is increasingly important. The Community Land Model (CLM) was augmented with a CLM-Crop extension that simulates the development of three crop types: maize, soybean, and spring wheat. The CLM-Crop model is a complex system that relies on a suite of parametric inputs that govern plant growth under a given atmospheric forcing and available resources. However, the strong nonlinearity of ESMs makes parameter fitting a difficult task. In this study, our goal is to calibrate ten of the CLM-Crop parameters for one crop type, soybean, in order to improve model projection of plant development and carbon fluxes. We used measurements of gross primary productivity, net ecosystem exchange, and plant biomass from AmeriFlux sites to choose parameter values that optimize crop productivity in the model. Calibration is performed in a Bayesian framework by developing a scalable and adaptive scheme based on sequential Monte Carlo (SMC). Our scheme can perform model calibration using very few evaluations and, by exploiting parallelism, at a fraction of the time required by plain vanilla Markov Chain Monte Carlo (MCMC). We present the results from a twin experiment (self-validation) and calibration results and validation using real observations from an AmeriFlux tower site in the Midwestern United States, for the soybean crop type. The improved model will help researchers understand how climate affects crop production and resulting carbon fluxes, and additionally, how cultivation impacts climate.
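
    As a rough illustration of the SMC idea behind such a calibration, the sketch below moves particles from the prior to the posterior through a sequence of tempered targets, reweighting and resampling at each step; the likelihood evaluations across particles are independent, which is where the parallelism comes from. The Gaussian toy model stands in for CLM-Crop, and a production sampler would add MCMC move steps after each resampling.

```python
import numpy as np

rng = np.random.default_rng(3)

def smc_calibrate(log_likelihood, prior_sampler, n_particles=500, n_steps=10):
    """Minimal tempered SMC sketch: targets pi_k ∝ prior * lik**(k/n_steps)."""
    particles = prior_sampler(n_particles)
    log_w = np.zeros(n_particles)
    for k in range(1, n_steps + 1):
        # Incremental weights from raising the likelihood temperature.
        log_w += (1.0 / n_steps) * log_likelihood(particles)
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # Multinomial resampling; a few MCMC moves per particle would
        # normally follow here to restore diversity.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles, log_w = particles[idx], np.zeros(n_particles)
    return particles

# Toy example: calibrate the mean of a Gaussian model against data.
data = rng.normal(1.5, 1.0, size=50)
loglik = lambda theta: -0.5 * ((data[None, :] - theta[:, None]) ** 2).sum(axis=1)
posterior = smc_calibrate(loglik, lambda n: rng.normal(0.0, 5.0, size=n))
print(posterior.mean())
```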

  7. A study of microkinetic adjustments required to match shock wave experiments and Monte Carlo Direct Simulation for a wide Mach number range

    NASA Technical Reports Server (NTRS)

    Pham-Van-diep, Gerald C.; Muntz, E. Phillip; Erwin, Daniel A.

    1990-01-01

    Shock wave thickness predictions from Monte Carlo Direct Simulations, using differential scattering and the Maitland-Smith-Aziz interatomic potential, underpredict experiments as shock Mach numbers increase above about 4. Examination of several sources of data has indicated that at relatively high energies the repulsive portion of accepted potentials such as the Maitland-Smith-Aziz may be too steep. An Exponential-6 potential due to Ross, based on high energy molecular beam scattering data and shock velocity measurements in liquid argon, has been combined with the lower energy portion of the Maitland-Smith-Aziz potential. When this hybrid potential is used in Monte Carlo Direct Simulations, agreement with experiments is improved over the previous predictions using the pure Maitland-Smith-Aziz form.

  8. Steady-State Electrodiffusion from the Nernst-Planck Equation Coupled to Local Equilibrium Monte Carlo Simulations.

    PubMed

    Boda, Dezső; Gillespie, Dirk

    2012-03-13

    We propose a procedure to compute the steady-state transport of charged particles based on the Nernst-Planck (NP) equation of electrodiffusion. To close the NP equation and to establish a relation between the concentration and electrochemical potential profiles, we introduce the Local Equilibrium Monte Carlo (LEMC) method. In this method, Grand Canonical Monte Carlo simulations are performed using the electrochemical potential specified for the distinct volume elements. An iteration procedure that self-consistently solves the NP and flux continuity equations with LEMC is shown to converge quickly. This NP+LEMC technique can be used in systems with diffusion of charged or uncharged particles in complex three-dimensional geometries, including systems with low concentrations and small applied voltages that are difficult for other particle simulation techniques.
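
    A toy version of the self-consistent iteration can be written in a few lines if the LEMC step is replaced by an ideal-solution closure μ(x) = ln c(x) + φ(x) (in kT units); in the actual method this concentration-potential relation comes from the Local Equilibrium Monte Carlo simulations instead. Grid size, boundary concentrations, the fixed potential profile, and the relaxation factor are all arbitrary choices for illustration.

```python
import numpy as np

n, L, D = 50, 1.0, 1.0
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
phi = -x / L                 # fixed applied potential profile (in kT/q units)
c = np.ones(n)
c[0], c[-1] = 2.0, 1.0       # bath (boundary) concentrations

for _ in range(20_000):
    mu = np.log(c) + phi                         # closure standing in for LEMC
    # Nernst-Planck flux on cell midpoints: J = -D * c * d(mu)/dx.
    J = -D * 0.5 * (c[1:] + c[:-1]) * np.diff(mu) / dx
    # Relax interior concentrations toward zero flux divergence
    # (the steady-state flux-continuity condition).
    c[1:-1] -= 1e-4 * np.diff(J) / dx

print(J.std() / abs(J.mean()))   # ≈ 0 once flux is spatially constant
```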

  9. Quantum Monte Carlo Simulation of Frustrated Kondo Lattice Models

    NASA Astrophysics Data System (ADS)

    Sato, Toshihiro; Assaad, Fakher F.; Grover, Tarun

    2018-03-01

    The absence of the negative sign problem in quantum Monte Carlo simulations of spin and fermion systems has different origins. World-line based algorithms for spins require positivity of matrix elements whereas auxiliary field approaches for fermions depend on symmetries such as particle-hole symmetry. For negative-sign-free spin and fermionic systems, we show that one can formulate a negative-sign-free auxiliary field quantum Monte Carlo algorithm that allows Kondo coupling of fermions with the spins. Using this general approach, we study a half-filled Kondo lattice model on the honeycomb lattice with geometric frustration. In addition to the conventional Kondo insulator and antiferromagnetically ordered phases, we find a partial Kondo screened state where spins are selectively screened so as to alleviate frustration, and the lattice rotation symmetry is broken nematically.

  10. Parameters Free Computational Characterization of Defects in Transition Metal Oxides with Diffusion Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Santana, Juan A.; Krogel, Jaron T.; Kent, Paul R.; Reboredo, Fernando

    Materials based on transition metal oxides (TMO's) are among the most challenging systems for computational characterization. Reliable and practical computations are possible by directly solving the many-body problem for TMO's with quantum Monte Carlo (QMC) methods. These methods are very computationally intensive, but recent developments in algorithms and computational infrastructures have enabled their application to real materials. We will show our efforts on the application of the diffusion quantum Monte Carlo (DMC) method to study the formation of defects in binary and ternary TMO and heterostructures of TMO. We will also outline current limitations in hardware and algorithms. This work is supported by the Materials Sciences & Engineering Division of the Office of Basic Energy Sciences, U.S. Department of Energy (DOE).

  11. Radiative interactions in chemically reacting compressible nozzle flows using Monte Carlo simulations

    NASA Technical Reports Server (NTRS)

    Liu, J.; Tiwari, Surendra N.

    1994-01-01

    The two-dimensional spatially elliptic Navier-Stokes equations have been used to investigate the radiative interactions in chemically reacting compressible flows of premixed hydrogen and air in an expanding nozzle. The radiative heat transfer term in the energy equation is simulated using the Monte Carlo method (MCM). The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. The spectral correlation has been considered in the Monte Carlo formulations. Results obtained demonstrate that the effect of radiation on the flow field is minimal but its effect on the wall heat transfer is significant. Extensive parametric studies are conducted to investigate the effects of equivalence ratio, wall temperature, inlet flow temperature, and the nozzle size on the radiative and conductive wall fluxes.

  12. Benchmarking and validation of a Geant4-SHADOW Monte Carlo simulation for dose calculations in microbeam radiation therapy.

    PubMed

    Cornelius, Iwan; Guatelli, Susanna; Fournier, Pauline; Crosbie, Jeffrey C; Sanchez Del Rio, Manuel; Bräuer-Krisch, Elke; Rosenfeld, Anatoly; Lerch, Michael

    2014-05-01

    Microbeam radiation therapy (MRT) is a synchrotron-based radiotherapy modality that uses high-intensity beams of spatially fractionated radiation to treat tumours. The rapid evolution of MRT towards clinical trials demands accurate treatment planning systems (TPS), as well as independent tools for the verification of TPS calculated dose distributions in order to ensure patient safety and treatment efficacy. Monte Carlo computer simulation represents the most accurate method of dose calculation in patient geometries and is best suited for the purpose of TPS verification. A Monte Carlo model of the ID17 biomedical beamline at the European Synchrotron Radiation Facility has been developed, including recent modifications, using the Geant4 Monte Carlo toolkit interfaced with the SHADOW X-ray optics and ray-tracing libraries. The code was benchmarked by simulating dose profiles in water-equivalent phantoms subject to irradiation by broad-beam (without spatial fractionation) and microbeam (with spatial fractionation) fields, and comparing against those calculated with a previous model of the beamline developed using the PENELOPE code. Validation against additional experimental dose profiles in water-equivalent phantoms subject to broad-beam irradiation was also performed. Good agreement between codes was observed, with the exception of out-of-field doses and toward the field edge for larger field sizes. Microbeam results showed good agreement between both codes and experimental results within uncertainties. Results of the experimental validation showed agreement for different beamline configurations. The asymmetry in the out-of-field dose profiles due to polarization effects was also investigated, yielding important information for the treatment planning process in MRT. This work represents an important step in the development of a Monte Carlo-based independent verification tool for treatment planning in MRT.

  13. A point kernel algorithm for microbeam radiation therapy

    NASA Astrophysics Data System (ADS)

    Debus, Charlotte; Oelfke, Uwe; Bartzsch, Stefan

    2017-11-01

    Microbeam radiation therapy (MRT) is a treatment approach in radiation therapy where the treatment field is spatially fractionated into arrays of a few tens of micrometre wide planar beams of unusually high peak doses separated by low dose regions of several hundred micrometre width. In preclinical studies, this treatment approach has proven to spare normal tissue more effectively than conventional radiation therapy, while being equally efficient in tumour control. So far, dose calculations in MRT, a prerequisite for future clinical applications, are based on Monte Carlo simulations. However, they are computationally expensive, since scoring volumes have to be small. In this article a kernel based dose calculation algorithm is presented that splits the calculation into photon and electron mediated energy transport, and performs the calculation of peak and valley doses in typical MRT treatment fields within a few minutes. Kernels are calculated analytically depending on the energy spectrum and material composition. In various homogeneous materials peak doses, valley doses and microbeam profiles are calculated and compared to Monte Carlo simulations. For a microbeam exposure of an anthropomorphic head phantom, calculated dose values are compared to measurements and Monte Carlo calculations. Except for regions close to material interfaces, calculated peak dose values match Monte Carlo results within 4% and valley dose values within 8% deviation. No significant differences are observed between profiles calculated by the kernel algorithm and Monte Carlo simulations. Measurements in the head phantom agree within 4% in the peak and within 10% in the valley region. The presented algorithm is attached to the treatment planning platform VIRTUOS. It was and is used for dose calculations in preclinical and pet-clinical trials at the biomedical beamline ID17 of the European Synchrotron Radiation Facility in Grenoble, France.

  14. Adaptive measurement selection for progressive damage estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Wenfan; Kovvali, Narayan; Papandreou-Suppappola, Antonia; Chattopadhyay, Aditi; Peralta, Pedro

    2011-04-01

    Noise and interference in sensor measurements degrade the quality of data and have a negative impact on the performance of structural damage diagnosis systems. In this paper, a novel adaptive measurement screening approach is presented to automatically select the most informative measurements and use them intelligently for structural damage estimation. The method is implemented efficiently in a sequential Monte Carlo (SMC) setting using particle filtering. The noise suppression and improved damage estimation capability of the proposed method is demonstrated by an application to the problem of estimating progressive fatigue damage in an aluminum compact-tension (CT) sample using noisy PZT sensor measurements.
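
    A generic SMC update of the kind used here can be sketched as follows: propagate damage-state particles through a (placeholder) fatigue-growth model, reweight by the likelihood of a noisy measurement, and resample when the effective sample size drops. The paper's adaptive measurement-screening step, which decides which measurements enter the update, is omitted from this sketch, and both models below are invented stand-ins for the PZT-based ones.

```python
import numpy as np

rng = np.random.default_rng(4)

def particle_filter_step(particles, weights, measurement,
                         propagate, log_likelihood):
    """One sequential Monte Carlo update for damage-state estimation."""
    particles = propagate(particles)
    weights = weights * np.exp(log_likelihood(particles, measurement))
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    ess = 1.0 / np.sum(weights ** 2)
    if ess < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Toy crack-growth model: exponential growth plus process noise.
prop = lambda x: x * 1.05 + rng.normal(0.0, 0.01, size=x.shape)
loglik = lambda x, z: -0.5 * ((z - x) / 0.1) ** 2

particles = rng.uniform(0.1, 1.0, size=1000)
weights = np.full(1000, 1.0 / 1000)
particles, weights = particle_filter_step(particles, weights, 0.6, prop, loglik)
print((particles * weights).sum())   # posterior-mean damage estimate
```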

  15. Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model

    NASA Astrophysics Data System (ADS)

    Al Sobhi, Mashail M.

    2015-02-01

    Bayesian estimation for the two parameters and the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Also, Bayesian prediction bounds for future DGOS from exponentiated Weibull model are obtained. The symmetric and asymmetric loss functions are considered for Bayesian computations. The Markov chain Monte Carlo (MCMC) methods are used for computing the Bayes estimates and prediction bounds. The results have been specialized to the lower record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.
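
    For readers unfamiliar with the MCMC machinery invoked here, the sketch below shows a generic random-walk Metropolis sampler applied to a deliberately simplified Weibull posterior (scale fixed at 1, Exp(1) prior on the shape); the paper's DGOS-based likelihood and its exact MCMC variant are more involved.

```python
import numpy as np

rng = np.random.default_rng(5)

def metropolis(log_post, theta0, n_iter=20_000, step=0.1):
    """Random-walk Metropolis: the simplest MCMC scheme that could
    drive Bayes estimates of the kind described above."""
    chain = np.empty(n_iter)
    theta, lp = theta0, log_post(theta0)
    for i in range(n_iter):
        prop = theta + rng.normal(0.0, step)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Toy posterior for a Weibull shape parameter k (scale = 1) given data.
data = rng.weibull(1.8, size=40)
def log_post(k):
    if k <= 0:
        return -np.inf
    # Weibull log-likelihood plus an Exp(1) prior on k.
    return (len(data) * np.log(k) + (k - 1) * np.log(data).sum()
            - (data ** k).sum() - k)

chain = metropolis(log_post, 1.0)
print(chain[5_000:].mean())   # posterior-mean shape estimate after burn-in
```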

  16. Variance reduction for Fokker–Planck based particle Monte Carlo schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick

    Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is solved simultaneously together with the main one. By correlating the two processes, the statistical errors can be dramatically reduced, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
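
    The correlated-process idea can be demonstrated on a toy problem: drive a main process and an auxiliary process with a known mean by the same random increments, then correct the main estimate by the auxiliary one. All processes and observables below are illustrative stand-ins, not the paper's Fokker-Planck scheme.

```python
import numpy as np

rng = np.random.default_rng(6)

# Main process: Ornstein-Uhlenbeck with a nonlinear observable f.
# Auxiliary process: driftless Brownian motion with a known observable
# mean, driven by the SAME increments so the two stay strongly correlated.
n_paths, n_steps, dt = 10_000, 100, 0.01
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))

x = np.zeros(n_paths)
y = np.zeros(n_paths)
for k in range(n_steps):
    x += -0.5 * x * dt + dW[:, k]   # main process (statistics unknown)
    y += dW[:, k]                   # auxiliary process (statistics known)

f = x ** 2                          # quantity of interest
g = y ** 2                          # correlated control, E[g] = n_steps * dt
estimate = f.mean() - g.mean() + n_steps * dt
print(estimate, "+/-", np.std(f - g) / np.sqrt(n_paths))
```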

  17. Intra-individual diagnostic image quality and organ-specific-radiation dose comparison between spiral cCT with iterative image reconstruction and z-axis automated tube current modulation and sequential cCT.

    PubMed

    Wenz, Holger; Maros, Máté E; Meyer, Mathias; Gawlitza, Joshua; Förster, Alex; Haubenreisser, Holger; Kurth, Stefan; Schoenberg, Stefan O; Groden, Christoph; Henzler, Thomas

    2016-01-01

    To prospectively evaluate image quality and organ-specific radiation dose of spiral cranial CT (cCT) combined with automated tube current modulation (ATCM) and iterative image reconstruction (IR) in comparison to sequential tilted cCT reconstructed with filtered back projection (FBP) without ATCM. 31 patients with a previously performed tilted non-contrast-enhanced sequential cCT acquisition on a 4-slice CT system with only FBP reconstruction and no ATCM were prospectively enrolled in this study for a clinically indicated cCT scan. All spiral cCT examinations were performed on a 3rd generation dual-source CT system using ATCM in the z-axis direction. Images were reconstructed using both FBP and IR (levels 1-5). A Monte-Carlo-simulation-based analysis was used to compare organ-specific radiation dose. Subjective image quality for various anatomic structures was evaluated using a 4-point Likert scale, and objective image quality was evaluated by comparing signal-to-noise ratios (SNR). Spiral cCT led to a significantly lower (p < 0.05) organ-specific radiation dose in all targets, including the eye lens. Subjective image quality of spiral cCT datasets with an IR reconstruction level of 5 was rated significantly higher compared to the sequential cCT acquisitions (p < 0.0001). Accordingly, mean SNR was significantly higher in all spiral datasets (FBP, IR 1-5) when compared to sequential cCT, with a mean SNR improvement of 44.77% (p < 0.0001). Spiral cCT combined with ATCM and IR allows for a significant radiation dose reduction, including a reduced eye lens organ dose, when compared to a tilted sequential cCT while improving subjective and objective image quality.

  18. A Primer in Monte Carlo Integration Using Mathcad

    ERIC Educational Resources Information Center

    Hoyer, Chad E.; Kegerreis, Jeb S.

    2013-01-01

    The essentials of Monte Carlo integration are presented for use in an upper-level physical chemistry setting. A Mathcad document that aids in the dissemination and utilization of this information is described and is available in the Supporting Information. A brief outline of Monte Carlo integration is given, along with ideas and pedagogy for…

  19. Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.

    2016-03-01

    Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction-limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction-limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We will present results for a focusing beam in a layered tissue model, demonstrating that for different scenarios the region of highest intensity, and thus the greatest heating, can change from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo for countless applications, including studying laser-tissue interactions in medical applications and light propagation through turbid media.
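
    One simple way to seed a photon Monte Carlo code with a Gaussian focus is to sample launch positions from the beam's surface intensity profile and aim each photon at a point drawn from the diffraction-limited waist rather than at a single geometric focal point. The sketch below does exactly that (ignoring refraction at the surface); the paper's actual sampling rule may differ.

```python
import numpy as np

rng = np.random.default_rng(7)

def launch_gaussian_photons(n, waist_radius, wavelength, focal_depth):
    """Sample photon start positions and directions so the ensemble
    reproduces a Gaussian beam focused at depth `focal_depth`."""
    # Beam radius at the surface from standard Gaussian-beam optics.
    z_R = np.pi * waist_radius ** 2 / wavelength          # Rayleigh range
    surface_radius = waist_radius * np.sqrt(1 + (focal_depth / z_R) ** 2)
    # Intensity ~ exp(-2 r^2 / w^2) means per-axis std of w / 2.
    x0 = rng.normal(0.0, surface_radius / 2, size=n)
    y0 = rng.normal(0.0, surface_radius / 2, size=n)
    # Aim each photon at a point in the diffraction-limited waist,
    # not at a single geometric focal point.
    xf = rng.normal(0.0, waist_radius / 2, size=n)
    yf = rng.normal(0.0, waist_radius / 2, size=n)
    d = np.stack([xf - x0, yf - y0, np.full(n, focal_depth)], axis=1)
    pos = np.stack([x0, y0, np.zeros(n)], axis=1)
    return pos, d / np.linalg.norm(d, axis=1, keepdims=True)

pos, dirs = launch_gaussian_photons(100_000, 1e-6, 0.5e-6, 100e-6)
```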

  20. Numerical integration of detector response functions via Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.

    Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. Here, this method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
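
    The speedup comes from reducing spectrum generation to linear algebra: once a response matrix has been tabulated from Monte Carlo, folding any trial source spectrum is a single matrix-vector product instead of a new simulation. The sketch below uses an invented Gaussian-blur response and a Watt-like trial spectrum purely for illustration.

```python
import numpy as np

# Suppose the response matrix has been tabulated once from Monte Carlo:
# R[i, j] = probability that a source particle emitted in energy bin j
# produces a detector signal in observed bin i. (R is a stand-in here;
# in practice each column comes from dedicated simulation runs.)
n_bins = 64
E = np.linspace(0.1, 10.0, n_bins)
R = np.array([[np.exp(-0.5 * ((eo - 0.9 * es) / 0.3) ** 2)
               for es in E] for eo in E])
R /= R.sum(axis=0, keepdims=True)        # normalize each source column

# Folding any trial source spectrum is now one matrix-vector product,
# which is the origin of the ~1000x speedup over re-simulation.
def fold(source_spectrum):
    return R @ source_spectrum

watt_like = np.exp(-E) * np.sinh(np.sqrt(2.0 * E))   # trial spectrum shape
observed = fold(watt_like / watt_like.sum())
print(observed.sum())    # ≈ 1: total probability is preserved by folding
```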

  1. Numerical integration of detector response functions via Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kelly, K. J.; O'Donnell, J. M.; Gomez, J. A.; Taddeucci, T. N.; Devlin, M.; Haight, R. C.; White, M. C.; Mosby, S. M.; Neudecker, D.; Buckner, M. Q.; Wu, C. Y.; Lee, H. Y.

    2017-09-01

    Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ∼ 1000 × faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. This method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.

  2. Numerical integration of detector response functions via Monte Carlo simulations

    DOE PAGES

    Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.; ...

    2017-06-13

    Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. Here, this method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.

  3. Monte Carlo simulations in X-ray imaging

    NASA Astrophysics Data System (ADS)

    Giersch, Jürgen; Durst, Jürgen

    2008-06-01

    Monte Carlo simulations have become crucial tools in many fields of X-ray imaging. They help to understand the influence of physical effects such as absorption, scattering and fluorescence of photons in different detector materials on image quality parameters. They allow studying new imaging concepts like photon counting, energy weighting or material reconstruction. Additionally, they can be applied to the fields of nuclear medicine to define virtual setups studying new geometries or image reconstruction algorithms. Furthermore, an implementation of the propagation physics of electrons and photons allows studying the behavior of (novel) X-ray generation concepts. This versatility of Monte Carlo simulations is illustrated with some examples done by the Monte Carlo simulation ROSI. An overview of the structure of ROSI is given as an example of a modern, well-proven, object-oriented, parallel computing Monte Carlo simulation for X-ray imaging.

  4. Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept

    NASA Technical Reports Server (NTRS)

    Thipphavong, David

    2010-01-01

    Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.

  5. EDITORIAL: Special section: Selected papers from the Second European Workshop on Monte Carlo Treatment Planning (MCTP2009) Special section: Selected papers from the Second European Workshop on Monte Carlo Treatment Planning (MCTP2009)

    NASA Astrophysics Data System (ADS)

    Spezi, Emiliano

    2010-08-01

    Sixty years after the paper 'The Monte Carlo method' by N Metropolis and S Ulam in The Journal of the American Statistical Association (Metropolis and Ulam 1949), use of the most accurate algorithm for computer modelling of radiotherapy linear accelerators, radiation detectors and three dimensional patient dose was discussed in Wales (UK). The Second European Workshop on Monte Carlo Treatment Planning (MCTP2009) was held at the National Museum of Wales in Cardiff. The event, organized by Velindre NHS Trust, Cardiff University and Cancer Research Wales, lasted two and a half days, during which leading experts and contributing authors presented and discussed the latest advances in the field of Monte Carlo treatment planning (MCTP). MCTP2009 was highly successful, judging from the number of participants which was in excess of 140. Of the attendees, 24% came from the UK, 46% from the rest of Europe, 12% from North America and 18% from the rest of the World. Fifty-three oral presentations and 24 posters were delivered in a total of 12 scientific sessions. MCTP2009 follows the success of previous similar initiatives (Verhaegen and Seuntjens 2005, Reynaert 2007, Verhaegen and Seuntjens 2008), and confirms the high level of interest in Monte Carlo technology for radiotherapy treatment planning. The 13 articles selected for this special section (following Physics in Medicine and Biology's usual rigorous peer-review procedure) give a good picture of the high quality of the work presented at MCTP2009. The book of abstracts can be downloaded from http://www.mctp2009.org. I wish to thank the IOP Medical Physics and Computational Physics Groups for their financial support, Elekta Ltd and Dosisoft for sponsoring MCTP2009, and leading manufacturers such as BrainLab, Nucletron and Varian for showcasing their latest MC-based radiotherapy solutions during a dedicated technical session. I am also very grateful to the eight invited speakers who kindly accepted to give keynote presentations which contributed significantly to raising the quality of the event and capturing the interest of the medical physics community. I also wish to thank all those who contributed to the success of MCTP2009: the members of the local Organizing Committee and the Workshop Management Team who managed the event very efficiently, the members of the European Working Group in Monte Carlo Treatment Planning (EWG-MCTP) who acted as Guest Associate Editors for the MCTP2009 abstracts reviewing process, and all the authors who generated new, high quality work. Finally, I hope that you find the contents of this special section enjoyable and informative.

    Emiliano Spezi
    Chairman of MCTP2009 Organizing Committee and Guest Editor

    References

    Metropolis N and Ulam S 1949 The Monte Carlo method J. Amer. Stat. Assoc. 44 335-41
    Reynaert N 2007 First European Workshop on Monte Carlo Treatment Planning J. Phys.: Conf. Ser. 74 011001
    Verhaegen F and Seuntjens J 2005 International Workshop on Current Topics in Monte Carlo Treatment Planning Phys. Med. Biol. 50
    Verhaegen F and Seuntjens J 2008 International Workshop on Monte Carlo Techniques in Radiotherapy Delivery and Verification J. Phys.: Conf. Ser. 102 011001

  6. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-07

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
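
    The non-uniform sampling step can be illustrated with a short allocation routine that distributes Monte Carlo histories across pencil-beam spots in proportion to their current optimization intensities, with a small floor so low-weight spots keep some statistics. The floor heuristic and all numbers are assumptions for illustration, not the published APS rule.

```python
import numpy as np

rng = np.random.default_rng(8)

def sample_spot_histories(intensities, n_total, floor=0.05):
    """Allocate MC histories to spots in proportion to their current
    intensities, mixed with a uniform floor so no spot is starved."""
    w = np.asarray(intensities, dtype=float)
    p = w / w.sum()
    p = (1.0 - floor) * p + floor / len(w)   # guard against zero-weight spots
    counts = rng.multinomial(n_total, p)
    return counts   # histories to simulate per spot this iteration

spot_intensities = np.array([0.1, 5.0, 2.0, 0.0, 8.0])
print(sample_spot_histories(spot_intensities, 100_000))
```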

  7. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy

    NASA Astrophysics Data System (ADS)

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-01

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.

  8. A New Approach to Integrate GPU-based Monte Carlo Simulation into Inverse Treatment Plan Optimization for Proton Therapy

    PubMed Central

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2016-01-01

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6±15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size. PMID:27991456

  9. The anesthetic action of some polyhalogenated ethers-Monte Carlo method based QSAR study.

    PubMed

    Golubović, Mlađan; Lazarević, Milan; Zlatanović, Dragan; Krtinić, Dane; Stoičkov, Viktor; Mladenović, Bojan; Milić, Dragan J; Sokolović, Dušan; Veselinović, Aleksandar M

    2018-04-13

    To date, there has been an ongoing debate about the mode of action of general anesthetics, with many biological sites postulated as targets for their action. However, postoperative nausea and vomiting are common problems, and inhalational agents may have a role in their development. When a mode of action is unknown, QSAR modelling is essential in drug development. To investigate aspects of their anesthetic action, QSAR models based on the Monte Carlo method were developed for a set of polyhalogenated ethers. Until now, their anesthetic action has not been completely defined, although some hypotheses have been suggested. Therefore, a QSAR model should be developed on molecular fragments that contribute to anesthetic action. QSAR models were built on the basis of optimal molecular descriptors based on the SMILES notation and local graph invariants, and the Monte Carlo optimization method with three random splits into training and test sets was applied for model development. Different methods, including the novel index of ideality of correlation, were applied to determine the robustness of the model and its predictive potential. The Monte Carlo optimization process proved to be an efficient in silico tool for building a robust model of good statistical quality. Molecular fragments which have both positive and negative influence on anesthetic action were determined. The presented study can be useful in the search for novel anesthetics.

  10. Thermal transport in nanocrystalline Si and SiGe by ab initio based Monte Carlo simulation.

    PubMed

    Yang, Lina; Minnich, Austin J

    2017-03-14

    Nanocrystalline thermoelectric materials based on Si have long been of interest because Si is earth-abundant, inexpensive, and non-toxic. However, a poor understanding of phonon grain boundary scattering and its effect on thermal conductivity has impeded efforts to improve the thermoelectric figure of merit. Here, we report an ab-initio based computational study of thermal transport in nanocrystalline Si-based materials using a variance-reduced Monte Carlo method with the full phonon dispersion and intrinsic lifetimes from first-principles as input. By fitting the transmission profile of grain boundaries, we obtain excellent agreement with experimental thermal conductivity of nanocrystalline Si [Wang et al. Nano Letters 11, 2206 (2011)]. Based on these calculations, we examine phonon transport in nanocrystalline SiGe alloys with ab-initio electron-phonon scattering rates. Our calculations show that low energy phonons still transport substantial amounts of heat in these materials, despite scattering by electron-phonon interactions, due to the high transmission of phonons at grain boundaries, and thus improvements in ZT are still possible by disrupting these modes. This work demonstrates the important insights into phonon transport that can be obtained using ab-initio based Monte Carlo simulations in complex nanostructured materials.

  11. Thermal transport in nanocrystalline Si and SiGe by ab initio based Monte Carlo simulation

    PubMed Central

    Yang, Lina; Minnich, Austin J.

    2017-01-01

    Nanocrystalline thermoelectric materials based on Si have long been of interest because Si is earth-abundant, inexpensive, and non-toxic. However, a poor understanding of phonon grain boundary scattering and its effect on thermal conductivity has impeded efforts to improve the thermoelectric figure of merit. Here, we report an ab-initio based computational study of thermal transport in nanocrystalline Si-based materials using a variance-reduced Monte Carlo method with the full phonon dispersion and intrinsic lifetimes from first-principles as input. By fitting the transmission profile of grain boundaries, we obtain excellent agreement with experimental thermal conductivity of nanocrystalline Si [Wang et al. Nano Letters 11, 2206 (2011)]. Based on these calculations, we examine phonon transport in nanocrystalline SiGe alloys with ab-initio electron-phonon scattering rates. Our calculations show that low energy phonons still transport substantial amounts of heat in these materials, despite scattering by electron-phonon interactions, due to the high transmission of phonons at grain boundaries, and thus improvements in ZT are still possible by disrupting these modes. This work demonstrates the important insights into phonon transport that can be obtained using ab-initio based Monte Carlo simulations in complex nanostructured materials. PMID:28290484

  12. Discrete Diffusion Monte Carlo for Electron Thermal Transport

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory

    2014-10-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. is adapted to a Discrete Diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid IMC-DDMC (Implicit Monte Carlo) method. The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratory - Albuquerque.

  13. Cell-veto Monte Carlo algorithm for long-range systems.

    PubMed

    Kapfer, Sebastian C; Krauth, Werner

    2016-09-01

    We present a rigorous, efficient event-chain Monte Carlo algorithm for long-range interacting particle systems. Using a cell-veto scheme within the factorized Metropolis algorithm, we compute each single-particle move with a fixed number of operations. For slowly decaying potentials such as Coulomb interactions, screening line charges allow us to take into account periodic boundary conditions. We discuss the performance of the cell-veto Monte Carlo algorithm for general inverse-power-law potentials, and illustrate how it provides a new outlook on one of the prominent bottlenecks in large-scale atomistic Monte Carlo simulations.
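
    The factorized Metropolis filter underlying the cell-veto scheme can be sketched in a few lines: a proposed move must be accepted independently by every interaction factor, so any single factor can veto it, each with probability min(1, exp(-beta*dE_f)). The sketch below shows only this acceptance rule; the cell-based bookkeeping that makes the veto samplable in a fixed number of operations is omitted.

        import math
        import random

        rng = random.Random(42)

        def factorized_metropolis_accept(factor_energy_changes, beta=1.0):
            # Each factor accepts with probability min(1, exp(-beta * dE_f));
            # the move goes through only if no factor vetoes it.
            for dE in factor_energy_changes:
                if rng.random() >= math.exp(-beta * max(0.0, dE)):
                    return False  # this factor vetoes the move
            return True

        # Example: three pair factors; only positive energy changes can veto.
        print(factorized_metropolis_accept([0.1, -0.5, 0.02], beta=2.0))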

  14. Nuclide Depletion Capabilities in the Shift Monte Carlo Code

    DOE PAGES

    Davidson, Gregory G.; Pandya, Tara M.; Johnson, Seth R.; ...

    2017-12-21

    A new depletion capability has been developed in the Exnihilo radiation transport code suite. This capability enables massively parallel domain-decomposed coupling between the Shift continuous-energy Monte Carlo solver and the nuclide depletion solvers in ORIGEN to perform high-performance Monte Carlo depletion calculations. This paper describes this new depletion capability and discusses its various features, including a multi-level parallel decomposition, high-order transport-depletion coupling, and energy-integrated power renormalization. Several test problems are presented to validate the new capability against other Monte Carlo depletion codes, and the parallel performance of the new capability is analyzed.

  15. Ground state of excitonic molecules by the Green's-function Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, M.A.; Vashishta, P.; Kalia, R.K.

    1983-12-26

    The ground-state energy of excitonic molecules is evaluated as a function of the ratio of electron and hole masses, σ, with use of the Green's-function Monte Carlo method. For all σ, the Green's-function Monte Carlo energies are significantly lower than the variational estimates and in favorable agreement with experiments. In excitonic rydbergs, the binding energy of the positronium molecule (σ = 1) is predicted to be -0.06, and for σ << 1 the Green's-function Monte Carlo energies agree with the "exact" limiting behavior, E = -2.346 + 0.764σ.

  16. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images

    PubMed Central

    Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G

    2014-01-01

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of a homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed for the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles, a 2% average difference with respect to the kernel convolution method was achieved; this difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image-based dosimetry in nuclear medicine. PMID:24200697

  17. Monte Carlo-based interval transformation analysis for multi-criteria decision analysis of groundwater management strategies under uncertain naphthalene concentrations and health risks

    NASA Astrophysics Data System (ADS)

    Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong

    2016-08-01

    A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with inexactness of input data represented as intervals, (2) reducing computational time through the introduction of a Monte Carlo sampling method, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives is considered for each duration on the basis of four criteria. Results indicated that the most desirable management strategy was action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management horizon.
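
    A minimal sketch of the interval-sampling idea: criteria values known only as intervals are repeatedly realized by uniform Monte Carlo draws, each realization is scored by a weighted sum, and the fraction of draws in which each alternative wins approximates its desirability under data uncertainty. The uniform sampling, the weighted-sum score, and the example numbers are assumptions for illustration, not the published MCITA formulation.

        import numpy as np

        rng = np.random.default_rng(7)

        def rank_alternatives(lo, hi, weights, n_samples=10000):
            # lo, hi: arrays of shape [n_alternatives, n_criteria]; lower is better.
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            wins = np.zeros(lo.shape[0], int)
            for _ in range(n_samples):
                x = rng.uniform(lo, hi)                  # one realization of all intervals
                scores = x @ np.asarray(weights, float)  # weighted-sum score
                wins[np.argmin(scores)] += 1             # lower cost/risk wins
            return wins / n_samples

        lo = [[1.0, 0.2], [0.8, 0.4], [1.2, 0.1]]
        hi = [[1.5, 0.5], [1.6, 0.6], [1.4, 0.3]]
        print(rank_alternatives(lo, hi, weights=[0.6, 0.4]))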

  18. Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Degroh, Kim K.; Auer, Bruce M.; Gebauer, Linda; Edwards, Jonathan L.

    1993-01-01

    Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) assists in understanding the mechanisms involved. Thus the reliability of predicting in-space durability of materials based on ground laboratory testing should be improved. A computational model which simulates atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of an assumed mechanistic behavior of atomic oxygen interaction based on in-space atomic oxygen erosion of unprotected polymers and ground laboratory atomic oxygen interaction with protected polymers, prediction of atomic oxygen interaction with protected polymers on LDEF was accomplished. However, the results of these predictions are not consistent with the observed LDEF results at defect sites in protected polymers. Improved agreement between observed LDEF results and Monte Carlo model predictions can be achieved by modifying the atomic oxygen interaction assumptions used in the model. LDEF atomic oxygen undercutting results, modeling assumptions, and implications are presented.

  19. Output factor determination based on Monte Carlo simulation for small cone field in 10-MV photon beam.

    PubMed

    Fukata, Kyohei; Sugimoto, Satoru; Kurokawa, Chie; Saito, Akito; Inoue, Tatsuya; Sasai, Keisuke

    2018-06-01

    The difficulty of measuring the output factor (OPF) in a small field has been frequently discussed in recent publications. This study aimed to determine the OPF in small fields using a 10-MV photon beam and a stereotactic conical collimator (cone). The OPF was measured by two diode detectors (SFD, EDGE detector) and one micro-ion chamber (PinPoint 3D chamber) in a water phantom. A Monte Carlo simulation using a simplified detector model was performed to obtain the correction factor for the detector measurements. An OPF difference of about 12% was observed between the EDGE detector and the PinPoint 3D chamber at the smallest field (7.5 mm diameter). By applying the Monte Carlo-based correction factor to the measurements, the maximum discrepancy among the three detectors was reduced to within 3%. The results indicate that determination of the OPF in a small field should be performed carefully. In particular, detector choice and application of the appropriate correction factor are very important in this regard.

  20. Monte-Carlo-based uncertainty propagation with hierarchical models—a case study in dynamic torque

    NASA Astrophysics Data System (ADS)

    Klaus, Leonard; Eichstädt, Sascha

    2018-04-01

    For a dynamic calibration, a torque transducer is described by a mechanical model, and the corresponding model parameters are to be identified from measurement data. A measuring device for the primary calibration of dynamic torque, and a corresponding model-based calibration approach, have recently been developed at PTB. The complete mechanical model of the calibration set-up is very complex, and involves several calibration steps—making a straightforward implementation of a Monte Carlo uncertainty evaluation tedious. With this in mind, we here propose to separate the complete model into sub-models, with each sub-model being treated with individual experiments and analysis. The uncertainty evaluation for the overall model then has to combine the information from the sub-models in line with Supplement 2 of the Guide to the Expression of Uncertainty in Measurement. In this contribution, we demonstrate how to carry this out using the Monte Carlo method. The uncertainty evaluation involves various input quantities of different origin and the solution of a numerical optimisation problem.
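
    The Monte Carlo combination of sub-models can be sketched as follows: each sub-model contributes a sample of its output quantity, and the overall model is evaluated trial by trial on those samples, in the spirit of Supplement 2 of the GUM. The two input quantities, the resonance-frequency output model, and all numbers below are toy assumptions, not the PTB torque model.

        import numpy as np

        rng = np.random.default_rng(0)
        M = 10**5  # number of Monte Carlo trials

        # Hypothetical sub-model results, each summarized by a best estimate and
        # a standard uncertainty from its own calibration experiment.
        stiffness = rng.normal(1.8e3, 2.5, M)     # torsional stiffness (N*m/rad)
        inertia = rng.normal(1.2e-3, 4e-6, M)     # mass moment of inertia (kg*m^2)

        # Toy overall model combining the sub-model samples trial by trial.
        f0 = np.sqrt(stiffness / inertia) / (2 * np.pi)

        print(f"f0 = {f0.mean():.2f} Hz, u(f0) = {f0.std(ddof=1):.2f} Hz")
        print("95% coverage interval:", np.percentile(f0, [2.5, 97.5]))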

  1. Monte-Carlo background simulations of present and future detectors in x-ray astronomy

    NASA Astrophysics Data System (ADS)

    Tenzer, C.; Kendziorra, E.; Santangelo, A.

    2008-07-01

    Reaching a low-level and well understood internal instrumental background is crucial for the scientific performance of an X-ray detector and, therefore, a main objective of the instrument designers. Monte-Carlo simulations of the physics processes and interactions taking place in a space-based X-ray detector as a result of its orbital environment can be applied to explain the measured background of existing missions. They are thus an excellent tool to predict and optimize the background of future observatories. Weak points of a design and the main sources of the background can be identified and methods to reduce them can be implemented and studied within the simulations. Using the Geant4 Monte-Carlo toolkit, we have created a simulation environment for space-based detectors and we present results of such background simulations for XMM-Newton's EPIC pn-CCD camera. The environment is also currently used to estimate and optimize the background of the future instruments Simbol-X and eRosita.

  2. The Application of the Monte Carlo Approach to Cognitive Diagnostic Computerized Adaptive Testing With Content Constraints

    ERIC Educational Resources Information Center

    Mao, Xiuzhen; Xin, Tao

    2013-01-01

    The Monte Carlo approach which has previously been implemented in traditional computerized adaptive testing (CAT) is applied here to cognitive diagnostic CAT to test the ability of this approach to address multiple content constraints. The performance of the Monte Carlo approach is compared with the performance of the modified maximum global…

  3. Modifying the Monte Carlo Quiz to Increase Student Motivation, Participation, and Content Retention

    ERIC Educational Resources Information Center

    Simonson, Shawn R.

    2017-01-01

    Fernald developed the Monte Carlo Quiz format to enhance retention, encourage students to prepare for class, read with intention, and organize information in psychology classes. This author modified the Monte Carlo Quiz, combined it with the Minute Paper, and applied it to various courses. Students write quiz questions as part of the Minute Paper…

  4. The Monte Carlo Method. Popular Lectures in Mathematics.

    ERIC Educational Resources Information Center

    Sobol', I. M.

    The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…

  5. Anthology of the Development of Radiation Transport Tools as Applied to Single Event Effects

    NASA Astrophysics Data System (ADS)

    Reed, R. A.; Weller, R. A.; Akkerman, A.; Barak, J.; Culpepper, W.; Duzellier, S.; Foster, C.; Gaillardin, M.; Hubert, G.; Jordan, T.; Jun, I.; Koontz, S.; Lei, F.; McNulty, P.; Mendenhall, M. H.; Murat, M.; Nieminen, P.; O'Neill, P.; Raine, M.; Reddell, B.; Saigné, F.; Santin, G.; Sihver, L.; Tang, H. H. K.; Truscott, P. R.; Wrobel, F.

    2013-06-01

    This anthology contains contributions from eleven different groups, each developing and/or applying Monte Carlo-based radiation transport tools to simulate a variety of effects that result from energy transferred to a semiconductor material by a single particle event. The topics span from basic mechanisms for single-particle induced failures to applied tasks like developing websites to predict on-orbit single event failure rates using Monte Carlo radiation transport tools.

  6. A Wigner Monte Carlo approach to density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sellier, J.M., E-mail: jeanmichel.sellier@gmail.com; Dimov, I.

    2014-08-01

    In order to simulate quantum N-body systems, stationary and time-dependent density functional theories rely on the capacity of calculating the single-electron wave-functions of a system from which one obtains the total electron density (Kohn–Sham systems). In this paper, we introduce the use of the Wigner Monte Carlo method in ab-initio calculations. This approach allows time-dependent simulations of chemical systems in the presence of reflective and absorbing boundary conditions. It also enables an intuitive comprehension of chemical systems in terms of the Wigner formalism based on the concept of phase-space. Finally, being based on a Monte Carlo method, it scales very well on parallel machines, paving the way towards the time-dependent simulation of very complex molecules. A validation is performed by studying the electron distribution of three different systems: a lithium atom, a boron atom and a hydrogenic molecule. For the sake of simplicity, we start from initial conditions not too far from equilibrium and show that the systems reach a stationary regime, as expected (although no restriction is imposed on the choice of the initial conditions). We also show a good agreement with the standard density functional theory for the hydrogenic molecule. These results demonstrate that the combination of the Wigner Monte Carlo method and Kohn–Sham systems provides a reliable computational tool which could, eventually, be applied to more sophisticated problems.

  7. Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: A path for the optimization of low-energy many-body bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reboredo, Fernando A.; Kim, Jeongnim

    A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest-energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.

  8. Hamiltonian Monte Carlo Inversion of Seismic Sources in Complex Media

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; Simutė, S.

    2017-12-01

    We present a probabilistic seismic source inversion method that properly accounts for 3D heterogeneous Earth structure and provides full uncertainty information on the timing, location and mechanism of the event. Our method rests on two essential elements: (1) reciprocity and spectral-element simulations in complex media, and (2) Hamiltonian Monte Carlo sampling that requires only a small number of test models. Using spectral-element simulations of 3D, visco-elastic, anisotropic wave propagation, we precompute a data base of the strain tensor in time and space by placing sources at the positions of receivers. Exploiting reciprocity, this receiver-side strain data base can be used to promptly compute synthetic seismograms at the receiver locations for any hypothetical source within the volume of interest. The rapid solution of the forward problem enables a Bayesian solution of the inverse problem. For this, we developed a variant of Hamiltonian Monte Carlo (HMC) sampling. Taking advantage of easily computable derivatives, HMC converges to the posterior probability density with orders of magnitude fewer samples than derivative-free Monte Carlo methods. (Exact numbers depend on observational errors and the quality of the prior). We apply our method to the Japanese Islands region, where we previously constrained the 3D structure of the crust and upper mantle using full-waveform inversion with a minimum period of around 15 s.
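
    For readers unfamiliar with the sampler, a generic Hamiltonian Monte Carlo step looks like the sketch below: leapfrog integration of Hamiltonian dynamics followed by a Metropolis correction of the discretization error. The Gaussian target stands in for the actual source-inversion posterior, which would replace log_post with a misfit built on the precomputed strain database; step size and trajectory length here are arbitrary.

        import numpy as np

        rng = np.random.default_rng(3)

        def hmc_step(q, log_post, grad_log_post, eps=0.1, n_leapfrog=20):
            # Draw a momentum, integrate Hamiltonian dynamics with leapfrog,
            # then accept/reject on the change in total energy.
            p = rng.standard_normal(q.shape)
            q_new, p_new = q.copy(), p.copy()
            p_new += 0.5 * eps * grad_log_post(q_new)
            for _ in range(n_leapfrog - 1):
                q_new += eps * p_new
                p_new += eps * grad_log_post(q_new)
            q_new += eps * p_new
            p_new += 0.5 * eps * grad_log_post(q_new)
            # Metropolis correction on H = -log_post + |p|^2 / 2
            dH = (log_post(q_new) - 0.5 * p_new @ p_new) \
               - (log_post(q) - 0.5 * p @ p)
            return q_new if np.log(rng.uniform()) < dH else q

        # Example target: a 2-D standard Gaussian posterior.
        log_post = lambda q: -0.5 * q @ q
        grad_log_post = lambda q: -q
        q = np.zeros(2)
        samples = []
        for _ in range(1000):
            q = hmc_step(q, log_post, grad_log_post)
            samples.append(q)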

  9. Coupling of kinetic Monte Carlo simulations of surface reactions to transport in a fluid for heterogeneous catalytic reactor modeling.

    PubMed

    Schaefer, C; Jansen, A P J

    2013-02-07

    We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady-state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to efficiently use a limited amount of kinetic Monte Carlo simulations. In general, the stochastic kinetic Monte Carlo results do not obey mass conservation, so unphysical accumulation of mass could occur in the reactor. We have therefore developed a mass-balance correction method, applicable to any surface-catalyzed reaction, that is based on a stoichiometry matrix and a least-squares problem reduced to a non-singular set of linear equations. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation, in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interactions at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from the literature.
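
    One standard least-squares construction of such a mass-balance correction (a generic analogue of the method described, not its exact formulation) projects the noisy net production rates onto the mass-conserving subspace, i.e. it applies the smallest correction such that A r = 0 for a balance matrix A:

        import numpy as np

        def mass_balance_correction(rates, balance_matrix):
            # Smallest (least-squares) correction to noisy net production rates
            # so that A @ r_corr = 0: orthogonal projection onto null(A),
            # computed via the normal equations (A A^T) y = A r.
            A = np.asarray(balance_matrix, float)  # balances x species
            r = np.asarray(rates, float)
            y, *_ = np.linalg.lstsq(A @ A.T, A @ r, rcond=None)
            return r - A.T @ y

        # Example: two species of one element; net production must cancel.
        A = np.array([[1.0, 1.0]])
        print(mass_balance_correction([0.31, -0.27], A))  # -> [0.29, -0.29]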

  10. Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics

    PubMed Central

    Hey, Jody; Nielsen, Rasmus

    2007-01-01

    In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although the framework is not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231

  11. Hamiltonian Markov Chain Monte Carlo Methods for the CUORE Neutrinoless Double Beta Decay Sensitivity

    NASA Astrophysics Data System (ADS)

    Graham, Eleanor; Cuore Collaboration

    2017-09-01

    The CUORE experiment is a large-scale bolometric detector seeking to observe the never-before-seen process of neutrinoless double beta decay. Predictions for CUORE's sensitivity to neutrinoless double beta decay allow for an understanding of the half-life ranges that the detector can probe, and for an evaluation of the relative importance of different detector parameters. Currently, CUORE uses a Bayesian analysis based on BAT, which uses Metropolis-Hastings Markov chain Monte Carlo, for its sensitivity studies. My work evaluates the viability and potential improvements of switching the Bayesian analysis to Hamiltonian Monte Carlo, realized through the program Stan and its Morpho interface. I demonstrate that the BAT study can be successfully recreated in Stan, and perform a detailed comparison between the results and computation times of the two methods.

  12. Large-cell Monte Carlo renormalization of irreversible growth processes

    NASA Technical Reports Server (NTRS)

    Nakanishi, H.; Family, F.

    1985-01-01

    Monte Carlo sampling is applied to a recently formulated direct-cell renormalization method for irreversible, disorderly growth processes. Large-cell Monte Carlo renormalization is carried out for various nonequilibrium problems based on the formulation dealing with relative probabilities. Specifically, the method is demonstrated by application to the 'true' self-avoiding walk and the Eden model of growing animals for d = 2, 3, and 4 and to the invasion percolation problem for d = 2 and 3. The results are asymptotically in agreement with expectations; however, unexpected complications arise, suggesting the possibility of crossovers and, in any case, demonstrating the danger of using small cells alone because of the very slow convergence as the cell size b is extrapolated to infinity. The difficulty of applying the present method to the diffusion-limited-aggregation model is commented on.

  13. Investigation of radiative interaction in laminar flows using Monte Carlo simulation

    NASA Technical Reports Server (NTRS)

    Liu, Jiwen; Tiwari, S. N.

    1993-01-01

    The Monte Carlo method (MCM) is employed to study the radiative interactions in fully developed laminar flow between two parallel plates. Taking advantage of the MCM's straightforward mathematical treatment, a general numerical procedure is developed for nongray radiative interaction. The nongray model is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. To validate the Monte Carlo simulation for nongray radiation problems, the results of radiative dissipation from the MCM are compared with two available solutions for a given temperature profile between two plates. After this validation, the MCM is employed to solve the present physical problem, and results for the bulk temperature are compared with available solutions. In general, good agreement is noted, and reasons for some discrepancies in certain ranges of parameters are explained.

  14. Comparison of internal dose estimates obtained using organ-level, voxel S value, and Monte Carlo techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grimes, Joshua, E-mail: grimes.joshua@mayo.edu; Celler, Anna

    2014-09-15

    Purpose: The authors' objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with 99mTc-hydrazinonicotinamide-Tyr3-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate 99mTc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for 131I, 177Lu, and 90Y assuming the same biological half-lives as the 99mTc-labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation, and voxel-level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for 99mTc, 131I, 177Lu, and 90Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms, with a minimum dose covering 90% of the volume (D90) agreeing within ±3% on average. Conclusions: Several aspects of OLINDA/EXM dose calculation were compared with patient-specific dose estimates obtained using Monte Carlo. Differences in patient anatomy led to large differences in cross-organ doses. However, total organ doses were still in good agreement since most of the deposited dose is due to self-irradiation. Comparison of voxelized doses calculated by Monte Carlo and the voxel S value technique showed that the 3D dose distributions produced by the respective methods are nearly identical.

  15. Gray: a ray tracing-based Monte Carlo simulator for PET.

    PubMed

    Freese, David L; Olcott, Peter D; Buss, Samuel R; Levin, Craig S

    2018-05-21

    Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a [Formula: see text] speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within [Formula: see text]% when accounting for differences in peak NECR. We also estimate the peak NECR to be [Formula: see text] kcps, or within [Formula: see text]% of published experimental data. The activity concentration of the peak is also estimated within 1.3%.

  16. Stochastic Analysis of Orbital Lifetimes of Spacecraft

    NASA Technical Reports Server (NTRS)

    Sasamoto, Washito; Goodliff, Kandyce; Cornelius, David

    2008-01-01

    A document discusses (1) a Monte-Carlo-based methodology for probabilistic prediction and analysis of orbital lifetimes of spacecraft and (2) Orbital Lifetime Monte Carlo (OLMC), a Fortran computer program consisting of a previously developed long-term orbit propagator integrated with a Monte Carlo engine. OLMC enables modeling of variances of key physical parameters that affect orbital lifetimes through the use of probability distributions. These parameters include altitude, speed, and flight-path angle at insertion into orbit; solar flux; and launch delays. The products of OLMC are predicted lifetimes (durations above specified minimum altitudes) for the number of user-specified cases. Histograms generated from such predictions can be used to determine the probabilities that spacecraft will satisfy lifetime requirements. The document discusses uncertainties that affect modeling of orbital lifetimes. Issues of repeatability, smoothness of distributions, and code run time are considered for the purpose of establishing values of code-specific parameters and the number of Monte Carlo runs. Results from test cases are interpreted as demonstrating that solar-flux predictions are the primary sources of variations in predicted lifetimes. Therefore, it is concluded, multiple sets of predictions should be utilized to fully characterize the lifetime range of a spacecraft.
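
    The structure of such a Monte Carlo lifetime study can be sketched in a few lines: draw the uncertain inputs from their assumed distributions, propagate each draw through the orbit model, and histogram the resulting lifetimes. The decay model below is a placeholder for the long-term propagator, and all distributions, coefficients, and units are illustrative only.

        import numpy as np

        rng = np.random.default_rng(11)

        def lifetime_toy(altitude_km, solar_flux):
            # Placeholder for the long-term orbit propagator: higher insertion
            # altitude extends life; higher solar flux (denser atmosphere)
            # shortens it. Arbitrary units and coefficients.
            return 0.05 * (altitude_km - 300.0) ** 1.5 / solar_flux

        n = 10000
        altitude = rng.normal(500.0, 10.0, n)        # insertion altitude (km)
        flux = rng.lognormal(np.log(1.2), 0.3, n)    # uncertain solar-flux level

        lifetimes = lifetime_toy(altitude, flux)
        print("P(lifetime >= 100):", np.mean(lifetimes >= 100.0))
        hist, edges = np.histogram(lifetimes, bins=20)  # basis for a histogram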

  17. A review: Functional near infrared spectroscopy evaluation in muscle tissues using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Halim, A. A. A.; Laili, M. H.; Salikin, M. S.; Rusop, M.

    2018-05-01

    Monte Carlo simulation quantifies light propagation inside tissue, including the absorption and scattering coefficients, by counting large numbers of simulated photons, and serves as a preliminary study for functional near-infrared applications. The goal of this paper is to identify the optical properties, using Monte Carlo simulation, for a non-invasive functional near-infrared spectroscopy (fNIRS) evaluation of penetration depth in human muscle. The paper describes the NIRS principle and the basis for its proposed use in Monte Carlo simulation, focusing on several important parameters, including ATP and ADP, and relating them to blood flow and oxygen content at a given exercise intensity. It also covers the advantages and limitations of such an application of the simulation. The results may help show that human muscle is transparent in this near-infrared region and can deliver substantial information on the oxygenation level in muscle. This might thus be useful as a non-invasive technique for detecting oxygen status in the muscle of living people, whether athletes or working people, and would allow many investigations of muscle physiology in the future.

  18. A Monte-Carlo Benchmark of TRIPOLI-4® and MCNP on ITER neutronics

    NASA Astrophysics Data System (ADS)

    Blanchet, David; Pénéliau, Yannick; Eschbach, Romain; Fontaine, Bruno; Cantone, Bruno; Ferlet, Marc; Gauthier, Eric; Guillon, Christophe; Letellier, Laurent; Proust, Maxime; Mota, Fernando; Palermo, Iole; Rios, Luis; Guern, Frédéric Le; Kocan, Martin; Reichle, Roger

    2017-09-01

    Radiation protection and shielding studies are often based on the extensive use of 3D Monte-Carlo neutron and photon transport simulations. The ITER organization hence recommends the use of the MCNP-5 code (version 1.60), in association with the FENDL-2.1 neutron cross section data library, specifically dedicated to fusion applications. The MCNP reference model of the ITER tokamak, the 'C-lite', is being continuously developed and improved. This article proposes to develop an alternative model, equivalent to the 'C-lite', for the Monte-Carlo code TRIPOLI-4®. A benchmark study is defined to test this new model. Since one of the most critical areas for ITER neutronics analysis concerns the assessment of radiation levels and Shutdown Dose Rates (SDDR) behind the Equatorial Port Plugs (EPP), the benchmark is conducted to compare the neutron flux through the EPP. This problem is quite challenging, given the complex geometry and the strong neutron flux attenuation, ranging from 10^14 down to 10^8 n·cm^-2·s^-1. Such a code-to-code comparison provides independent validation of the Monte-Carlo simulations, improving the confidence in neutronic results.

  19. Experimental validation of a Monte Carlo proton therapy nozzle model incorporating magnetically steered protons.

    PubMed

    Peterson, S W; Polf, J; Bues, M; Ciangaru, G; Archambault, L; Beddar, S; Smith, A

    2009-05-21

    The purpose of this study is to validate the accuracy of a Monte Carlo calculation model of a proton magnetic beam scanning delivery nozzle developed using the Geant4 toolkit. The Monte Carlo model was used to produce depth dose and lateral profiles, which were compared to data measured in the clinical scanning treatment nozzle at several energies. Comparisons were also made between measured and simulated off-axis profiles to test the accuracy of the model's magnetic steering. The 80% distal dose fall-off values for the measured and simulated depth dose profiles agreed to within 1 mm for the beam energies evaluated. The full width at half maximum values for the measured and simulated lateral fluence profiles agreed to within 1.3 mm for all energies. The measured and simulated spot positions for the magnetically steered beams agreed to within 0.7 mm. Based on these results, we found that the Geant4 Monte Carlo model of the beam scanning nozzle can accurately predict depth dose profiles, lateral profiles perpendicular to the beam axis, and the magnetic steering of a proton beam during beam scanning proton therapy.

  20. Bayesian statistics and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Koch, K. R.

    2018-03-01

    The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to point estimation, by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables, whereas they are fixed quantities in traditional statistics, which is not founded on Bayes' theorem. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived in which the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves the computation of a considerable number of derivatives, and errors of linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known; the Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
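
    The error-propagation application described above amounts to a few lines of code: sample the input random vector, apply the nonlinear transformation to every sample, and take the sample expectation and covariance of the result, with no linearization. The particular transformation and moments below are arbitrary illustrations.

        import numpy as np

        rng = np.random.default_rng(5)

        # Input random vector: measurements with a given mean and covariance.
        mu = np.array([10.0, 0.5])
        cov = np.array([[0.04, 0.01],
                        [0.01, 0.09]])
        x = rng.multivariate_normal(mu, cov, size=100000)

        # Nonlinear transformation of the measurements (no linearization needed).
        y = np.column_stack([x[:, 0] * np.cos(x[:, 1]),
                             x[:, 0] * np.sin(x[:, 1])])

        print("expectation:", y.mean(axis=0))
        print("covariance:\n", np.cov(y, rowvar=False))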

  1. An Ensemble-Based Smoother with Retrospectively Updated Weights for Highly Nonlinear Systems

    NASA Technical Reports Server (NTRS)

    Chin, T. M.; Turmon, M. J.; Jewell, J. B.; Ghil, M.

    2006-01-01

    Monte Carlo computational methods have been introduced into data assimilation for nonlinear systems in order to alleviate the computational burden of updating and propagating the full probability distribution. By propagating an ensemble of representative states, algorithms like the ensemble Kalman filter (EnKF) and the resampled particle filter (RPF) rely on the existing modeling infrastructure to approximate the distribution based on the evolution of this ensemble. This work presents an ensemble-based smoother that is applicable to the Monte Carlo filtering schemes like EnKF and RPF. At the minor cost of retrospectively updating a set of weights for ensemble members, this smoother has demonstrated superior capabilities in state tracking for two highly nonlinear problems: the double-well potential and trivariate Lorenz systems. The algorithm does not require retrospective adaptation of the ensemble members themselves, and it is thus suited to a streaming operational mode. The accuracy of the proposed backward-update scheme in estimating non-Gaussian distributions is evaluated by comparison to the more accurate estimates provided by a Markov chain Monte Carlo algorithm.

  2. Monte Carlo simulations within avalanche rescue

    NASA Astrophysics Data System (ADS)

    Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg

    2016-04-01

    Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
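
    The probing-depth calculation can be sketched as follows: sample burial depths from an assumed distribution and read off the fraction of victims reachable at each candidate probing depth. The lognormal distribution and its parameters are placeholders; a real analysis would fit them to accident statistics for the search area in question.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical burial-depth distribution (m); parameters are invented.
        depths = rng.lognormal(mean=np.log(1.0), sigma=0.5, size=100000)

        for probe_depth in (1.5, 2.0, 2.5, 3.0):
            p = np.mean(depths <= probe_depth)
            print(f"probe to {probe_depth:.1f} m -> finds {100 * p:.1f}% of victims")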

  3. Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.

    PubMed

    Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai

    2017-11-01

    For big data analysis, high computational cost for Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.

  4. Use of Fluka to Create Dose Calculations

    NASA Technical Reports Server (NTRS)

    Lee, Kerry T.; Barzilla, Janet; Townsend, Lawrence; Brittingham, John

    2012-01-01

    Monte Carlo codes provide an effective means of modeling three-dimensional radiation transport; however, their use is both time- and resource-intensive. The creation of a lookup table or parameterization from Monte Carlo simulation allows users to perform calculations with Monte Carlo results without replicating lengthy calculations. The FLUKA Monte Carlo transport code was used to develop lookup tables and parameterizations for data resulting from the penetration of layers of aluminum, polyethylene, and water with areal densities ranging from 0 to 100 g/cm^2. Heavy charged-ion radiation, including ions from Z=1 to Z=26 with energies from 0.1 to 10 GeV/nucleon, was simulated. Dose, dose equivalent, and fluence as a function of particle identity, energy, and scattering angle were examined at various depths. Calculations were compared against well-known results and against the results of other deterministic and Monte Carlo codes. Results will be presented.
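
    Once such a table exists, using it reduces to interpolation, as in the sketch below; the tabulated values shown are stand-ins for actual FLUKA output, and the single-variable table is a simplification of the multi-dimensional parameterization described above.

        import numpy as np

        # Hypothetical pre-computed table: dose per unit fluence vs. areal
        # density, produced once by FLUKA runs for a given ion and energy.
        areal_density = np.linspace(0.0, 100.0, 21)   # g/cm^2
        dose_table = np.exp(-areal_density / 35.0)    # stand-in for MC results

        def dose_lookup(depth_g_cm2):
            # Interpolate the Monte Carlo table instead of rerunning the
            # transport calculation; valid only inside the tabulated range.
            return np.interp(depth_g_cm2, areal_density, dose_table)

        print(dose_lookup(12.5))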

  5. Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model

    NASA Astrophysics Data System (ADS)

    Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.

    2018-04-01

    While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model, with lattice sizes ranging from 16^3 to 1024^3. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature Kc = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that exceeds all previous Monte Carlo estimates.
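
    The Wolff cluster flipping algorithm named above is compact enough to sketch; the version below is two-dimensional for brevity (the study works in 3D), with the standard add probability 1 - exp(-2β) and periodic boundaries. Lattice size, seed, and sweep count are arbitrary.

        import numpy as np

        rng = np.random.default_rng(8)

        def wolff_update(spins, beta):
            # Grow a cluster of aligned spins from a random seed site, adding
            # each aligned neighbor with probability 1 - exp(-2*beta), then
            # flip the whole cluster at once.
            L = spins.shape[0]
            p_add = 1.0 - np.exp(-2.0 * beta)
            i, j = rng.integers(L, size=2)
            seed_spin = spins[i, j]
            stack, cluster = [(i, j)], {(i, j)}
            while stack:
                x, y = stack.pop()
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    nx, ny = nx % L, ny % L      # periodic boundaries
                    if (nx, ny) not in cluster and spins[nx, ny] == seed_spin \
                       and rng.random() < p_add:
                        cluster.add((nx, ny))
                        stack.append((nx, ny))
            for x, y in cluster:
                spins[x, y] *= -1

        spins = rng.choice([-1, 1], size=(32, 32))
        for _ in range(100):
            wolff_update(spins, beta=0.4406868)  # near the 2D critical point
        print("magnetization per spin:", spins.mean())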

  6. A modified Monte Carlo model for the ionospheric heating rates

    NASA Technical Reports Server (NTRS)

    Mayr, H. G.; Fontheim, E. G.; Robertson, S. C.

    1972-01-01

    A Monte Carlo method is adopted as a basis for the derivation of the photoelectron heat input into the ionospheric plasma. This approach is modified in an attempt to minimize the computation time. The heat input distributions are computed for arbitrarily small source elements that are spaced at distances apart corresponding to the photoelectron dissipation range. By means of a nonlinear interpolation procedure their individual heating rate distributions are utilized to produce synthetic ones that fill the gaps between the Monte Carlo generated distributions. By varying these gaps and the corresponding number of Monte Carlo runs the accuracy of the results is tested to verify the validity of this procedure. It is concluded that this model can reduce the computation time by more than a factor of three, thus improving the feasibility of including Monte Carlo calculations in self-consistent ionosphere models.

  7. QMCPACK: an open source ab initio quantum Monte Carlo package for the electronic structure of atoms, molecules and solids

    NASA Astrophysics Data System (ADS)

    Kim, Jeongnim; Baczewski, Andrew D.; Beaudet, Todd D.; Benali, Anouar; Chandler Bennett, M.; Berrill, Mark A.; Blunt, Nick S.; Josué Landinez Borda, Edgar; Casula, Michele; Ceperley, David M.; Chiesa, Simone; Clark, Bryan K.; Clay, Raymond C., III; Delaney, Kris T.; Dewing, Mark; Esler, Kenneth P.; Hao, Hongxia; Heinonen, Olle; Kent, Paul R. C.; Krogel, Jaron T.; Kylänpää, Ilkka; Li, Ying Wai; Lopez, M. Graham; Luo, Ye; Malone, Fionn D.; Martin, Richard M.; Mathuriya, Amrita; McMinis, Jeremy; Melton, Cody A.; Mitas, Lubos; Morales, Miguel A.; Neuscamman, Eric; Parker, William D.; Pineda Flores, Sergio D.; Romero, Nichols A.; Rubenstein, Brenda M.; Shea, Jacqueline A. R.; Shin, Hyeondeok; Shulenburger, Luke; Tillack, Andreas F.; Townsend, Joshua P.; Tubman, Norm M.; Van Der Goetz, Brett; Vincent, Jordan E.; ChangMo Yang, D.; Yang, Yubo; Zhang, Shuai; Zhao, Luning

    2018-05-01

    QMCPACK is an open source quantum Monte Carlo package for ab initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater–Jastrow type trial wavefunctions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary-field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit and graphical processing unit systems. We detail the program’s capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://qmcpack.org.

  8. QMCPACK: an open source ab initio quantum Monte Carlo package for the electronic structure of atoms, molecules and solids.

    PubMed

    Kim, Jeongnim; Baczewski, Andrew T; Beaudet, Todd D; Benali, Anouar; Bennett, M Chandler; Berrill, Mark A; Blunt, Nick S; Borda, Edgar Josué Landinez; Casula, Michele; Ceperley, David M; Chiesa, Simone; Clark, Bryan K; Clay, Raymond C; Delaney, Kris T; Dewing, Mark; Esler, Kenneth P; Hao, Hongxia; Heinonen, Olle; Kent, Paul R C; Krogel, Jaron T; Kylänpää, Ilkka; Li, Ying Wai; Lopez, M Graham; Luo, Ye; Malone, Fionn D; Martin, Richard M; Mathuriya, Amrita; McMinis, Jeremy; Melton, Cody A; Mitas, Lubos; Morales, Miguel A; Neuscamman, Eric; Parker, William D; Pineda Flores, Sergio D; Romero, Nichols A; Rubenstein, Brenda M; Shea, Jacqueline A R; Shin, Hyeondeok; Shulenburger, Luke; Tillack, Andreas F; Townsend, Joshua P; Tubman, Norm M; Van Der Goetz, Brett; Vincent, Jordan E; Yang, D ChangMo; Yang, Yubo; Zhang, Shuai; Zhao, Luning

    2018-05-16

    QMCPACK is an open source quantum Monte Carlo package for ab initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater-Jastrow type trial wavefunctions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary-field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit and graphical processing unit systems. We detail the program's capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://qmcpack.org.

  9. A method for radiological characterization based on fluence conversion coefficients

    NASA Astrophysics Data System (ADS)

    Froeschl, Robert

    2018-06-01

    Radiological characterization of components in accelerator environments is often required to ensure adequate radiation protection during maintenance, transport and handling, as well as for the selection of the proper disposal pathway. The relevant quantities are typically weighted sums of specific activities with radionuclide-specific weighting coefficients. Traditional Monte Carlo methods either score radionuclide creation events directly, or score the particle fluences in the regions of interest and weight them off-line with radionuclide production cross sections. The presented method bases the radiological characterization on a set of fluence conversion coefficients. For a given irradiation profile and cool-down time, radionuclide production cross sections, material composition and radionuclide-specific weighting coefficients, a set of particle-type- and energy-dependent fluence conversion coefficients is computed. These fluence conversion coefficients can then be used in a Monte Carlo transport code to perform on-line weighting and directly obtain the desired radiological characterization, either through built-in multiplier features such as in the PHITS code or through a dedicated user routine such as for the FLUKA code. The presented method has been validated against the standard event-based methods directly available in Monte Carlo transport codes.
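
    The precomputation step can be pictured as a single matrix contraction: fold the radionuclide production cross sections with the radionuclide-specific weights once, leaving one coefficient per energy bin for the transport code to multiply onto the scored fluence on-line. All matrices below are invented toy numbers, not evaluated nuclear data.

        import numpy as np

        def fluence_conversion_coefficients(cross_sections, weights):
            # Fold production cross sections (nuclides x energy bins) with
            # per-nuclide weighting coefficients; one coefficient per bin.
            return np.asarray(weights, float) @ np.asarray(cross_sections, float)

        # Hypothetical 3 nuclides x 4 energy bins; weights fold in irradiation
        # profile, cool-down time and clearance-limit coefficients.
        xs = np.array([[0.1, 0.4, 0.9, 1.2],
                       [0.0, 0.2, 0.5, 0.8],
                       [0.3, 0.3, 0.2, 0.1]])
        w = np.array([2.0, 5.0, 0.5])
        coeff = fluence_conversion_coefficients(xs, w)  # one value per bin
        fluence = np.array([1e4, 5e3, 2e3, 8e2])        # scored by the MC code
        print("characterization quantity:", coeff @ fluence)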

  10. A parallel computational model for GATE simulations.

    PubMed

    Rannou, F R; Vega-Acevedo, N; El Bitar, Z

    2013-12-01

    GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently to Positron Emission Tomography (PET) experiments, because it requires centralized coincidence processing and incurs large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing while maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced.

  11. RECORDS: improved Reporting of montE CarlO RaDiation transport Studies: Report of the AAPM Research Committee Task Group 268.

    PubMed

    Sechopoulos, Ioannis; Rogers, D W O; Bazalova-Carter, Magdalena; Bolch, Wesley E; Heath, Emily C; McNitt-Gray, Michael F; Sempau, Josep; Williamson, Jeffrey F

    2018-01-01

    Studies involving Monte Carlo simulations are common in both diagnostic and therapy medical physics research, as well as other fields of basic and applied science. As with all experimental studies, the conditions and parameters used for Monte Carlo simulations impact their scope, validity, limitations, and generalizability. Unfortunately, many published peer-reviewed articles involving Monte Carlo simulations do not provide the level of detail needed for the reader to be able to properly assess the quality of the simulations. The American Association of Physicists in Medicine Task Group #268 developed guidelines to improve reporting of Monte Carlo studies in medical physics research. By following these guidelines, manuscripts submitted for peer-review will include a level of relevant detail that will increase the transparency, the ability to reproduce results, and the overall scientific value of these studies. The guidelines include a checklist of the items that should be included in the Methods, Results, and Discussion sections of manuscripts submitted for peer-review. These guidelines do not attempt to replace the journal reviewer, but rather to be a tool during the writing and review process. Given the varied nature of Monte Carlo studies, it is up to the authors and the reviewers to use this checklist appropriately, being conscious of how the different items apply to each particular scenario. It is envisioned that this list will be useful both for authors and for reviewers, to help ensure the adequate description of Monte Carlo studies in the medical physics literature.

  12. Analysis of Naval Ammunition Stock Positioning

    DTIC Science & Technology

    2015-12-01

    model takes once the Monte-Carlo simulation determines the assigned probabilities for site-to-site locations. Column two shows how the simulation... stockpiles and positioning them at coastal Navy facilities. A Monte-Carlo simulation model was developed to simulate expected cost and delivery... Subject terms: supply chain management, Monte-Carlo simulation, risk, delivery performance, stock positioning.

  13. Statistical Analysis of a Class: Monte Carlo and Multiple Imputation Spreadsheet Methods for Estimation and Extrapolation

    ERIC Educational Resources Information Center

    Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael

    2017-01-01

    The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…

  14. [Accuracy Check of Monte Carlo Simulation in Particle Therapy Using Gel Dosimeters].

    PubMed

    Furuta, Takuya

    2017-01-01

    Gel dosimeters are a three-dimensional imaging tool for dose distributions induced by radiation, and they can be used to check the accuracy of Monte Carlo simulations in particle therapy. One such application is reviewed in this article. An inhomogeneous biological sample with a gel dosimeter placed behind it was irradiated by a carbon beam. The dose distribution recorded in the gel dosimeter reflected the inhomogeneity of the biological sample. A Monte Carlo simulation was conducted by reconstructing the biological sample from its CT image, and the accuracy of the particle transport in the simulation was checked by comparing the simulated and experimental dose distributions in the gel dosimeter.

  15. CONTINUOUS-ENERGY MONTE CARLO METHODS FOR CALCULATING GENERALIZED RESPONSE SENSITIVITIES USING TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2014-01-01

    This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.

  16. Deterministic theory of Monte Carlo variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ueki, T.; Larsen, E.W.

    1996-12-31

    The theoretical estimation of variance in Monte Carlo transport simulations, particularly those using variance reduction techniques, is a substantially unsolved problem. In this paper, the authors describe a theory that predicts the variance in a variance reduction method proposed by Dwivedi. Dwivedi's method combines the exponential transform with angular biasing. The key element of this theory is a new modified transport problem, containing the Monte Carlo weight w as an extra independent variable, which simulates Dwivedi's Monte Carlo scheme. The (deterministic) solution of this modified transport problem yields an expression for the variance. The authors give computational results that validate this theory.

  17. Instantons in Quantum Annealing: Thermally Assisted Tunneling Vs Quantum Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Jiang, Zhang; Smelyanskiy, Vadim N.; Boixo, Sergio; Isakov, Sergei V.; Neven, Hartmut; Mazzola, Guglielmo; Troyer, Matthias

    2015-01-01

    A recent numerical result (arXiv:1512.02206) from Google suggested that the D-Wave quantum annealer may have an asymptotic speed-up over simulated annealing; however, the asymptotic advantage disappears when it is compared to quantum Monte Carlo (a classical algorithm despite its name). We show analytically that the asymptotic scaling of quantum tunneling is exactly the same as the escape rate in quantum Monte Carlo for a class of problems. Thus, the Google result might be explained within our framework. We also found that the transition state in quantum Monte Carlo corresponds to the instanton solution in quantum tunneling problems, which is observed in numerical simulations.

  18. Development of a Monte Carlo Simulation for APD-Based PET Detectors Using a Continuous Scintillating Crystal

    NASA Astrophysics Data System (ADS)

    Clowes, P.; Mccallum, S.; Welch, A.

    2006-10-01

    We are currently developing a multilayer avalanche photodiode (APD)-based detector for use in positron emission tomography (PET), which utilizes thin continuous crystals. In this paper, we developed a Monte Carlo-based simulation to aid in the design of such detectors. We measured the performance of a detector comprising a single thin continuous crystal (3.1 mm × 9.5 mm × 9.5 mm) of lutetium yttrium ortho-silicate (LYSO) and an APD array of 4 × 4 elements, each element 1.6 mm² on a 2.3 mm pitch. We showed that a spatial resolution of better than 2.12 mm is achievable throughout the crystal provided that we adopt a statistics-based positioning (SBP) algorithm. We then used Monte Carlo simulation to model the behavior of the detector. The accuracy of the Monte Carlo simulation was verified by comparing measured and simulated parent datasets (PDS) for the SBP algorithm. These datasets consisted of data for point sources at 49 positions uniformly distributed over the detector area. We also calculated the noise in the detector circuit and verified this value by measurement. The noise value was included in the simulation. We show that the performance of the simulation closely matches the measured performance. The simulations were extended to investigate the effect of different noise levels on positioning accuracy. This paper showed that if modest improvements could be made in the circuit noise, then positioning accuracy would be greatly improved. In summary, we have developed a model that can be used to simulate the performance of a variety of APD-based continuous crystal PET detectors.

  19. High-efficiency wavefunction updates for large scale Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed

    Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order-of-magnitude speedups can be obtained on both multi-core CPUs and GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
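
    A minimal numpy sketch of the standard rank-1 Sherman-Morrison step that this rank-k delayed scheme generalizes may help make the cost structure concrete: a single-electron move replaces one row of the Slater matrix, the acceptance ratio is read off from the current inverse via the matrix determinant lemma, and the inverse is patched in O(N^2) rather than recomputed in O(N^3). This is the textbook scheme, not the authors' delayed-update implementation.

        import numpy as np

        def move_ratio(Ainv, new_row, k):
            # matrix determinant lemma: replacing row k of the Slater
            # matrix A scales det(A) by R = new_row . Ainv[:, k]
            return new_row @ Ainv[:, k]

        def accept_move(Ainv, new_row, k, R):
            # Sherman-Morrison rank-1 update of the inverse, O(N^2)
            w = (new_row @ Ainv) / R
            w[k] -= 1.0 / R
            return Ainv - np.outer(Ainv[:, k], w)

        rng = np.random.default_rng(0)
        A = rng.normal(size=(4, 4))
        Ainv = np.linalg.inv(A)
        row = rng.normal(size=4)             # proposed new row for electron 2
        R = move_ratio(Ainv, row, 2)         # determinant ratio for acceptance
        Ainv = accept_move(Ainv, row, 2, R)  # applied only if the move is accepted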

  20. SU-E-T-586: Field Size Dependence of Output Factor for Uniform Scanning Proton Beams: A Comparison of TPS Calculation, Measurement and Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Y; Singh, H; Islam, M

    2014-06-01

    Purpose: Output dependence on field size for uniform scanning beams, and the accuracy of treatment planning system (TPS) calculations, are not well studied. The purpose of this work is to investigate the dependence of output on field size for uniform scanning beams and to compare it among TPS calculation, measurement, and Monte Carlo simulation. Methods: Field size dependence was studied using various field sizes between 2.5 cm and 10 cm in diameter. The field size factor was studied for a number of proton range and modulation combinations, based on the output at the center of the spread-out Bragg peak normalized to a 10 cm diameter field. Three methods were used and compared in this study: 1) TPS calculation, 2) ionization chamber measurement, and 3) Monte Carlo simulation. The XiO TPS (Elekta, St. Louis) was used to calculate the output factor using a pencil beam algorithm; a pinpoint ionization chamber was used for measurements; and the Fluka code was used for Monte Carlo simulations. Results: The field size factor varied with proton beam parameters, such as range, modulation, and calibration depth, and could decrease by over 10% from a 10 cm to a 3 cm diameter field for a large-range proton beam. The XiO TPS predicted the field size factor relatively well at large field sizes, but could differ from measurements by 5% or more for small-field, large-range beams. Monte Carlo simulations predicted the field size factor within 1.5% of measurements. Conclusion: The output factor can vary substantially with field size and needs to be accounted for to achieve accurate proton beam delivery. This is especially important for small-field beams such as in stereotactic proton therapy, where the field size dependence is large and TPS calculation is inaccurate. Measurements or Monte Carlo simulations are recommended for output determination in such cases.

  1. SU-E-T-171: Evaluation of the Analytical Anisotropic Algorithm in a Small Finger Joint Phantom Using Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, J; Owrangi, A; Jiang, R

    2014-06-01

    Purpose: This study investigated the performance of the anisotropic analytical algorithm (AAA) in radiotherapy dose calculation for a small finger joint, with Monte Carlo simulation (EGSnrc code) used as the dosimetric reference. Methods: A heterogeneous finger joint phantom containing a vertical water layer (bone joint or cartilage) sandwiched between two bones, with dimensions 2 × 2 × 2 cm³, was irradiated by 6 MV photon beams (field size = 4 × 4 cm²). The central beam axis was along the length of the bone joint and the isocenter was set to the center of the joint. The joint width and beam angle were varied from 0.5–2 mm and 0°–15°, respectively. Depth doses were calculated using the AAA and DOSXYZnrc. For dosimetric comparison and normalization, dose calculations were repeated in a water phantom using the same beam geometry. Results: Our AAA and Monte Carlo results showed that the AAA underestimated the joint doses by 10%–20% and could not predict the variation of joint dose with changes of joint width and beam angle. The bone dose enhancement calculated by the AAA was lower than that from Monte Carlo, and the depth of maximum dose for the phantom was smaller than that for the water phantom. From the Monte Carlo results, the joint dose decreased as the joint width increased, reflecting that the smaller the joint width, the more bone scatter contributes to the depth dose. Moreover, the joint dose was found to decrease slightly with an increase of beam angle. Conclusion: The AAA could not handle variations of joint dose with changes of joint width and beam angle well, based on our finger joint phantom. Monte Carlo results showed that the joint dose decreased with increases of joint width and beam angle. This dosimetric comparison should be useful to radiation staff in radiotherapy involving small bone joints.

  2. Incorporating structural analysis in a quantum dot Monte-Carlo model

    NASA Astrophysics Data System (ADS)

    Butler, I. M. E.; Li, Wei; Sobhani, S. A.; Babazadeh, N.; Ross, I. M.; Nishi, K.; Takemasa, K.; Sugawara, M.; Peyvast, Negin; Childs, D. T. D.; Hogg, R. A.

    2018-02-01

    We simulate the shape of the density of states (DoS) of a quantum dot (QD) ensemble based upon size information provided by high angle annular dark field scanning transmission electron microscopy (HAADF STEM). We discuss how the capability to determine the QD DoS from micro-structural data allows a Monte Carlo model to be developed that accurately describes the QD gain and spontaneous emission spectra. The QD DoS shape is then studied, and recommendations are made by exploring the effect of removing or enhancing this size inhomogeneity on various QD-based devices.

  3. Monte Carlo simulation of a compact microbeam radiotherapy system based on carbon nanotube field emission technology.

    PubMed

    Schreiber, Eric C; Chang, Sha X

    2012-08-01

    Microbeam radiation therapy (MRT) is an experimental radiotherapy technique that has shown potent antitumor effects with minimal damage to normal tissue in animal studies. This unique form of radiation is currently only produced in a few large synchrotron accelerator research facilities in the world. To promote widespread translational research on this promising treatment technology, we have proposed and are in the initial development stages of a compact MRT system that is based on carbon nanotube field emission x-ray technology. We report on a Monte Carlo based feasibility study of the compact MRT system design. Monte Carlo calculations were performed using EGSnrc-based codes. The proposed small animal research MRT device design includes carbon nanotube cathodes shaped to match the corresponding MRT collimator apertures, a common reflection anode with filter, and a MRT collimator. Each collimator aperture is sized to deliver a beam width ranging from 30 to 200 μm at 18.6 cm source-to-axis distance. Design parameters studied with Monte Carlo include electron energy, cathode design, anode angle, filtration, and collimator design. Calculations were performed for single and multibeam configurations. Increasing the energy from 100 kVp to 160 kVp increased the photon fluence through the collimator by a factor of 1.7. Both energies produced a largely uniform fluence along the long dimension of the microbeam, with 5% decreases in intensity near the edges. The isocentric dose rate for 160 kVp was calculated to be 700 Gy/min/A in the center of a 3 cm diameter target. Scatter contributions resulting from collimator size were found to produce only small (<7%) changes in the dose rate for field widths greater than 50 μm. Dose vs. depth was weakly dependent on filtration material. The peak-to-valley ratio varied from 10 to 100 as the separation between adjacent microbeams varied from 150 to 1000 μm. Monte Carlo simulations demonstrate that the proposed compact MRT system design is capable of delivering a sufficient dose rate and peak-to-valley ratio for small animal MRT studies.

  4. Monte Carlo Simulations of Radiative and Neutrino Transport under Astrophysical Conditions

    NASA Astrophysics Data System (ADS)

    Krivosheyev, Yu. M.; Bisnovatyi-Kogan, G. S.

    2018-05-01

    Monte Carlo simulations are utilized to model radiative and neutrino transfer in astrophysics. An algorithm that can be used to study radiative transport in astrophysical plasma, based on simulating photon trajectories in a medium, is described. The formation of the hard X-ray spectrum of the Galactic microquasar SS 433 is considered in detail as an example. Specific requirements for applying such simulations to neutrino transport in a dense medium, and the algorithmic differences compared to photon transport, are discussed.
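
    As a minimal illustration of the trajectory-based approach, the sketch below advances photons through a homogeneous 1D slab: free path lengths are drawn from the exponential law set by the total interaction coefficient, and each collision is resolved as scattering or absorption. The coefficient values and the isotropic 1D re-emission are illustrative assumptions, not parameters taken from the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def track_photon(mu_t, albedo, slab_thickness):
            # mu_t: total interaction coefficient [1/cm];
            # albedo: scattering probability per collision
            x, direction = 0.0, 1.0
            while True:
                x += direction * (-np.log(rng.random()) / mu_t)  # free path
                if x >= slab_thickness:
                    return "transmitted"
                if x <= 0.0:
                    return "reflected"
                if rng.random() > albedo:
                    return "absorbed"                  # collision kills the photon
                direction = rng.uniform(-1.0, 1.0)     # isotropic re-emission (1D)

        results = [track_photon(1.0, 0.9, 5.0) for _ in range(10_000)]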

  5. Quantum annealing of the traveling-salesman problem.

    PubMed

    Martonák, Roman; Santoro, Giuseppe E; Tosatti, Erio

    2004-11-01

    We propose a path-integral Monte Carlo quantum annealing scheme for the symmetric traveling-salesman problem, based on a highly constrained Ising-like representation, and we compare its performance against standard thermal simulated annealing. The Monte Carlo moves implemented are standard, and consist of restructuring a tour by exchanging two links (two-opt moves). The quantum annealing scheme, even with a drastically simple form of kinetic energy, appears definitely superior to the classical one when tested on a 1002-city instance of the standard TSPLIB.
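
    For reference, the classical baseline can be sketched compactly: thermal simulated annealing with the same two-opt moves (the path-integral quantum annealing variant, which evolves several coupled replicas, is not reproduced here). The cooling schedule and step counts are illustrative choices.

        import numpy as np

        rng = np.random.default_rng(0)

        def two_opt_anneal(dist, n_steps=200_000, T0=10.0, T_end=1e-3):
            # dist: symmetric (n, n) distance matrix
            n = dist.shape[0]
            tour = list(rng.permutation(n))
            for step in range(n_steps):
                T = T0 * (T_end / T0) ** (step / n_steps)  # geometric cooling
                i, j = sorted(rng.choice(n, size=2, replace=False))
                if i == 0 and j == n - 1:
                    continue                               # full reversal: no-op
                a, b = tour[i - 1], tour[i]                # edge broken on the left
                c, d = tour[j], tour[(j + 1) % n]          # edge broken on the right
                delta = dist[a, c] + dist[b, d] - dist[a, b] - dist[c, d]
                if delta < 0 or rng.random() < np.exp(-delta / T):
                    tour[i:j + 1] = tour[i:j + 1][::-1]    # apply the two-opt move
            return tour

        pts = rng.random((30, 2))                          # toy 30-city instance
        dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        tour = two_opt_anneal(dist)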

  6. Diagnosing Undersampling Biases in Monte Carlo Eigenvalue and Flux Tally Estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M.; Rearden, Bradley T.; Marshall, William J.

    2017-02-08

    Here, this study focuses on understanding the phenomenon in Monte Carlo simulations known as undersampling, in which Monte Carlo tally estimates may not encounter a sufficient number of particles during each generation to obtain unbiased tally estimates. Steady-state Monte Carlo simulations were performed using the KENO Monte Carlo tools within the SCALE code system for models of several burnup credit applications with varying degrees of spatial and isotopic complexity, and the incidence and impact of undersampling on eigenvalue and flux estimates were examined. Using an inadequate number of particle histories in each generation was found to produce a maximum bias of ~100 pcm in eigenvalue estimates and biases that exceeded 10% in fuel pin flux tally estimates. Having quantified the potential magnitude of undersampling biases in eigenvalue and flux tally estimates in these systems, this study then investigated whether Markov Chain Monte Carlo convergence metrics could be integrated into Monte Carlo simulations to predict the onset and magnitude of undersampling biases. Five potential metrics for identifying undersampling biases were implemented in the SCALE code system and evaluated for their ability to predict undersampling biases by comparing the test metric scores with the observed undersampling biases. Finally, of the five convergence metrics that were investigated, three (the Heidelberger-Welch relative half-width, the Gelman-Rubin $\hat{R}_c$ diagnostic, and tally entropy) showed the potential to accurately predict the behavior of undersampling biases in the responses examined.

  7. Group sequential monitoring based on the weighted log-rank test statistic with the Fleming-Harrington class of weights in cancer vaccine studies.

    PubMed

    Hasegawa, Takahiro

    2016-09-01

    In recent years, immunological science has evolved, and cancer vaccines are now approved and available for treating existing cancers. Because cancer vaccines require time to elicit an immune response, a delayed treatment effect is expected and is actually observed in drug approval studies. Accordingly, we propose the evaluation of survival endpoints by weighted log-rank tests with the Fleming-Harrington class of weights. We consider group sequential monitoring, which allows early efficacy stopping, and determine a semiparametric information fraction for the Fleming-Harrington family of weights, which is necessary for the error spending function. Moreover, we give a flexible survival model in cancer vaccine studies that considers not only the delayed treatment effect but also the long-term survivors. In a Monte Carlo simulation study, we illustrate that when the primary analysis is a weighted log-rank test emphasizing the late differences, the proposed information fraction can be a useful alternative to the surrogate information fraction, which is proportional to the number of events. Copyright © 2016 John Wiley & Sons, Ltd.
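
    A minimal numpy sketch of the weighted log-rank statistic with Fleming-Harrington weights w(t) = S(t-)^rho (1 - S(t-))^gamma, with S the pooled left-continuous Kaplan-Meier estimate, is shown below; rho = 0 with gamma > 0 emphasizes late differences, as in the delayed-effect setting above. Only the test statistic is sketched, not the paper's group-sequential monitoring or information-fraction machinery, and the 0/1 group coding is an assumption.

        import numpy as np

        def fh_logrank(time, event, group, rho=0.0, gamma=1.0):
            # time, event, group: 1-D arrays (event: 1 = event, 0 = censored;
            # group: 0/1). Returns an approximately N(0, 1) statistic under H0.
            S_prev, num, var = 1.0, 0.0, 0.0
            for t in np.unique(time[event == 1]):
                at_risk = time >= t
                n = at_risk.sum()
                n1 = (at_risk & (group == 1)).sum()
                d = ((time == t) & (event == 1)).sum()
                d1 = ((time == t) & (event == 1) & (group == 1)).sum()
                w = S_prev**rho * (1.0 - S_prev)**gamma    # FH weight at t-
                num += w * (d1 - d * n1 / n)
                if n > 1:
                    var += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
                S_prev *= 1.0 - d / n                      # pooled KM update
            return num / np.sqrt(var)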

  8. SU-E-T-561: Monte Carlo-Based Organ Dose Reconstruction Using Pre-Contoured Human Model for Hodgkins Lymphoma Patients Treated by Cobalt-60 External Beam Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, J; Pelletier, C; Lee, C

    Purpose: Organ doses for Hodgkin's lymphoma patients treated with cobalt-60 radiation were estimated using an anthropomorphic model and Monte Carlo modeling. Methods: A cobalt-60 treatment unit modeled in the BEAMnrc Monte Carlo code was used to produce phase space data. The Monte Carlo simulation was verified with percent depth dose measurements in water at various field sizes. Radiation transport through the lung blocks was modeled by adjusting the weights of the phase space data. We imported a pre-contoured adult female hybrid model and generated a treatment plan. The adjusted phase space data and the human model were imported into the XVMC Monte Carlo code for dose calculation. The organ mean doses were estimated and dose volume histograms were plotted. Results: The percent depth dose agreement between measurement and calculation in the water phantom was within 2% for all field sizes. The mean organ doses of the heart, left breast, right breast, and spleen for the selected case were 44.3, 24.1, 14.6 and 3.4 Gy, respectively, with a midline prescription dose of 40.0 Gy. Conclusion: Organ doses were estimated for a patient group whose three-dimensional images are not available. This development may open the door to more accurate dose reconstruction and estimates of uncertainties in secondary cancer risk for Hodgkin's lymphoma patients. This work was partially supported by the intramural research program of the National Institutes of Health, National Cancer Institute, Division of Cancer Epidemiology and Genetics.

  9. Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data

    NASA Astrophysics Data System (ADS)

    Glüsenkamp, Thorsten

    2018-06-01

    Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. Typical applications are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries an intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic, and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function F_D, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average R_n with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.

  10. Prognostics of lithium-ion batteries based on Dempster-Shafer theory and the Bayesian Monte Carlo method

    NASA Astrophysics Data System (ADS)

    He, Wei; Williard, Nicholas; Osterman, Michael; Pecht, Michael

    A new method for state of health (SOH) and remaining useful life (RUL) estimation for lithium-ion batteries using Dempster-Shafer theory (DST) and the Bayesian Monte Carlo (BMC) method is proposed. In this work, an empirical model based on the physical degradation behavior of lithium-ion batteries is developed. Model parameters are initialized by combining sets of training data based on DST. BMC is then used to update the model parameters and predict the RUL based on data that become available through battery capacity monitoring. As more data become available, the accuracy of the model in predicting RUL improves. Two case studies demonstrating this approach are presented.
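
    The flavor of the Bayesian Monte Carlo step can be sketched as follows: parameter particles of an empirical capacity-fade model are reweighted by each new capacity measurement, and the weighted model projections give the RUL distribution. The double-exponential model form, the Gaussian noise level, and all numbers below are illustrative assumptions; the Dempster-Shafer initialization from training data is not reproduced.

        import numpy as np

        rng = np.random.default_rng(42)

        def capacity(params, k):
            # assumed empirical fade model: Q(k) = a e^{bk} + c e^{dk}
            a, b, c, d = params.T
            return a * np.exp(b * k) + c * np.exp(d * k)

        def bmc_update(particles, weights, k, q_obs, sigma=0.01):
            # reweight particles by the Gaussian likelihood of measurement q_obs
            pred = capacity(particles, k)
            weights = weights * np.exp(-0.5 * ((q_obs - pred) / sigma) ** 2)
            return weights / weights.sum()

        def rul_samples(particles, k_now, q_fail, k_max=2000):
            # per-particle cycle of first crossing below the failure threshold
            a, b, c, d = particles.T
            ks = np.arange(k_now, k_max)
            caps = (a[:, None] * np.exp(b[:, None] * ks)
                    + c[:, None] * np.exp(d[:, None] * ks))
            return ks[np.argmax(caps < q_fail, axis=1)] - k_now  # 0 if no crossing

        # hypothetical particle cloud of (a, b, c, d) draws with uniform weights
        particles = rng.normal([1.0, -2e-4, -0.01, 2e-3], 1e-4, size=(1000, 4))
        weights = np.full(1000, 1e-3)
        weights = bmc_update(particles, weights, k=100, q_obs=0.97)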

  11. Teaching Ionic Solvation Structure with a Monte Carlo Liquid Simulation Program

    ERIC Educational Resources Information Center

    Serrano, Agostinho; Santos, Flavia M. T.; Greca, Ileana M.

    2004-01-01

    The use of molecular dynamics and Monte Carlo methods has provided efficient means to simulate the behavior of molecular liquids and solutions. A Monte Carlo simulation program is used to compute the structure of liquid water and of water as a solvent to Na⁺, Cl⁻, and Ar on a personal computer to show that it is easily feasible to…

  12. Considerations of MCNP Monte Carlo code to be used as a radiotherapy treatment planning tool.

    PubMed

    Juste, B; Miro, R; Gallardo, S; Verdu, G; Santos, A

    2005-01-01

    The present work simulated the photon and electron transport in a Theratron 780® (MDS Nordion) 60Co radiotherapy unit using the Monte Carlo transport code MCNP (Monte Carlo N-Particle). This project mainly explains the different methodologies carried out to speed up the calculations in order to apply this code efficiently to radiotherapy treatment planning.

  13. Hybrid Monte Carlo/deterministic methods for radiation shielding problems

    NASA Astrophysics Data System (ADS)

    Becker, Troy L.

    For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters, the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations: weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations. Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods can be used to achieve user-specified Monte Carlo distributions. Overall, the Transform approach performed more efficiently than the weight window methods, but it performed much more efficiently for source-detector problems than for global problems.
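
    As a concrete illustration of the weight-window element these methods share, the sketch below applies a window to a particle population: overweight particles are split and underweight particles play Russian roulette, which changes the population but not the expected value of any tally. The window bounds and split cap are illustrative; the automated, adjoint-driven choice of the window parameters described above is not reproduced.

        import numpy as np

        rng = np.random.default_rng(7)

        def apply_weight_window(particles, w_low, w_high, n_split_max=5):
            # particles: list of (weight, state) pairs; the state is untouched
            w_survive = 0.5 * (w_low + w_high)
            out = []
            for w, state in particles:
                if w > w_high:                            # split
                    n = min(int(np.ceil(w / w_high)), n_split_max)
                    out.extend([(w / n, state)] * n)
                elif w < w_low:                           # Russian roulette
                    if rng.random() < w / w_survive:      # survive with E[w] preserved
                        out.append((w_survive, state))
                else:
                    out.append((w, state))
            return out

        pop = apply_weight_window([(5.0, "s1"), (0.01, "s2")], 0.1, 1.0)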

  14. An Unsplit Monte-Carlo solver for the resolution of the linear Boltzmann equation coupled to (stiff) Bateman equations

    NASA Astrophysics Data System (ADS)

    Bernede, Adrien; Poëtte, Gaël

    2018-02-01

    In this paper, we are interested in the resolution of the time-dependent problem of particle transport in a medium whose composition evolves with time due to interactions. As a constraint, we want to use a Monte-Carlo (MC) scheme for the transport phase. A common resolution strategy consists of a splitting between the MC/transport phase and the time discretization scheme/medium evolution phase. After going over and illustrating the main drawbacks of split solvers in a simplified configuration (monokinetic, scalar Bateman problem), we build a new Unsplit MC (UMC) solver that improves the accuracy of the solutions, avoids numerical instabilities, and is less sensitive to time discretization. The new solver is essentially based on a Monte Carlo scheme with time-dependent cross sections, implying the on-the-fly resolution of a reduced model for each MC particle describing the time evolution of the matter along its flight path.

  15. Path-integral Monte Carlo method for Rényi entanglement entropies.

    PubMed

    Herdman, C M; Inglis, Stephen; Roy, P-N; Melko, R G; Del Maestro, A

    2014-07-01

    We introduce a quantum Monte Carlo algorithm to measure the Rényi entanglement entropies in systems of interacting bosons in the continuum. This approach is based on a path-integral ground state method that can be applied to interacting itinerant bosons in any spatial dimension with direct relevance to experimental systems of quantum fluids. We demonstrate how it may be used to compute spatial mode entanglement, particle partitioned entanglement, and the entanglement of particles, providing insights into quantum correlations generated by fluctuations, indistinguishability, and interactions. We present proof-of-principle calculations and benchmark against an exactly soluble model of interacting bosons in one spatial dimension. As this algorithm retains the fundamental polynomial scaling of quantum Monte Carlo when applied to sign-problem-free models, future applications should allow for the study of entanglement entropy in large-scale many-body systems of interacting bosons.

  16. Development of a new multi-modal Monte-Carlo radiotherapy planning system.

    PubMed

    Kumada, H; Nakamura, T; Komeda, M; Matsumura, A

    2009-07-01

    A new multi-modal Monte-Carlo radiotherapy planning system (development code: JCDS-FX) is under development at the Japan Atomic Energy Agency. This system builds on fundamental technologies of JCDS, which was applied to actual boron neutron capture therapy (BNCT) trials at JRR-4. One of the features of JCDS-FX is that PHITS, a multi-purpose particle Monte-Carlo transport code, is used for the particle transport calculation. The application of PHITS thus enables evaluation of the total dose given to a patient by a combined-modality therapy. Moreover, JCDS-FX with PHITS can be used for studies of accelerator-based BNCT. To verify the calculation accuracy of JCDS-FX, dose evaluations for neutron irradiation of a cylindrical water phantom and for an actual clinical trial were performed, and the results were compared with calculations by JCDS with MCNP. The verification results demonstrated that JCDS-FX is applicable to BNCT treatment planning in practical use.

  17. Comparison of deterministic and stochastic methods for time-dependent Wigner simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Sihong, E-mail: sihong@math.pku.edu.cn; Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg

    2015-11-01

    Recently a Monte Carlo method based on signed particles for time-dependent simulations of the Wigner equation has been proposed. While it has been thoroughly validated against physical benchmarks, no technical study of its numerical accuracy has been performed. To this end, this paper presents a first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell average spectral element method, a highly accurate deterministic method utilized to provide reference solutions. Several different numerical tests involving the time-dependent evolution of a quantum wave-packet are performed and discussed in detail. In particular, this allows us to identify a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve a satisfactory accuracy.

  18. Generic expansion of the Jastrow correlation factor in polynomials satisfying symmetry and cusp conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lüchow, Arne, E-mail: luechow@rwth-aachen.de; Jülich Aachen Research Alliance; Sturm, Alexander

    2015-02-28

    Jastrow correlation factors play an important role in quantum Monte Carlo calculations. Together with an orbital-based antisymmetric function, they allow the construction of highly accurate correlation wave functions. In this paper, a generic expansion of the Jastrow correlation function in terms of polynomials that satisfy both the electron exchange symmetry constraint and the cusp conditions is presented. In particular, an expansion of the three-body electron-electron-nucleus contribution in terms of cuspless homogeneous symmetric polynomials is proposed. The polynomials can be expressed in terms of a fairly arbitrary scaling function, allowing a generic implementation of the Jastrow factor. It is demonstrated with a few examples that the new Jastrow factor achieves 85%–90% of the total correlation energy in a variational quantum Monte Carlo calculation and more than 90% of the diffusion Monte Carlo correlation energy.

  19. Investigation of electronic and magnetic properties of FeS: First principle and Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Bouachraoui, Rachid; El Hachimi, Abdel Ghafour; Ziat, Younes; Bahmad, Lahoucine; Tahiri, Najim

    2018-06-01

    The electronic and magnetic properties of hexagonal iron(II) sulfide (hexagonal FeS) have been investigated by combining density functional theory (DFT) and Monte Carlo simulations (MCS). This compound consists of a hexagonal magnetic lattice occupied by Fe2+ ions with spin state S = 2. Based on the ab initio method, we calculated the exchange couplings JFe-Fe between the magnetic Fe atoms in different directions. Phase transitions, magnetic stability, and magnetizations were also investigated in the framework of Monte Carlo simulations. Within this method, a second-order phase transition is observed at the Néel temperature TN = 450 K, a finding in good agreement with data reported in the literature. The effect of varying the different applied parameters shows how they affect the critical temperature of this system. Moreover, we studied the density of states and found that hexagonal FeS is a promising material for spintronic applications.

  20. Methodology of full-core Monte Carlo calculations with leakage parameter evaluations for benchmark critical experiment analysis

    NASA Astrophysics Data System (ADS)

    Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.

    1997-02-01

    The method of buckling evaluation implemented in the Monte Carlo code MCS is described. This method was applied to a calculational analysis of the well-known light-water experiments TRX-1 and TRX-2. The analysis shows that Monte Carlo calculations obtained in different ways do not coincide: MCS calculations with the given experimental bucklings; MCS calculations with bucklings evaluated from full-core MCS direct simulations; full-core MCNP and MCS direct simulations; and MCNP and MCS calculations in which the results of cell calculations are corrected by coefficients taking into account the leakage from the core. The buckling values evaluated by full-core MCS calculations also differed from the experimental ones, especially in the case of TRX-1, where the difference corresponded to a 0.5 percent increase in the Keff value.

  1. Electrosorption of a modified electrode in the vicinity of phase transition: A Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Gavilán Arriazu, E. M.; Pinto, O. A.

    2018-03-01

    We present a Monte Carlo study of the electrosorption of an electroactive species on a modified electrode. The surface of the electrode is modified by the irreversible adsorption of a non-electroactive species that blocks a percentage of the adsorption sites. This generates an electrode with variable-connectivity sites. A second, electroactive species is adsorbed at surface vacancies and can interact repulsively with itself. In particular, we are interested in the effect of the non-electroactive species near the critical regime, where the c(2 × 2) structure is formed. Lattice-gas models and Monte Carlo simulations in the Grand Canonical Ensemble are used. The analysis is based on the study of voltammograms, order parameters, isotherms, and the configurational entropy per site at several values of the energies and coverage degrees of the non-electroactive species.

  2. High-Precision Monte Carlo Simulation of the Ising Models on the Penrose Lattice and the Dual Penrose Lattice

    NASA Astrophysics Data System (ADS)

    Komura, Yukihiro; Okabe, Yutaka

    2016-04-01

    We study the Ising models on the Penrose lattice and the dual Penrose lattice by means of high-precision Monte Carlo simulation. Simulating systems up to a total system size of N = 20633239, we estimate the critical temperatures on those lattices with high accuracy. For high-speed calculation, we use a generalized single-GPU implementation of the Swendsen-Wang multi-cluster Monte Carlo algorithm. As a result, we estimate the critical temperature on the Penrose lattice as Tc/J = 2.39781 ± 0.00005 and that of the dual Penrose lattice as Tc*/J = 2.14987 ± 0.00005. Moreover, we definitively confirm the duality relation between the critical temperatures of the dual pair of quasilattices with a high degree of accuracy, sinh (2J/Tc)sinh (2J/Tc*) = 1.00000 ± 0.00004.
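
    The quoted duality relation is easy to check numerically from the two critical-temperature estimates (in units where J = 1):

        import numpy as np

        Tc, Tc_dual = 2.39781, 2.14987   # Penrose and dual-Penrose estimates
        product = np.sinh(2.0 / Tc) * np.sinh(2.0 / Tc_dual)
        print(f"{product:.5f}")          # ~1.00000, as the duality requires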

  3. Multilevel Monte Carlo for two phase flow and Buckley–Leverett transport in random heterogeneous porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Müller, Florian, E-mail: florian.mueller@sam.math.ethz.ch; Jenny, Patrick, E-mail: jenny@ifd.mavt.ethz.ch; Meyer, Daniel W., E-mail: meyerda@ethz.ch

    2013-10-01

    Monte Carlo (MC) is a well known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
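
    A generic sketch of the MLMC estimator follows: the telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}] is estimated level by level, with each coupled fine/coarse difference computed from the same random input so that its variance shrinks as the levels refine. The toy geometric-Brownian-motion level function and the sample counts are illustrative stand-ins for the streamline-based flow solver.

        import numpy as np

        rng = np.random.default_rng(3)

        def mlmc_estimate(sample_level, n_samples):
            # telescoping multilevel Monte Carlo estimator
            return sum(np.mean([sample_level(l, coupled=(l > 0))
                                for _ in range(n)])
                       for l, n in enumerate(n_samples))

        def sample_level(l, coupled):
            # toy level: Euler paths of dS = S dW with 2**(l + 2) steps;
            # the coarse path reuses the fine Brownian increments
            n_f = 2 ** (l + 2)
            dW = rng.normal(0.0, np.sqrt(1.0 / n_f), n_f)
            S = 1.0
            for dw in dW:
                S += S * dw
            if not coupled:
                return S
            Sc = 1.0
            for dw in dW.reshape(-1, 2).sum(axis=1):   # coarse: pairwise-summed dW
                Sc += Sc * dw
            return S - Sc

        est = mlmc_estimate(sample_level, n_samples=[4000, 1000, 250, 60])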

  4. Position, Orientation and Velocity Detection of Unmanned Underwater Vehicles (UUVs) Using an Optical Detector Array

    PubMed Central

    Pe’eri, Shachak; Thein, May-Win; Rzhanov, Yuri; Celikkol, Barbaros; Swift, M. Robinson

    2017-01-01

    This paper presents a proof-of-concept optical detector array sensor system to be used in Unmanned Underwater Vehicle (UUV) navigation. The performance of the developed optical detector array was evaluated for its capability to estimate the position, orientation and forward velocity of UUVs with respect to a light source fixed underwater. The evaluations were conducted through Monte Carlo simulations and empirical tests under a variety of motion configurations. Monte Carlo simulations also evaluated the system total propagated uncertainty (TPU) by taking into account variations in water column turbidity, temperature and hardware noise that may degrade the system performance. Empirical tests were conducted to estimate UUV position and velocity during navigation to a light beacon. The Monte Carlo simulation and empirical results support the use of the detector array system for optics-based position feedback in UUV positioning applications. PMID:28758936

  5. McSnow: A Monte-Carlo Particle Model for Riming and Aggregation of Ice Particles in a Multidimensional Microphysical Phase Space

    NASA Astrophysics Data System (ADS)

    Brdar, S.; Seifert, A.

    2018-01-01

    We present a novel Monte-Carlo ice microphysics model, McSnow, to simulate the evolution of ice particles due to deposition, aggregation, riming, and sedimentation. The model is an application and extension of the super-droplet method of Shima et al. (2009) to the more complex problem of rimed ice particles and aggregates. For each individual super-particle, the ice mass, rime mass, rime volume, and the number of monomers are predicted establishing a four-dimensional particle-size distribution. The sensitivity of the model to various assumptions is discussed based on box model and one-dimensional simulations. We show that the Monte-Carlo method provides a feasible approach to tackle this high-dimensional problem. The largest uncertainty seems to be related to the treatment of the riming processes. This calls for additional field and laboratory measurements of partially rimed snowflakes.

  6. Simulation of radiation damping in rings, using stepwise ray-tracing methods

    DOE PAGES

    Meot, F.

    2015-06-26

    The ray-tracing code Zgoubi computes particle trajectories in arbitrary magnetic and/or electric field maps or analytical field models. It includes a built-in fitting procedure, spin tracking, and many Monte Carlo processes. The accuracy of the integration method makes it an efficient tool for multi-turn tracking in periodic machines. Energy loss by synchrotron radiation, based on Monte Carlo techniques, was introduced in Zgoubi in the early 2000s for studies regarding the linear collider beam delivery system. However, only recently has this Monte Carlo tool been used for systematic beam dynamics and spin diffusion studies in rings, including the eRHIC electron-ion collider project at Brookhaven National Laboratory. Some beam dynamics aspects of this recent use of Zgoubi capabilities, including considerations of accuracy as well as further benchmarking in the presence of synchrotron radiation in rings, are reported here.

  7. The difference between LSMC and replicating portfolio in insurance liability modeling.

    PubMed

    Pelsser, Antoon; Schweizer, Janina

    2016-01-01

    Solvency II requires insurers to calculate the 1-year value at risk of their balance sheet. This involves the valuation of the balance sheet in 1 year's time. As closed-form solutions for the value of insurance liabilities are generally not available, insurers turn to estimation procedures. While pure Monte Carlo simulation set-ups are theoretically sound, they are often infeasible in practice. Therefore, approximation methods are exploited. Among these, least squares Monte Carlo (LSMC) and portfolio replication are prominent and widely applied in practice. In this paper, we show that, while both are variants of regression-based Monte Carlo methods, they differ in one significant aspect: while the replicating portfolio approach only contains an approximation error, which converges to zero in the limit, LSMC additionally contains a projection error, which cannot be eliminated. It is revealed that the replicating portfolio technique enjoys numerous advantages and is therefore an attractive model choice.
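
    The core of LSMC can be sketched in a few lines: simulate outer scenarios of the risk factors, attach one (noisy) inner valuation to each, and regress the inner values on a basis of the risk factors to obtain a proxy for the conditional expectation. The one-factor setting, payoff, and cubic polynomial basis below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(5)

        n_outer = 10_000
        X = rng.normal(size=n_outer)               # risk factor at t = 1
        # one noisy inner sample of the discounted liability per scenario
        V = np.maximum(X + rng.normal(size=n_outer), 0.0)

        A = np.vander(X, 4)                        # cubic regression basis
        coef, *_ = np.linalg.lstsq(A, V, rcond=None)
        V_hat = A @ coef                           # LSMC proxy of E[V | X]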

  8. Comparison of screening-level and Monte Carlo approaches for wildlife food web exposure modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pastorok, R.; Butcher, M.; LaTier, A.

    1995-12-31

    The implications of using quantitative uncertainty analysis (e.g., Monte Carlo) and site-specific tissue residue data for wildlife exposure modeling were examined with data on trace elements at the Clark Fork River Superfund Site. Exposure of white-tailed deer, red fox, and American kestrel was evaluated using three approaches. First, a screening-level exposure model was based on conservative estimates of exposure parameters, including estimates of dietary residues derived from bioconcentration factors (BCFs) and soil chemistry. A second model, without Monte Carlo, was based on site-specific data for tissue residues of trace elements (As, Cd, Cu, Pb, Zn) in key dietary species and plausible assumptions for habitat spatial segmentation and other exposure parameters. Dietary species sampled included dominant grasses (tufted hairgrass and redtop), willows, alfalfa, barley, invertebrates (grasshoppers, spiders, and beetles), and deer mice. Third, a Monte Carlo analysis was based on the site-specific residue data and assumed or estimated distributions for the exposure parameters. Substantial uncertainties are associated with several exposure parameters, especially BCFs, such that exposure and risk may be greatly overestimated in screening-level approaches. The results of the three approaches are compared with respect to realism, practicality, and data gaps. Collection of site-specific data on trace element concentrations in plants and animals eaten by the target wildlife receptors is a cost-effective way to obtain realistic estimates of exposure. Implications of the results for exposure and risk estimates are discussed relative to the use of wildlife exposure modeling and the evaluation of remedial actions at Superfund sites.
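
    In code, the Monte Carlo variant of such an exposure model amounts to pushing parameter distributions through the dose equation, here dose = C x IR x AUF / BW; all distributions and values below are hypothetical illustrations, not the Clark Fork River data.

        import numpy as np

        rng = np.random.default_rng(11)
        n = 100_000

        C = rng.lognormal(np.log(5.0), 0.5, n)            # tissue conc. [mg/kg]
        IR = rng.normal(0.03, 0.005, n).clip(min=0.0)     # intake rate [kg/day]
        AUF = rng.uniform(0.2, 1.0, n)                    # area use factor
        BW = rng.normal(1.0, 0.1, n).clip(min=0.5)        # body weight [kg]

        dose = C * IR * AUF / BW                          # [mg/kg-day]
        print(np.percentile(dose, [50, 95]))              # central vs upper bound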

  9. Derivative-Free Estimation of the Score Vector and Observed Information Matrix with Application to State-Space Models

    DTIC Science & Technology

    2015-07-14

    …(2008). Sequential Monte Carlo smoothing with application to parameter estimation in non-linear state space models. Bernoulli, 14, 155-179. [22] Parikh… The other integral is over the ball B_Σ(θ*, δ), i.e., close to θ*; hence we perform a Taylor expansion of…

  10. Joint penalized-likelihood reconstruction of time-activity curves and regions-of-interest from projection data in brain PET

    NASA Astrophysics Data System (ADS)

    Krestyannikov, E.; Tohka, J.; Ruotsalainen, U.

    2008-06-01

    This paper presents a novel statistical approach for joint estimation of regions-of-interest (ROIs) and the corresponding time-activity curves (TACs) from dynamic positron emission tomography (PET) brain projection data. It is based on optimizing the joint objective function that consists of a data log-likelihood term and two penalty terms reflecting the available a priori information about the human brain anatomy. The developed local optimization strategy iteratively updates both the ROI and TAC parameters and is guaranteed to monotonically increase the objective function. The quantitative evaluation of the algorithm is performed with numerically and Monte Carlo-simulated dynamic PET brain data of the 11C-Raclopride and 18F-FDG tracers. The results demonstrate that the method outperforms the existing sequential ROI quantification approaches in terms of accuracy, and can noticeably reduce the errors in TACs arising due to the finite spatial resolution and ROI delineation.

  11. A Looping-Based Model for Quenching Repression

    PubMed Central

    Pollak, Yaroslav; Goldberg, Sarah; Amit, Roee

    2017-01-01

    We model the regulatory role of proteins bound to looped DNA using a simulation in which dsDNA is represented as a self-avoiding chain, and proteins as spherical protrusions. We simulate long self-avoiding chains using a sequential importance sampling Monte-Carlo algorithm, and compute the probabilities for chain looping with and without a protrusion. We find that a protrusion near one of the chain’s termini reduces the probability of looping, even for chains much longer than the protrusion–chain-terminus distance. This effect increases with protrusion size, and decreases with protrusion-terminus distance. The reduced probability of looping can be explained via an eclipse-like model, which provides a novel inhibitory mechanism. We test the eclipse model on two possible transcription-factor occupancy states of the D. melanogaster eve 3/7 enhancer, and show that it provides a possible explanation for the experimentally-observed eve stripe 3 and 7 expression patterns. PMID:28085884
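
    The chain-growth idea can be illustrated with classic Rosenbluth-style sequential importance sampling of a self-avoiding walk on a cubic lattice: each step is chosen uniformly among the unoccupied neighbors, and the importance weight records the number of available choices. The lattice setting and the nearest-neighbor looping criterion are simplifying assumptions standing in for the off-lattice chain with protrusions studied in the paper.

        import numpy as np

        rng = np.random.default_rng(13)
        MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]

        def rosenbluth_walk(n_steps):
            # grow one self-avoiding walk; weight 0.0 means it got trapped
            path, occupied, weight = [(0, 0, 0)], {(0, 0, 0)}, 1.0
            for _ in range(n_steps):
                x, y, z = path[-1]
                free = [(x + dx, y + dy, z + dz) for dx, dy, dz in MOVES
                        if (x + dx, y + dy, z + dz) not in occupied]
                if not free:
                    return path, 0.0
                weight *= len(free) / 6.0          # importance weight update
                nxt = free[rng.integers(len(free))]
                path.append(nxt)
                occupied.add(nxt)
            return path, weight

        walks = [rosenbluth_walk(40) for _ in range(5000)]
        w = np.array([wt for _, wt in walks])
        loop = np.array([sum(abs(c) for c in p[-1]) == 1 for p, _ in walks])
        p_loop = (w * loop).sum() / w.sum()        # weighted looping estimate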

  12. Method and system for detecting polygon boundaries of structures in images as particle tracks through fields of corners and pixel gradients

    DOEpatents

    Paglieroni, David W [Pleasanton, CA; Manay, Siddharth [Livermore, CA

    2011-12-20

    A stochastic method and system for detecting polygon structures in images, by detecting a set of best matching corners of predetermined acuteness α of a polygon model from a set of similarity scores based on GDM features of corners, and tracking polygon boundaries as particle tracks using a sequential Monte Carlo approach. The tracking involves initializing polygon boundary tracking by selecting pairs of corners from the set of best matching corners to define a first side of a corresponding polygon boundary; tracking all intermediate sides of the polygon boundaries using a particle filter; and terminating polygon boundary tracking by determining the last side of the tracked polygon boundaries to close the polygon boundaries. The particle tracks are then blended to determine polygon matches, which may be made available, such as to a user, for ranking and inspection.

  13. Source localization using recursively applied and projected (RAP) MUSIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, J.C.; Leahy, R.M.

    A new method for source localization is described that is based on a modification of the well-known multiple signal classification (MUSIC) algorithm. In classical MUSIC, the array manifold vector is projected onto an estimate of the signal subspace, but errors in the estimate can make localization of multiple sources difficult. Recursively applied and projected (RAP) MUSIC uses each successively located source to form an intermediate array gain matrix, and projects both the array manifold and the signal subspace estimate into its orthogonal complement. The MUSIC projection is then performed in this reduced subspace. Using the metric of principal angles, the authors describe a general form of the RAP-MUSIC algorithm for the case of diversely polarized sources. Through a uniform linear array simulation, the authors demonstrate the improved Monte Carlo performance of RAP-MUSIC relative to MUSIC and two other sequential subspace methods, S- and IES-MUSIC.

  14. Monte Carlo method for calculating the radiation skyshine produced by electron accelerators

    NASA Astrophysics Data System (ADS)

    Kong, Chaocheng; Li, Quanfeng; Chen, Huaibi; Du, Taibin; Cheng, Cheng; Tang, Chuanxiang; Zhu, Li; Zhang, Hui; Pei, Zhigang; Ming, Shenjin

    2005-06-01

    Using the MCNP4C Monte Carlo code, the X-ray skyshine produced by 9 MeV, 15 MeV and 21 MeV electron linear accelerators was calculated with a new two-step method combined with the split-and-roulette variance reduction technique. Results of the Monte Carlo simulation, the empirical formulas used for skyshine calculation, and the dose measurements were analyzed and compared. In conclusion, the skyshine dose measurements agreed reasonably well with the results computed by the Monte Carlo method, but deviated from the computational results given by the empirical formulas. The effect on the skyshine dose of different accelerator head structures is also discussed in this paper.

  15. Self-learning Monte Carlo method

    DOE PAGES

    Liu, Junwei; Qi, Yang; Meng, Zi Yang; ...

    2017-01-04

    Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. Lastly, we demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10–20 times speedup.

  16. Random Numbers and Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
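
    A small worked example of the importance-sampling idea mentioned above: estimate the integral of e^x over [0, 1] (exactly e - 1) with plain uniform sampling and with samples drawn, by inverse CDF, from the density p(x) = (1 + x)/1.5, which roughly tracks the integrand and therefore reduces the variance.

        import numpy as np

        rng = np.random.default_rng(17)
        n = 100_000

        x = rng.random(n)                          # plain Monte Carlo
        plain = np.exp(x).mean()

        u = rng.random(n)                          # importance sampling
        xs = np.sqrt(1.0 + 3.0 * u) - 1.0          # inverse CDF of p(x)
        importance = (np.exp(xs) / ((1.0 + xs) / 1.5)).mean()

        print(plain, importance, np.e - 1.0)       # both estimate ~1.71828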

  17. Uncertainties in models of tropospheric ozone based on Monte Carlo analysis: Tropospheric ozone burdens, atmospheric lifetimes and surface distributions

    NASA Astrophysics Data System (ADS)

    Derwent, Richard G.; Parrish, David D.; Galbally, Ian E.; Stevenson, David S.; Doherty, Ruth M.; Naik, Vaishali; Young, Paul J.

    2018-05-01

    Recognising that global tropospheric ozone models have many uncertain input parameters, an attempt has been made to employ Monte Carlo sampling to quantify the uncertainties in model output that arise from global tropospheric ozone precursor emissions and from ozone production and destruction in a global Lagrangian chemistry-transport model. Ninety-eight quasi-randomly sampled Monte Carlo model runs were completed, and the uncertainties were quantified in the tropospheric burdens and lifetimes of ozone, carbon monoxide and methane, together with the surface distribution and seasonal cycle in ozone. The results have shown a satisfactory degree of convergence and provide a first estimate of the likely uncertainties in tropospheric ozone model outputs. There are likely to be diminishing returns in carrying out many more Monte Carlo runs in order to further refine these outputs. Uncertainties due to model formulation were separately addressed using the results from 14 Atmospheric Chemistry Coupled Climate Model Intercomparison Project (ACCMIP) chemistry-climate models. The 95% confidence ranges surrounding the ACCMIP model burdens and lifetimes for ozone, carbon monoxide and methane were somewhat smaller than for the Monte Carlo estimates. This reflected the situation where the ACCMIP models used harmonised emissions data and differed only in their meteorological data and model formulations, whereas a conscious effort was made to describe the uncertainties in the ozone precursor emissions and in the kinetic and photochemical data in the Monte Carlo runs. Attention was focussed on the model predictions of the ozone seasonal cycles at three marine boundary layer stations: Mace Head, Ireland, Trinidad Head, California and Cape Grim, Tasmania. Despite comprehensively addressing the uncertainties due to global emissions and ozone sources and sinks, none of the Monte Carlo runs were able to generate seasonal cycles that matched the observations at all three MBL stations. Although the observed seasonal cycles were found to fall within the confidence limits of the ACCMIP members, this was because the model seasonal cycles spanned extremely wide ranges and there was no single ACCMIP member that performed best for each station. Further work is required to examine the parameterisation of convective mixing in the models to see if this erodes the isolation of the marine boundary layer from the free troposphere and thus hides the models' real ability to reproduce ozone seasonal cycles over marine stations.
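
    A sketch of the quasi-random sampling step using SciPy's Sobol generator is shown below; the three uncertain inputs, their ranges, and the run_ozone_model driver are hypothetical placeholders for the emission and kinetic uncertainties described above.

        import numpy as np
        from scipy.stats import qmc

        sampler = qmc.Sobol(d=3, scramble=True, seed=0)
        unit = sampler.random_base2(m=7)          # 2**7 = 128 runs in [0, 1)^3

        lower = np.array([30.0, 0.8, 0.5])        # illustrative parameter ranges
        upper = np.array([60.0, 1.2, 2.0])
        params = qmc.scale(unit, lower, upper)    # one input set per model run

        for p in params:
            pass                                  # run_ozone_model(p): hypothetical driver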

  18. Commissioning of a Varian Clinac iX 6 MV photon beam using Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dirgayussa, I Gde Eka, E-mail: ekadirgayussa@gmail.com; Yani, Sitti; Haryanto, Freddy, E-mail: freddy@fi.itb.ac.id

    2015-09-30

    Monte Carlo modelling of a linear accelerator is the first and most important step in Monte Carlo dose calculations in radiotherapy. Monte Carlo is considered today to be the most accurate and detailed calculation method in different fields of medical physics. In this research, we developed a photon beam model of a Varian Clinac iX 6 MV equipped with a Millennium MLC 120 for dose calculation purposes, using the BEAMnrc/DOSXYZnrc Monte Carlo system based on the underlying EGSnrc particle transport code. The Monte Carlo commissioning of the linac head was divided into two stages: designing the head model using BEAMnrc and characterizing it using BEAMDP, followed by analysis of the differences between simulation and measurement data using DOSXYZnrc. In the first step, to reduce simulation time, the virtual treatment head was built in two parts (a patient-dependent component and a patient-independent component). The incident electron energy was varied over 6.1 MeV, 6.2 MeV, 6.3 MeV, 6.4 MeV and 6.6 MeV, and the FWHM (full width at half maximum) of the source was 1 mm. The phase-space file from the virtual model was characterized using BEAMDP. The results of the MC calculations using DOSXYZnrc in a water phantom, namely percent depth doses (PDDs) and beam profiles at 10 cm depth, were compared with measurements. The commissioning was considered complete when the difference between measured and calculated relative depth-dose data along the central axis, and between dose profiles at 10 cm depth, was ≤ 5%. The effect of beam width on percentage depth doses and beam profiles was also studied. Results of the virtual model were in close agreement with measurements for an incident electron energy of 6.4 MeV. Our results showed that the photon beam width could be tuned using the large-field beam profile at the depth of maximum dose. The Monte Carlo model developed in this study accurately represents the Varian Clinac iX with the Millennium MLC 120 and can be used for reliable patient dose calculations. In this commissioning process, the acceptance criteria for the dose differences in PDDs and dose profiles were achieved with an incident electron energy of 6.4 MeV.
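
    A hedged sketch of the 5% acceptance test used in this commissioning; the depth grid and the measured/simulated PDD arrays are invented placeholders.

```python
# Sketch of the <=5% PDD acceptance test described above; all arrays are
# invented placeholders, not commissioning data.
import numpy as np

def pdd_within_tolerance(measured, simulated, tol_percent=5.0):
    """Both arrays hold PDD values (%) sampled at the same depths."""
    diff = np.abs(measured - simulated)
    return bool(np.all(diff <= tol_percent)), float(diff.max())

depths = np.linspace(0.0, 20.0, 41)                  # cm, placeholder grid
measured = 100.0 * np.exp(-0.05 * depths)            # fake measured PDD
simulated = measured + np.random.default_rng(2).normal(0.0, 1.0, depths.size)

ok, worst = pdd_within_tolerance(measured, simulated)
print(f"within 5% everywhere: {ok}; worst deviation: {worst:.2f}%")
```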

  19. Monte Carlo charged-particle tracking and energy deposition on a Lagrangian mesh.

    PubMed

    Yuan, J; Moses, G A; McKenty, P W

    2005-10-01

    A Monte Carlo algorithm for alpha particle tracking and energy deposition on a cylindrical computational mesh in a Lagrangian hydrodynamics code used for inertial confinement fusion (ICF) simulations is presented. The straight line approximation is used to follow the propagation of "Monte Carlo particles", which represent collections of alpha particles generated from thermonuclear deuterium-tritium (DT) reactions. Energy deposition in the plasma is modeled by the continuous slowing down approximation. The scheme addresses various aspects arising in the coupling of Monte Carlo tracking with Lagrangian hydrodynamics, such as non-orthogonal, severely distorted mesh cells, particle relocation on the moving mesh and particle relocation after rezoning. A comparison with the flux-limited multi-group diffusion transport method is presented for a polar direct drive target design for the National Ignition Facility. Simulations show that the Monte Carlo transport method predicts earlier ignition than the diffusion method and generates a higher hot spot temperature. Nearly linear speed-up is achieved for multi-processor parallel simulations.
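
    The following toy sketch (not the authors' code) shows straight-line Monte Carlo tracking with continuous-slowing-down energy deposition on a 1D grid; the stopping power, source energy and mesh are invented stand-ins for the cylindrical Lagrangian mesh.

```python
# Toy straight-line tracking with continuous-slowing-down deposition on a
# 1D grid (invented stopping power and mesh; the paper uses a 2D
# cylindrical Lagrangian mesh).
import numpy as np

rng = np.random.default_rng(3)
n_cells, cell_dx = 50, 0.1            # 1D stand-in for the hydro mesh (cm)
edep = np.zeros(n_cells)              # per-cell deposited energy (MeV)

def stopping_power(E):
    return 5.0 * np.sqrt(max(E, 1e-6))    # toy dE/dx in MeV/cm (assumption)

for _ in range(1000):                      # "Monte Carlo particles" = alpha bundles
    x, E = 0.0, 3.5                        # DT alphas are born at 3.5 MeV
    mu = rng.uniform(0.05, 1.0)            # forward direction cosine (toy choice)
    while E > 0.0 and x < n_cells * cell_dx:
        i = int(x / cell_dx)
        path = cell_dx / mu                # straight-line path length in the cell
        dE = min(E, stopping_power(E) * path)   # CSDA loss along the step
        edep[i] += dE
        E -= dE
        x += cell_dx

print("fraction of born energy deposited:", edep.sum() / (1000 * 3.5))
```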

  20. QMCPACK: an open source ab initio quantum Monte Carlo package for the electronic structure of atoms, molecules and solids

    DOE PAGES

    Kim, Jeongnim; Baczewski, Andrew T.; Beaudet, Todd D.; ...

    2018-04-19

    QMCPACK is an open source quantum Monte Carlo package for ab-initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater-Jastrow type trial wave functions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit (CPU) and graphical processing unit (GPU) systems. We detail the program’s capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://www.qmcpack.org.

  1. Monte Carlo simulation for kinetic chemotaxis model: An application to the traveling population wave

    NASA Astrophysics Data System (ADS)

    Yasuda, Shugo

    2017-02-01

    A Monte Carlo simulation of chemotactic bacteria is developed on the basis of the kinetic model and is applied to a one-dimensional traveling population wave in a microchannel. In this simulation, the Monte Carlo method, which calculates the run-and-tumble motions of bacteria, is coupled with a finite volume method that calculates the macroscopic transport of the chemical cues in the environment. The simulation method can successfully reproduce the traveling population wave of bacteria that was observed experimentally and reveal the microscopic dynamics of the bacteria coupled with the macroscopic transport of the chemical cues and the bacterial population density. The results obtained by the Monte Carlo method are also compared with the asymptotic solution derived from the kinetic chemotaxis equation in the continuum limit, where the Knudsen number, defined as the ratio of the mean free path of a bacterium to the characteristic length of the system, vanishes. The validity of the Monte Carlo method in the asymptotic behavior at small Knudsen numbers is numerically verified.
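
    A schematic run-and-tumble walker in the spirit of the simulation described above; the frozen chemoattractant profile stands in for the finite-volume chemical field, and all rates are invented.

```python
# Schematic run-and-tumble walker (invented rates; the chemoattractant
# profile c(x) is frozen, standing in for the coupled finite-volume field).
import numpy as np

rng = np.random.default_rng(4)

def c(x):
    return np.exp(-(x - 5.0) ** 2)        # fixed chemoattractant peak at x = 5

speed, dt, base_rate = 1.0, 0.01, 1.0
x, direction = 0.0, 1.0
for _ in range(20_000):
    # sensed gradient along the current run direction
    drift = direction * (c(x + 1e-3) - c(x - 1e-3)) / 2e-3
    rate = base_rate * (1.0 - 0.5 * np.tanh(drift))   # tumble less when climbing
    if rng.random() < rate * dt:                       # tumble: pick a new direction
        direction = rng.choice([-1.0, 1.0])
    x += direction * speed * dt

print("final position (biased toward the peak at x = 5):", round(x, 2))
```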

  2. QMCPACK: an open source ab initio quantum Monte Carlo package for the electronic structure of atoms, molecules and solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jeongnim; Baczewski, Andrew T.; Beaudet, Todd D.

    QMCPACK is an open source quantum Monte Carlo package for ab-initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater-Jastrow type trial wave functions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit (CPU) and graphical processing unit (GPU) systems. We detail the program’s capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://www.qmcpack.org.

  3. New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation

    NASA Astrophysics Data System (ADS)

    Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Kim, Seonghoon; Sohn, Jason W.

    2015-02-01

    In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient’s 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transportation of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (= phantoms) with sufficient accuracy using the tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve, simultaneously, both dose accuracy and computation speed, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ~40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transportation in tetrahedral-mesh geometry.

  4. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    PubMed

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
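
    The sketch below contrasts the two sampling approaches compared in this article on a toy fault tree with lognormally distributed basic events; the tree structure and distribution parameters are invented.

```python
# Sketch contrasting full Monte Carlo sampling with Wilks's order-statistic
# bound on a toy fault tree TOP = (A AND B) OR C; the lognormal parameters
# are invented, not taken from the article.
import numpy as np

rng = np.random.default_rng(5)

def sample_basic_events(n):
    # lognormal basic-event probabilities (median 1e-3, invented spread), capped at 1
    return np.minimum(rng.lognormal(np.log(1e-3), 0.67, size=(n, 3)), 1.0)

def top_event_prob(p):
    a, b, c = p[:, 0], p[:, 1], p[:, 2]
    return a * b + c - a * b * c             # exact union for the toy tree

full = top_event_prob(sample_basic_events(100_000))
print("Monte Carlo 95th percentile:", np.percentile(full, 95))

# Wilks: with 59 runs the sample maximum is a 95%-confidence upper bound
# on the 95th percentile, at a tiny fraction of the sampling cost.
print("Wilks 95/95 upper bound   :", top_event_prob(sample_basic_events(59)).max())
```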

  5. SU-E-T-467: Implementation of Monte Carlo Dose Calculation for a Multileaf Collimator Equipped Robotic Radiotherapy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, JS; Fan, J; Ma, C-M

    Purpose: To improve the treatment efficiency and capabilities for full-body treatment, a robotic radiosurgery system has been equipped with a multileaf collimator (MLC) to extend its accuracy and precision to radiation therapy. The goal of this work is to model the MLC and include it in the Monte Carlo patient dose calculation. Methods: The radiation source and the MLC were carefully modeled to consider the effects of the source size, collimator scattering, leaf transmission and leaf end shape. A source model was built based on the output factors, percentage depth dose curves and lateral dose profiles measured in a water phantom. The MLC leaf shape, leaf end design and leaf tilt for minimizing the interleaf leakage, and their effects on beam fluence and energy spectrum, were all considered in the calculation. Transmission/leakage was added to the fluence based on the transmission factors of the leaf and the leaf end. The transmitted photon energy was tuned to consider beam hardening effects. The calculated results with the Monte Carlo implementation were compared with measurements in a homogeneous water phantom and in inhomogeneous phantoms with slab lung or bone material for 4 square fields and 9 irregularly shaped fields. Results: The calculated output factors are compared with the measured ones and the difference is within 1% for different field sizes. The calculated dose distributions in the phantoms show good agreement with measurements using diode detectors and films. The dose difference is within 2% inside the field and the distance to agreement is within 2 mm in the penumbra region. The gamma passing rate is more than 95% with 2%/2mm criteria for all the test cases. Conclusion: The implementation of Monte Carlo dose calculation for an MLC-equipped robotic radiosurgery system was completed successfully. The accuracy of Monte Carlo dose calculation with the MLC is clinically acceptable. This work was supported by Accuray Inc.

  6. Uncertainty Analysis Based on Sparse Grid Collocation and Quasi-Monte Carlo Sampling with Application in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.

    2011-12-01

    Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using the sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids all disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
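
    A schematic of the two-step procedure described above: a cheap quadratic surrogate stands in for the sparse-grid interpolant (constructing a true sparse grid is beyond this sketch), and Sobol' quasi-Monte Carlo points replace MCMC draws in a 2D parameter space.

```python
# Surrogate + quasi-Monte Carlo sketch (assumption: a quadratic placeholder
# replaces the sparse-grid interpolant of the forward model).
import numpy as np
from scipy.stats import qmc

def surrogate_log_posterior(theta):
    # placeholder surrogate: quadratic log-posterior centred at (1, 2)
    return -0.5 * ((theta[:, 0] - 1.0) ** 2 + (theta[:, 1] - 2.0) ** 2)

sampler = qmc.Sobol(d=2, seed=6)
theta = qmc.scale(sampler.random_base2(m=14), [-4.0, -3.0], [6.0, 7.0])  # 2**14 points

w = np.exp(surrogate_log_posterior(theta))
w /= w.sum()                                  # posterior weights at the samples

prediction = theta[:, 0] + theta[:, 1]        # toy model prediction per sample
print("posterior mean prediction:", np.sum(w * prediction))   # accumulate by weight
```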

  7. PENGEOM-A general-purpose geometry package for Monte Carlo simulation of radiation transport in material systems defined by quadric surfaces

    NASA Astrophysics Data System (ADS)

    Almansa, Julio; Salvat-Pujol, Francesc; Díaz-Londoño, Gloria; Carnicer, Artur; Lallena, Antonio M.; Salvat, Francesc

    2016-02-01

    The Fortran subroutine package PENGEOM provides a complete set of tools to handle quadric geometries in Monte Carlo simulations of radiation transport. The material structure where radiation propagates is assumed to consist of homogeneous bodies limited by quadric surfaces. The PENGEOM subroutines (a subset of the PENELOPE code) track particles through the material structure, independently of the details of the physics models adopted to describe the interactions. Although these subroutines are designed for detailed simulations of photon and electron transport, where all individual interactions are simulated sequentially, they can also be used in mixed (class II) schemes for simulating the transport of high-energy charged particles, where the effect of soft interactions is described by the random-hinge method. The definition of the geometry and the details of the tracking algorithm are tailored to optimize simulation speed. The use of fuzzy quadric surfaces minimizes the impact of round-off errors. The provided software includes a Java graphical user interface for editing and debugging the geometry definition file and for visualizing the material structure. Images of the structure are generated by using the tracking subroutines and, hence, they describe the geometry actually passed to the simulation code.

  8. Benchmarking PARTISN with Analog Monte Carlo: Moments of the Neutron Number and the Cumulative Fission Number Probability Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Rourke, Patrick Francis

    The purpose of this report is to provide the reader with an understanding of how a Monte Carlo neutron transport code was written, developed, and evolved to calculate the probability distribution functions (PDFs) and their moments for the neutron number at a final time as well as the cumulative fission number, along with introducing several basic Monte Carlo concepts.

  9. Sample Size and Power Estimates for a Confirmatory Factor Analytic Model in Exercise and Sport: A Monte Carlo Approach

    ERIC Educational Resources Information Center

    Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying

    2011-01-01

    Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…
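
    As a minimal illustration of Monte Carlo power estimation, the sketch below simulates many data sets at candidate sample sizes and counts significant results; a two-sample t-test stands in for the confirmatory factor analytic model, which would require an SEM package.

```python
# Monte Carlo power estimation sketch; the t-test, effect size and alpha
# are stand-in assumptions, not the article's factor-analytic setup.
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)

def estimate_power(n_per_group, effect=0.5, n_sims=2_000, alpha=0.05):
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect, 1.0, n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sims

for n in (30, 64, 100):
    print(f"n = {n:3d} per group -> estimated power {estimate_power(n):.2f}")
```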

  10. Perturbative two- and three-loop coefficients from large β Monte Carlo

    NASA Astrophysics Data System (ADS)

    Lepage, G. P.; Mackenzie, P. B.; Shakespeare, N. H.; Trottier, H. D.

    Perturbative coefficients for Wilson loops and the static quark self-energy are extracted from Monte Carlo simulations at large β on finite volumes, where all the lattice momenta are large. The Monte Carlo results are in excellent agreement with perturbation theory through second order. New results for third order coefficients are reported. Twisted boundary conditions are used to eliminate zero modes and to suppress Z3 tunneling.

  11. Perturbative two- and three-loop coefficients from large β Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    G.P. Lepage; P.B. Mackenzie; N.H. Shakespeare

    1999-10-18

    Perturbative coefficients for Wilson loops and the static quark self-energy are extracted from Monte Carlo simulations at large β on finite volumes, where all the lattice momenta are large. The Monte Carlo results are in excellent agreement with perturbation theory through second order. New results for third order coefficients are reported. Twisted boundary conditions are used to eliminate zero modes and to suppress Z3 tunneling.

  12. Development of a multi-modal Monte-Carlo radiation treatment planning system combined with PHITS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumada, Hiroaki; Nakamura, Takemi; Komeda, Masao

    A new multi-modal Monte-Carlo radiation treatment planning system is under development at the Japan Atomic Energy Agency. This system (development code: JCDS-FX) builds on the fundamental technologies of JCDS. JCDS was developed by JAEA to perform treatment planning for boron neutron capture therapy (BNCT), which is being conducted at JRR-4 in JAEA. JCDS has many advantages based on practical accomplishments in actual clinical trials of BNCT at JRR-4, and these advantages have been carried over to JCDS-FX. One of the features of JCDS-FX is that PHITS has been applied to the particle transport calculation. PHITS is a multipurpose particle Monte-Carlo transport code; the application of PHITS thus makes it possible to evaluate doses not only for BNCT but also for several radiotherapies such as proton therapy. To verify the calculation accuracy of JCDS-FX with PHITS for BNCT, treatment planning of an actual BNCT session conducted at JRR-4 was performed retrospectively. The verification results demonstrated that the new system is applicable to BNCT clinical trials in practical use. Within the framework of R&D for laser-driven proton therapy, we have begun to study the application of JCDS-FX combined with PHITS to proton therapy in addition to BNCT. Several features and performances of the new multimodal Monte-Carlo radiotherapy planning system are presented.

  13. A smart Monte Carlo procedure for production costing and uncertainty analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, C.; Stremel, J.

    1996-11-01

    Electric utilities using chronological production costing models to decide whether to buy or sell power over the next week or next few weeks need to determine potential profits or losses under a number of uncertainties. A large amount of money can be at stake--often $100,000 a day or more--and one party to the sale must always take on the risk. In the case of fixed price ($/MWh) contracts, the seller accepts the risk. In the case of cost-plus contracts, the buyer must accept the risk. So, modeling uncertainty and understanding the risk accurately can improve the competitive edge of the user. This paper investigates an efficient procedure for representing risks and costs from capacity outages. Typically, production costing models use an algorithm based on some form of random number generator to select resources as available or on outage. These algorithms allow experiments to be repeated and gains and losses to be observed in a short time. The authors perform several experiments to examine the capability of three unit outage selection methods and measure their results. Specifically, a brute force Monte Carlo procedure, a Monte Carlo procedure with Latin Hypercube sampling, and a Smart Monte Carlo procedure with cost stratification and directed sampling are examined.
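
    A toy contrast between brute-force and stratified sampling of a single unit's outage state, in the spirit of the comparison above; the forced outage rate and cost figures are invented.

```python
# Toy comparison of brute-force vs. stratified outage sampling; the forced
# outage rate and $/MWh figures are invented placeholders.
import numpy as np

rng = np.random.default_rng(7)
for_rate = 0.08                      # forced outage rate (assumption)
cost_up, cost_down = 20.0, 95.0      # cost with the unit up vs. on outage

def brute_force(n):
    on_outage = rng.random(n) < for_rate
    return np.where(on_outage, cost_down, cost_up).mean()

def stratified(n):
    # sample the up/down strata exactly in proportion, removing the noise
    # on the outage state itself (a crude form of directed sampling)
    n_down = int(round(for_rate * n))
    return (n_down * cost_down + (n - n_down) * cost_up) / n

print("brute force:", brute_force(200))
print("stratified :", stratified(200))
print("exact      :", for_rate * cost_down + (1 - for_rate) * cost_up)
```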

  14. Understanding quantum tunneling using diffusion Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Inack, E. M.; Giudici, G.; Parolini, T.; Santoro, G.; Pilati, S.

    2018-03-01

    In simple ferromagnetic quantum Ising models characterized by an effective double-well energy landscape the characteristic tunneling time of path-integral Monte Carlo (PIMC) simulations has been shown to scale as the incoherent quantum-tunneling time, i.e., as 1 /Δ2 , where Δ is the tunneling gap. Since incoherent quantum tunneling is employed by quantum annealers (QAs) to solve optimization problems, this result suggests that there is no quantum advantage in using QAs with respect to quantum Monte Carlo (QMC) simulations. A counterexample is the recently introduced shamrock model (Andriyash and Amin, arXiv:1703.09277), where topological obstructions cause an exponential slowdown of the PIMC tunneling dynamics with respect to incoherent quantum tunneling, leaving open the possibility for potential quantum speedup, even for stoquastic models. In this work we investigate the tunneling time of projective QMC simulations based on the diffusion Monte Carlo (DMC) algorithm without guiding functions, showing that it scales as 1 /Δ , i.e., even more favorably than the incoherent quantum-tunneling time, both in a simple ferromagnetic system and in the more challenging shamrock model. However, a careful comparison between the DMC ground-state energies and the exact solution available for the transverse-field Ising chain indicates an exponential scaling of the computational cost required to keep a fixed relative error as the system size increases.

  15. Non-vascular interventional procedures: effective dose to patient and equivalent dose to abdominal organs by means of DICOM images and Monte Carlo simulation.

    PubMed

    Longo, Mariaconcetta; Marchioni, Chiara; Insero, Teresa; Donnarumma, Raffaella; D'Adamo, Alessandro; Lucatelli, Pierleone; Fanelli, Fabrizio; Salvatori, Filippo Maria; Cannavale, Alessandro; Di Castro, Elisabetta

    2016-03-01

    This study evaluates the X-ray exposure of patients undergoing abdominal extra-vascular interventional procedures by means of Digital Imaging and COmmunications in Medicine (DICOM) image headers and Monte Carlo simulation. The main aim was to assess the effective and equivalent doses, under the hypothesis of their correlation with the dose area product (DAP) measured during each examination. This makes it possible to collect dosimetric information about each patient and to evaluate the associated risks without resorting to in vivo dosimetry. The dose calculation was performed for 79 procedures through the Monte Carlo simulator PCXMC (A PC-based Monte Carlo program for calculating patient doses in medical X-ray examinations), using the real geometrical and dosimetric irradiation conditions automatically extracted from the DICOM headers. The DAP measurements were also validated by using thermoluminescent dosemeters on an anthropomorphic phantom. The expected linear correlation between effective doses and DAP was confirmed with an R^2 of 0.974. Moreover, in order to easily calculate patient doses, conversion coefficients that relate equivalent doses to measurable quantities, such as DAP, were obtained. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  16. Prediction of betavoltaic battery output parameters based on SEM measurements and Monte Carlo simulation.

    PubMed

    Yakimov, Eugene B

    2016-06-01

    An approach for predicting the output parameters of (63)Ni-based betavoltaic batteries is described. It consists of multilayer Monte Carlo simulation to obtain the depth dependence of the excess carrier generation rate inside the semiconductor converter, a determination of the collection probability based on electron beam induced current measurements, a calculation of the current induced in the semiconductor converter by beta radiation, and SEM measurements of output parameters using the calculated induced current value. This approach makes it possible to predict the betavoltaic battery parameters and to optimize the converter design for any real semiconductor structure and any thickness and specific activity of the beta-radiation source. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Cloud-based Monte Carlo modelling of BSSRDF for the rendering of human skin appearance (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Doronin, Alexander; Rushmeier, Holly E.; Meglinski, Igor; Bykov, Alexander V.

    2016-03-01

    We present a new Monte Carlo based approach for the modelling of the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for accurate rendering of human skin appearance. Variations in both the skin tissue structure and the major chromophores are taken into account, corresponding to different ethnic and age groups. The computational solution utilizes HTML5, accelerated by graphics processing units (GPUs), and is therefore convenient for practical use on most modern computer-based devices and operating systems. Simulated human skin reflectance spectra, the corresponding skin colours and examples of 3D face rendering are presented and compared with the results of phantom studies.

  18. A united event grand canonical Monte Carlo study of partially doped polyaniline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byshkin, M. S., E-mail: mbyshkin@unisa.it, E-mail: gmilano@unisa.it; Correa, A.; Buonocore, F.

    2013-12-28

    A Grand Canonical Monte Carlo scheme, based on united events combining protonation/deprotonation and insertion/deletion of HCl molecules, is proposed for the generation of polyaniline structures at intermediate doping levels between 0% (PANI EB) and 100% (PANI ES). A procedure based on this scheme and subsequent structure relaxations using molecular dynamics is described and validated. Using the proposed scheme and the corresponding procedure, atomistic models of amorphous PANI-HCl structures were generated and studied at different doping levels. Density, structure factors, and solubility parameters were calculated. Their values agree well with available experimental data. The interactions of HCl with PANI have been studied and the distribution of their energies has been analyzed. The procedure has also been extended to the generation of PANI models including adsorbed water, and the effect of the inclusion of water molecules on PANI properties has also been modeled and discussed. The protocol described here is general, and the proposed United Event Grand Canonical Monte Carlo scheme can be easily extended to similar polymeric materials used in gas sensing and to other systems involving adsorption and chemical reaction steps.
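
    For orientation, the sketch below shows a standard textbook grand canonical Monte Carlo insertion/deletion step (not the united-event scheme of the paper); the 1D toy pair potential and all parameters are invented.

```python
# Generic grand canonical MC insertion/deletion step (textbook acceptance
# rules with the thermal wavelength set to 1; not the paper's united-event
# scheme). All parameters and the pair potential are invented.
import numpy as np

rng = np.random.default_rng(8)
beta, mu, V = 1.0, -2.0, 100.0       # inverse temperature, chemical potential, "volume"
positions = []                        # 1D particle coordinates, for brevity

def interaction_energy(x, others):
    return sum(4.0 * np.exp(-abs(x - y)) for y in others)   # toy repulsion

def gcmc_step():
    if rng.random() < 0.5:                               # attempt an insertion
        x = rng.uniform(0.0, V)
        dU = interaction_energy(x, positions)
        if rng.random() < min(1.0, (V / (len(positions) + 1)) * np.exp(beta * (mu - dU))):
            positions.append(x)
    elif positions:                                      # attempt a deletion
        i = rng.integers(len(positions))
        u_i = interaction_energy(positions[i], positions[:i] + positions[i + 1:])
        # dU on deletion is -u_i; acceptance = (N/V) exp(-beta*mu) exp(+beta*u_i)
        if rng.random() < min(1.0, (len(positions) / V) * np.exp(-beta * mu + beta * u_i)):
            positions.pop(i)

for _ in range(50_000):
    gcmc_step()
print("equilibrium particle number ~", len(positions))
```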

  19. Simulated Performance of the Orbiting Wide-angle Light Collectors (OWL) Experiment

    NASA Technical Reports Server (NTRS)

    Krizmanic, J. F.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    The Orbiting Wide-angle Light collectors (OWL) experiment is in NASA's mid-term strategic plan and will stereoscopically image, from equatorial orbit, the air fluorescence signal generated by airshowers induced by the ultrahigh energy (E greater than a few x 10^19 eV) component of the cosmic radiation. The use of a space-based platform enables an extremely large event acceptance aperture and thus will allow a high statistics measurement of these rare events. Detailed Monte Carlo simulations are required to quantify the physics potential of the mission as well as optimize the instrumental parameters. This paper reports on the results of the GSFC Monte Carlo simulation for two different OWL instrument baseline designs. These results indicate that, assuming a continuation of the cosmic ray spectrum (~E^-2.75), OWL could have an event rate of 4000 events/year with E greater than or equal to 10^20 eV. Preliminary results, based upon these Monte Carlo simulations, indicate that events can be accurately reconstructed in the detector focal plane arrays for the OWL instrument baseline designs under consideration.

  20. Comparison of a layered slab and an atlas head model for Monte Carlo fitting of time-domain near-infrared spectroscopy data of the adult head

    PubMed Central

    Selb, Juliette; Ogden, Tyler M.; Dubb, Jay; Fang, Qianqian; Boas, David A.

    2014-01-01

    Near-infrared spectroscopy (NIRS) estimations of the adult brain baseline optical properties based on a homogeneous model of the head are known to introduce significant contamination from extracerebral layers. More complex models have been proposed and occasionally applied to in vivo data, but their performances have never been characterized on realistic head structures. Here we implement a flexible fitting routine of time-domain NIRS data using graphics processing unit based Monte Carlo simulations. We compare the results for two different geometries: a two-layer slab with variable thickness of the first layer and a template atlas head registered to the subject’s head surface. We characterize the performance of the Monte Carlo approaches for fitting the optical properties from simulated time-resolved data of the adult head. We show that both geometries provide better results than the commonly used homogeneous model, and we quantify the improvement in terms of accuracy, linearity, and cross-talk from extracerebral layers. PMID:24407503

  1. Constraining physical parameters of ultra-fast outflows in PDS 456 with Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Hagino, K.; Odaka, H.; Done, C.; Gandhi, P.; Takahashi, T.

    2014-07-01

    Deep absorption lines with an extremely high velocity of ~0.3c observed in PDS 456 spectra strongly indicate the existence of ultra-fast outflows (UFOs). However, the launching and acceleration mechanisms of UFOs are still uncertain. One possible way to address this is to constrain the physical parameters as a function of distance from the source. In order to study the spatial dependence of the parameters, it is essential to adopt 3-dimensional Monte Carlo simulations that treat radiation transfer in arbitrary geometry. We have developed a new simulation code for X-ray radiation reprocessed in an AGN outflow. Our code implements radiative transfer in a 3-dimensional biconical disk wind geometry, based on the Monte Carlo simulation framework MONACO (Watanabe et al. 2006, Odaka et al. 2011). Our simulations reproduce the FeXXV and FeXXVI absorption features seen in the spectra. Broad Fe emission lines, which reflect the geometry and viewing angle, are also successfully reproduced. By comparing the simulated spectra with Suzaku data, we obtained constraints on the physical parameters. We discuss the launching and acceleration mechanisms of UFOs in PDS 456 based on our analysis.

  2. Binocular optical axis parallelism detection precision analysis based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Ying, Jiaju; Liu, Bingqi

    2018-02-01

    Starting from the working principle of the digital calibration instrument for the optical-axis parallelism of binocular photoelectric instruments, and considering all components of the instrument, the various factors that affect system precision are analyzed and a precision analysis model is established. Based on the error distributions, the Monte Carlo method is used to analyze the relationship between the comprehensive error and the change of the center coordinate of the circle-target image. The method can further guide the error allocation, optimize control of the factors that have the greatest influence on the comprehensive error, and improve the measurement accuracy of the optical-axis parallelism digital calibration instrument.
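
    An illustrative Monte Carlo error budget in the spirit of the analysis above: component errors are drawn from assumed distributions and propagated to the centre coordinate of the target image; all distributions and magnitudes are invented.

```python
# Illustrative Monte Carlo error budget (all distributions and magnitudes
# invented): sample component errors, propagate them to the centre
# coordinate of the circle-target image, and report a 95% bound.
import numpy as np

rng = np.random.default_rng(9)
n = 100_000
focal = 300.0                                # mm, hypothetical focal length
tilt = rng.normal(0.0, 0.05e-3, n)           # rad, optical-axis tilt error
sensor = rng.uniform(-2e-3, 2e-3, n)         # mm, detector placement error

center = focal * np.tan(tilt) + sensor       # centre-coordinate shift (mm)
parallelism = np.arctan(center / focal) * 1e3    # equivalent axis error (mrad)
print("95% comprehensive error: +/-",
      round(float(np.percentile(np.abs(parallelism), 95)), 4), "mrad")
```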

  3. Statistical hadronization and microcanonical ensemble

    DOE PAGES

    Becattini, F.; Ferroni, L.

    2004-01-01

    We present a Monte Carlo calculation of the microcanonical ensemble of the ideal hadron-resonance gas including all known states up to a mass of 1.8 GeV, taking into account quantum statistics. The computing method is a development of a previous one based on a Metropolis Monte Carlo algorithm, with the grand-canonical limit of the multi-species multiplicity distribution as the proposal matrix. The microcanonical average multiplicities of the various hadron species are found to converge to the canonical ones for moderately low values of the total energy. This algorithm opens the way for event generators based on the statistical hadronization model.

  4. Efficient approach to the free energy of crystals via Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Navascués, G.; Velasco, E.

    2015-08-01

    We present a general approach to compute the absolute free energy of a system of particles with constrained center of mass based on the Monte Carlo thermodynamic coupling integral method. The version of the Frenkel-Ladd approach [J. Chem. Phys. 81, 3188 (1984)], 10.1063/1.448024, which uses a harmonic coupling potential, is recovered. Also, we propose a different choice, based on one-particle square-well coupling potentials, which is much simpler, more accurate, and free from some of the difficulties of the Frenkel-Ladd method. We apply our approach to hard spheres and compare with the standard harmonic method.

  5. A Monte-Carlo method which is not based on Markov chain algorithm, used to study electrostatic screening of ion potential

    NASA Astrophysics Data System (ADS)

    Šantić, Branko; Gracin, Davor

    2017-12-01

    A new simple Monte Carlo method is introduced for the study of electrostatic screening by surrounding ions. The proposed method is not based on the generally used Markov chain method for sample generation. Each sample is pristine and there is no correlation with other samples. As the main novelty, pairs of ions are gradually added to a sample provided that the energy of each ion is within the boundaries determined by the temperature and the size of the ions. The proposed method provides reliable results, as demonstrated by the screening of an ion in plasma and in water.

  6. Concept of Fractal Dimension use of Multifractal Cloud Liquid Models Based on Real Data as Input to Monte Carlo Radiation Models

    NASA Technical Reports Server (NTRS)

    Wiscombe, W.

    1999-01-01

    The purpose of this paper is to discuss the concept of fractal dimension; multifractal statistics as an extension of this; the use of simple multifractal statistics (power spectrum, structure function) to characterize cloud liquid water data; and the use of multifractal cloud liquid water models based on real data as input to Monte Carlo radiation models of shortwave radiation transfer in 3D clouds, together with the consequences of this in two areas: the design of aircraft field programs to measure cloud absorptance; and the explanation of the famous "Landsat scale break" in measured radiance.

  7. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.

  8. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulations.

  9. Kinetic Monte Carlo Simulation of Cation Diffusion in Low-K Ceramics

    NASA Technical Reports Server (NTRS)

    Good, Brian

    2013-01-01

    Low thermal conductivity (low-K) ceramic materials are of interest to the aerospace community for use as the thermal barrier component of coating systems for turbine engine components. In particular, zirconia-based materials exhibit both low thermal conductivity and structural stability at high temperature, making them suitable for such applications. Because creep is one of the potential failure modes, and because diffusion is a mechanism by which creep takes place, we have performed computer simulations of cation diffusion in a variety of zirconia-based low-K materials. The kinetic Monte Carlo simulation method is an alternative to the more widely known molecular dynamics (MD) method. It is designed to study "infrequent-event" processes, such as diffusion, for which MD simulation can be highly inefficient. We describe the results of kinetic Monte Carlo computer simulations of cation diffusion in several zirconia-based materials, specifically, zirconia doped with Y, Gd, Nb and Yb. Diffusion paths are identified, and migration energy barriers are obtained from density functional calculations and from the literature. We present results on the temperature dependence of the diffusivity, and on the effects of the presence of oxygen vacancies in cation diffusion barrier complexes as well.
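
    A minimal residence-time (BKL-type) kinetic Monte Carlo sketch for a single cation hopping along a 1D chain of sites; the migration barriers and attempt frequency are invented placeholders, not the density-functional values used in the study.

```python
# Minimal residence-time kinetic Monte Carlo for one hopping cation on a
# 1D chain; barriers and attempt frequency are invented placeholders.
import numpy as np

rng = np.random.default_rng(10)
kT = 0.1034                                # eV, roughly 1200 K
barriers = {"left": 1.1, "right": 0.9}     # eV, hypothetical migration barriers
nu0 = 1e13                                 # 1/s, attempt frequency (assumption)

rates = {m: nu0 * np.exp(-e / kT) for m, e in barriers.items()}
total = sum(rates.values())

site, t = 0, 0.0
for _ in range(10_000):
    # pick an event with probability proportional to its rate
    move = "left" if rng.random() * total < rates["left"] else "right"
    site += -1 if move == "left" else 1
    # advance the clock by an exponentially distributed residence time
    t += -np.log(1.0 - rng.random()) / total

print(f"net displacement: {site} sites in {t:.3e} s")
```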

  10. Response Matrix Monte Carlo for electron transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballinger, C.T.; Nielsen, D.E. Jr.; Rathkopf, J.A.

    1990-11-01

    A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions, but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. The combined effect of many collisions is modeled, as in condensed history, except that it is precalculated via an analog Monte Carlo simulation. This avoids the scattering kernel assumptions associated with condensed history methods. Results show good agreement between the RMMC method and analog Monte Carlo. 11 refs., 7 figs., 1 tab.

  11. Verification and Validation of Monte Carlo n-Particle Code 6 (MCNP6) with Neutron Protection Factor Measurements of an Iron Box

    DTIC Science & Technology

    2014-03-27

    Verification and Validation of Monte Carlo N-Particle Code 6 (MCNP6) with Neutron Protection Factor Measurements of an Iron Box. Thesis presented to the Faculty, Department of Engineering... (AFIT-ENP-14-M-05). Distribution Statement A: approved for public release; distribution unlimited.

  12. Study of the Transition Flow Regime using Monte Carlo Methods

    NASA Technical Reports Server (NTRS)

    Hassan, H. A.

    1999-01-01

    This NASA Cooperative Agreement presents a study of the transition flow regime using Monte Carlo methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.

  13. MUFFSgenMC: An Open Source MUon Flexible Framework for Spectral GENeration for Monte Carlo Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatzidakis, Stylianos; Greulich, Christopher

    A cosmic ray Muon Flexible Framework for Spectral GENeration for Monte Carlo Applications (MUFFSgenMC) has been developed to support state-of-the-art cosmic ray muon tomographic applications. The flexible framework allows for easy and fast creation of source terms for popular Monte Carlo applications like GEANT4 and MCNP. This code framework simplifies the process of simulations used for cosmic ray muon tomography.

  14. Rapid Monte Carlo Simulation of Gravitational Wave Galaxies

    NASA Astrophysics Data System (ADS)

    Breivik, Katelyn; Larson, Shane L.

    2015-01-01

    With the detection of gravitational waves on the horizon, astrophysical catalogs produced by gravitational wave observatories can be used to characterize the populations of sources and validate different galactic population models. Efforts to simulate gravitational wave catalogs and source populations generally focus on population synthesis models that require extensive time and computational power to produce a single simulated galaxy. Monte Carlo simulations of gravitational wave source populations can also be used to generate observation catalogs from the gravitational wave source population. Monte Carlo simulations have the advantages of flexibility and speed, enabling rapid galactic realizations as a function of galactic binary parameters with less time and computational resources required. We present a Monte Carlo method for rapid galactic simulations of gravitational wave binary populations.

  15. First Monte Carlo analysis of fragmentation functions from single-inclusive e+e- annihilation

    DOE PAGES

    Sato, Nobuo; Ethier, J. J.; Melnitchouk, W.; ...

    2016-12-02

    Here, we perform the first iterative Monte Carlo (IMC) analysis of fragmentation functions constrained by all available data from single-inclusive e+e- annihilation into pions and kaons. The IMC method eliminates potential bias in traditional analyses based on single fits introduced by fixing parameters not well constrained by the data, and provides a statistically rigorous determination of uncertainties. Our analysis reveals specific features of the fragmentation functions obtained with the new IMC methodology compared with those from previous analyses, especially for light quarks and for strange-quark fragmentation to kaons.

  16. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkimaki, Konsta; Hirvijoki, E.; Terava, J.

    Here, we report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell–Jüttner statistics. The implementation is based on the Beliaev–Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space.

  17. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    DOE PAGES

    Sarkimaki, Konsta; Hirvijoki, E.; Terava, J.

    2017-10-12

    Here, we report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell–Jüttner statistics. The implementation is based on the Beliaev–Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space.

  18. Skin fluorescence model based on the Monte Carlo technique

    NASA Astrophysics Data System (ADS)

    Churmakov, Dmitry Y.; Meglinski, Igor V.; Piletsky, Sergey A.; Greenhalgh, Douglas A.

    2003-10-01

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores, which follows the packing of collagen fibers, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. The simulation results suggest that the autofluorescence is significantly suppressed in the NIR spectral region, while the fluorescence of a sensor layer embedded in the epidermis is localized at the adjusted depth. The model is also able to simulate the skin fluorescence spectra.

  19. Characterizing a proton beam scanning system for Monte Carlo dose calculation in patients

    NASA Astrophysics Data System (ADS)

    Grassberger, C.; Lomax, Anthony; Paganetti, H.

    2015-01-01

    The presented work has two goals. First, to demonstrate the feasibility of accurately characterizing a proton radiation field at treatment head exit for Monte Carlo dose calculation of active scanning patient treatments. Second, to show that this characterization can be done based on measured depth dose curves and spot size alone, without consideration of the exact treatment head delivery system. This is demonstrated through calibration of a Monte Carlo code to the specific beam lines of two institutions, Massachusetts General Hospital (MGH) and Paul Scherrer Institute (PSI). Comparison of simulations modeling the full treatment head at MGH to ones employing a parameterized phase space of protons at treatment head exit reveals the adequacy of the method for patient simulations. The secondary particle production in the treatment head is typically below 0.2% of primary fluence, except for low-energy electrons (<0.6 MeV for 230 MeV protons), whose contribution to skin dose is negligible. However, there is significant difference between the two methods in the low-dose penumbra, making full treatment head simulations necessary to study out-of-field effects such as secondary cancer induction. To calibrate the Monte Carlo code to measurements in a water phantom, we use an analytical Bragg peak model to extract the range-dependent energy spread at the two institutions, as this quantity is usually not available through measurements. Comparison of the measured with the simulated depth dose curves demonstrates agreement within 0.5 mm over the entire energy range. Subsequently, we simulate three patient treatments with varying anatomical complexity (liver, head and neck and lung) to give an example how this approach can be employed to investigate site-specific discrepancies between treatment planning system and Monte Carlo simulations.

  20. A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses

    PubMed Central

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria

    2013-01-01

    Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is therefore an excellent tool for multi-scale simulations. PMID:23894367

  1. Characterizing a Proton Beam Scanning System for Monte Carlo Dose Calculation in Patients

    PubMed Central

    Grassberger, C; Lomax, Tony; Paganetti, H

    2015-01-01

    The presented work has two goals. First, to demonstrate the feasibility of accurately characterizing a proton radiation field at treatment head exit for Monte Carlo dose calculation of active scanning patient treatments. Second, to show that this characterization can be done based on measured depth dose curves and spot size alone, without consideration of the exact treatment head delivery system. This is demonstrated through calibration of a Monte Carlo code to the specific beam lines of two institutions, Massachusetts General Hospital (MGH) and Paul Scherrer Institute (PSI). Comparison of simulations modeling the full treatment head at MGH to ones employing a parameterized phase space of protons at treatment head exit reveals the adequacy of the method for patient simulations. The secondary particle production in the treatment head is typically below 0.2% of primary fluence, except for low-energy electrons (<0.6 MeV for 230 MeV protons), whose contribution to skin dose is negligible. However, there is significant difference between the two methods in the low-dose penumbra, making full treatment head simulations necessary to study out-of-field effects such as secondary cancer induction. To calibrate the Monte Carlo code to measurements in a water phantom, we use an analytical Bragg peak model to extract the range-dependent energy spread at the two institutions, as this quantity is usually not available through measurements. Comparison of the measured with the simulated depth dose curves demonstrates agreement within 0.5 mm over the entire energy range. Subsequently, we simulate three patient treatments with varying anatomical complexity (liver, head and neck and lung) to give an example how this approach can be employed to investigate site-specific discrepancies between treatment planning system and Monte Carlo simulations. PMID:25549079

  2. COMPARISON OF MONTE CARLO METHODS FOR NONLINEAR RADIATION TRANSPORT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    W. R. MARTIN; F. B. BROWN

    2001-03-01

    Five Monte Carlo methods for solving the nonlinear thermal radiation transport equations are compared. The methods include the well-known Implicit Monte Carlo method (IMC) developed by Fleck and Cummings, an alternative to IMC developed by Carter and Forest, an "exact" method recently developed by Ahrens and Larsen, and two methods recently proposed by Martin and Brown. The five Monte Carlo methods are developed and applied to the radiation transport equation in a medium assuming local thermodynamic equilibrium. Conservation of energy is derived and used to define appropriate material energy update equations for each of the methods. Details of the Monte Carlo implementation are presented, both for the random walk simulation and the material energy update. Simulation results for all five methods are obtained for two infinite medium test problems and a 1-D test problem, all of which have analytical solutions. Conclusions regarding the relative merits of the various schemes are presented.

  3. The use of Monte Carlo simulations for accurate dose determination with thermoluminescence dosemeters in radiation therapy beams.

    PubMed

    Mobit, P

    2002-01-01

    The energy responses of LiF-TLDs irradiated in megavoltage electron and photon beams have been determined experimentally by many investigators over the past 35 years, but the results vary considerably. General cavity theory has been used to model some of the experimental findings, but the predictions of these cavity theories differ from each other and from measurements by more than 13%. Recently, two groups of investigators using Monte Carlo simulations and careful experimental techniques showed that the energy response of 1 mm or 2 mm thick LiF-TLDs irradiated by megavoltage photon and electron beams is not more than 5% less than unity for low-Z phantom materials like water or Perspex. However, when the depth of irradiation is significantly different from dmax and the TLD size is more than 5 mm, the energy response is up to 12% less than unity for incident electron beams. Monte Carlo simulations of some of the experiments reported in the literature showed that some of the contradictory experimental results are reproducible with Monte Carlo simulations. Monte Carlo simulations show that the energy response of LiF-TLDs depends on the size of the detector used in electron beams, the depth of irradiation and the incident electron energy. Other differences can be attributed to absolute dose determination and the precision of the TL technique. Monte Carlo simulations have also been used to evaluate some of the published general cavity theories. The results show that some of the parameters used to evaluate Burlin's general cavity theory are wrong by a factor of 3. Despite this, the estimation of the energy response for most clinical situations using Burlin's cavity equation agrees with Monte Carlo simulations within 1%.

  4. [Benchmark experiment to verify radiation transport calculations for dosimetry in radiation therapy].

    PubMed

    Renner, Franziska

    2016-09-01

    Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide. Copyright © 2015. Published by Elsevier GmbH.

  5. TASEP of interacting particles of arbitrary size

    NASA Astrophysics Data System (ADS)

    Narasimhan, S. L.; Baumgaertner, A.

    2017-10-01

    A mean-field description of the stationary state behaviour of interacting k-mers performing totally asymmetric exclusion processes (TASEP) on an open lattice segment is presented employing the discrete Takahashi formalism. It is shown how the maximal current and the phase diagram, including triple-points, depend on the strength of repulsive and attractive interactions. We compare the mean-field results with Monte Carlo simulations of three types of interacting k-mers: monomers, dimers and trimers. (a) We find that the Takahashi estimates of the maximal current agree quantitatively with those of the Monte Carlo simulation in the absence of interaction as well as in both the attractive and the strongly repulsive regimes. However, theory and Monte Carlo results disagree in the range of weak repulsion, where the Takahashi estimates of the maximal current show a monotonic behaviour, whereas the Monte Carlo data show a peaking behaviour. It is argued that the peaking of the maximal current is due to a correlated motion of the particles. In the limit of very strong repulsion the theory predicts a universal behaviour: the maximal currents of k-mers correspond to those of non-interacting (k+1)-mers; (b) Monte Carlo estimates of the triple-points for monomers, dimers and trimers show an interesting general behaviour: (i) the phase boundaries α* and β* for the entry and exit currents, respectively, show maxima for α* as a function of interaction strength, whereas β* exhibits minima at the same strengths; (ii) in the attractive regime, however, the trend is reversed (β* > α*). The Takahashi estimates of the triple-point for monomers show a similar trend to the Monte Carlo data except for the peaking of α*; for dimers and trimers, however, the Takahashi estimates show the opposite trend compared to the Monte Carlo data.
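
    As a concrete reference point for the dynamics described above, the following is a minimal sketch (not the authors' code) of a plain monomer TASEP with open boundaries and random-sequential updates; the lattice size, the entry/exit rates alpha and beta, and the sweep counts are illustrative assumptions, and k-mer interactions are omitted.

```python
import numpy as np

def tasep_current(L=100, alpha=0.3, beta=0.3, sweeps=20_000, burn_in=5_000, seed=1):
    """Random-sequential-update TASEP of monomers on an open lattice segment.

    Returns the average current measured across the middle bond."""
    rng = np.random.default_rng(seed)
    lattice = np.zeros(L, dtype=int)
    hops, measured = 0, 0
    mid = L // 2
    for sweep in range(sweeps):
        for _ in range(L + 1):            # one sweep = L+1 attempted moves
            site = rng.integers(-1, L)    # -1 denotes the entry reservoir
            if site == -1:                # injection at the left boundary
                if lattice[0] == 0 and rng.random() < alpha:
                    lattice[0] = 1
            elif site == L - 1:           # extraction at the right boundary
                if lattice[-1] == 1 and rng.random() < beta:
                    lattice[-1] = 0
            elif lattice[site] == 1 and lattice[site + 1] == 0:
                lattice[site], lattice[site + 1] = 0, 1   # hard-core hop
                if site == mid and sweep >= burn_in:
                    hops += 1
        if sweep >= burn_in:
            measured += 1
    return hops / measured

print(tasep_current())   # ~0.21, i.e. alpha*(1-alpha) for alpha = beta = 0.3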

  6. RNA folding kinetics using Monte Carlo and Gillespie algorithms.

    PubMed

    Clote, Peter; Bayegan, Amir H

    2018-04-01

    RNA secondary structure folding kinetics is known to be important for the biological function of certain processes, such as the hok/sok system in E. coli. Although linear algebra provides an exact computational solution of secondary structure folding kinetics with respect to the Turner energy model for tiny (∼20 nt) RNA sequences, the folding kinetics for larger sequences can only be approximated by binning structures into macrostates in a coarse-grained model, or by repeatedly simulating secondary structure folding with either the Monte Carlo algorithm or the Gillespie algorithm. Here we investigate the relation between the Monte Carlo algorithm and the Gillespie algorithm. We prove that asymptotically, the expected time for a K-step trajectory of the Monte Carlo algorithm is equal to ⟨D⟩ times that of the Gillespie algorithm, where ⟨D⟩ denotes the Boltzmann expected network degree. If the network is regular (i.e. every node has the same degree), then the mean first passage time (MFPT) computed by the Monte Carlo algorithm is equal to the MFPT computed by the Gillespie algorithm multiplied by ⟨D⟩; however, this is not true for non-regular networks. In particular, RNA secondary structure folding kinetics, as computed by the Monte Carlo algorithm, is not equal to the folding kinetics, as computed by the Gillespie algorithm, although the mean first passage times are roughly correlated. Simulation software for RNA secondary structure folding according to the Monte Carlo and Gillespie algorithms is publicly available, as is our software to compute the expected degree of the network of secondary structures of a given RNA sequence; see http://bioinformatics.bc.edu/clote/RNAexpNumNbors .
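
    The contrast between the two simulation schemes can be illustrated on a toy landscape; the sketch below is not the authors' software, and the three-state "folding" network, its energies (in kT units), and the convention that each attempted Monte Carlo step costs one time unit are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical 3-state folding landscape: energies (kT units) and allowed moves.
energies = {"open": 0.0, "hairpin": -1.0, "native": -3.0}
neighbors = {"open": ["hairpin"], "hairpin": ["open", "native"], "native": ["hairpin"]}

def metropolis_rate(x, y):
    """Metropolis rate for the move x -> y."""
    return min(1.0, np.exp(-(energies[y] - energies[x])))

def gillespie_fpt(start="open", target="native", seed=0):
    """Gillespie first-passage time: rate-weighted jumps, exponential waits."""
    rng = np.random.default_rng(seed)
    x, t = start, 0.0
    while x != target:
        rates = np.array([metropolis_rate(x, y) for y in neighbors[x]])
        total = rates.sum()
        t += rng.exponential(1.0 / total)              # waiting time
        x = rng.choice(neighbors[x], p=rates / total)  # rate-weighted jump
    return t

def monte_carlo_fpt(start="open", target="native", seed=0):
    """Rejection Monte Carlo: uniform neighbor proposal, one time unit per step."""
    rng = np.random.default_rng(seed)
    x, t = start, 0.0
    while x != target:
        t += 1.0                                       # each attempt costs one unit
        y = neighbors[x][rng.integers(len(neighbors[x]))]
        if rng.random() < metropolis_rate(x, y):       # Metropolis acceptance
            x = y
    return t

mfpt_g = np.mean([gillespie_fpt(seed=s) for s in range(2000)])
mfpt_m = np.mean([monte_carlo_fpt(seed=s) for s in range(2000)])
print(f"Gillespie MFPT ~ {mfpt_g:.2f}, Monte Carlo MFPT ~ {mfpt_m:.2f}")
```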

  7. PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation

    NASA Astrophysics Data System (ADS)

    España, S; Herraiz, J L; Vicente, E; Vaquero, J J; Desco, M; Udias, J M

    2009-03-01

    Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.

  8. Generating moment matching scenarios using optimization techniques

    DOE PAGES

    Mehrotra, Sanjay; Papp, Dávid

    2013-05-16

    An optimization based method is proposed to generate moment matching scenarios for numerical integration and its use in stochastic programming. The main advantage of the method is its flexibility: it can generate scenarios matching any prescribed set of moments of the underlying distribution rather than matching all moments up to a certain order, and the distribution can be defined over an arbitrary set. This allows for a reduction in the number of scenarios and allows the scenarios to be better tailored to the problem at hand. The method is based on a semi-infinite linear programming formulation of the problem that is shown to be solvable with polynomial iteration complexity. A practical column generation method is implemented. The column generation subproblems are polynomial optimization problems; however, they need not be solved to optimality. It is found that the columns in the column generation approach can be efficiently generated by random sampling. The number of scenarios generated matches a lower bound of Tchakaloff's. The rate of convergence of the approximation error is established for continuous integrands, and an improved bound is given for smooth integrands. Extensive numerical experiments are presented in which variants of the proposed method are compared to Monte Carlo and quasi-Monte Carlo methods on both numerical integration problems and stochastic optimization problems. The benefits of being able to match any prescribed set of moments, rather than all moments up to a certain order, are also demonstrated using optimization problems with 100-dimensional random vectors. Here, empirical results show that the proposed approach outperforms Monte Carlo and quasi-Monte Carlo based approaches on the tested problems.
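
    The paper's semi-infinite linear program and column-generation machinery does not fit in a short sketch, but the basic notion of moment-matched scenarios can be illustrated with a much simpler, standard construction: draw samples and apply an affine transform so that the first two sample moments match prescribed targets exactly. The function name and target values below are hypothetical.

```python
import numpy as np

def moment_matched_scenarios(mean, cov, n, seed=0):
    """Draw n scenarios, then affinely transform them so the sample mean and
    sample covariance match `mean` and `cov` exactly (first two moments only)."""
    rng = np.random.default_rng(seed)
    d = len(mean)
    x = rng.standard_normal((n, d))
    x -= x.mean(axis=0)                         # center: sample mean -> 0
    s = np.cov(x, rowvar=False, bias=True)      # current sample covariance
    # Whiten with the sample covariance, then color with the target covariance.
    l_s = np.linalg.cholesky(s)
    l_t = np.linalg.cholesky(cov)
    x = x @ np.linalg.inv(l_s).T @ l_t.T
    return x + np.asarray(mean)

scen = moment_matched_scenarios([1.0, -2.0], [[2.0, 0.3], [0.3, 1.0]], n=500)
print(scen.mean(axis=0))                        # [1, -2] up to rounding
print(np.cov(scen, rowvar=False, bias=True))    # the target covariance
```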

  9. Effect of the multiple scattering of electrons in Monte Carlo simulation of LINACS.

    PubMed

    Vilches, Manuel; García-Pareja, Salvador; Guerrero, Rafael; Anguiano, Marta; Lallena, Antonio M

    2008-01-01

    Results obtained from Monte Carlo simulations of the transport of electrons in thin slabs of dense material media and air slabs with different widths are analyzed. Various general purpose Monte Carlo codes have been used: PENELOPE, GEANT3, GEANT4, EGSNRC, MCNPX. Non-negligible differences between the angular and radial distributions after the slabs have been found. The effects of these differences on the depth doses measured in water are also discussed.

  10. Monte Carlos of the new generation: status and progress

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frixione, Stefano

    2005-03-22

    Standard parton shower Monte Carlos are designed to give reliable descriptions of low-pT physics. In the very high-energy regime of modern colliders, this may lead to largely incorrect predictions of the basic reaction processes. This motivated the recent theoretical efforts aimed at improving Monte Carlos through the inclusion of matrix elements computed beyond the leading order in QCD. I briefly review the progress made, and discuss bottom production at the Tevatron.

  11. Monte Carlo simulation of aorta autofluorescence

    NASA Astrophysics Data System (ADS)

    Kuznetsova, A. A.; Pushkareva, A. E.

    2016-08-01

    Results of numerical simulation of aorta autofluorescence by the Monte Carlo method are reported. Two states of the aorta, normal and with atherosclerotic lesions, are studied. A model of the studied tissue is developed on the basis of information about its optical, morphological, and physico-chemical properties. It is shown that the data obtained by numerical Monte Carlo simulation are in good agreement with experimental results, indicating the adequacy of the developed model of aorta autofluorescence.

  12. WARP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergmann, Ryan M.; Rowland, Kelly L.

    2017-04-12

    WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed at UC Berkeley to efficiently execute on NVIDIA graphics processing unit (GPU) platforms. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, that very few physical and geometrical simplifications are applied. WARP is able to calculate multiplication factors, neutron flux distributions (in both space and energy), and fission source distributions for time-independent neutron transport problems. It can run in either criticality or fixed-source mode, but fixed-source mode is currently not robust, optimized, or maintained in the newest version. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. The goal of developing WARP is to investigate algorithms that can grow into a full-featured, continuous energy, Monte Carlo neutron transport code that is accelerated by running on GPUs. The crux of the effort is to make Monte Carlo calculations faster while producing accurate results. Modern supercomputers are commonly being built with GPU coprocessor cards in their nodes to increase their computational efficiency and performance. GPUs execute efficiently on data-parallel problems, but most CPU codes, including those for Monte Carlo neutral particle transport, are predominantly task-parallel. WARP uses a data-parallel neutron transport algorithm to take advantage of the computing power GPUs offer.

  13. Combined experimental and Monte Carlo verification of brachytherapy plans for vaginal applicators

    NASA Astrophysics Data System (ADS)

    Sloboda, Ron S.; Wang, Ruqing

    1998-12-01

    Dose rates in a phantom around a shielded and an unshielded vaginal applicator containing Selectron low-dose-rate sources were determined by experiment and Monte Carlo simulation. Measurements were performed with thermoluminescent dosimeters in a white polystyrene phantom using an experimental protocol geared for precision. Calculations for the same set-up were done using a version of the EGS4 Monte Carlo code system modified for brachytherapy applications into which a new combinatorial geometry package developed by Bielajew was recently incorporated. Measured dose rates agree with Monte Carlo estimates to within 5% (1 SD) for the unshielded applicator, while highlighting some experimental uncertainties for the shielded applicator. Monte Carlo calculations were also done to determine a value for the effective transmission of the shield required for clinical treatment planning, and to estimate the dose rate in water at points in axial and sagittal planes transecting the shielded applicator. Comparison with dose rates generated by the planning system indicates that agreement is better than 5% (1 SD) at most positions. The precision thermoluminescent dosimetry protocol and modified Monte Carlo code are effective complementary tools for brachytherapy applicator dosimetry.

  14. Monte Carlo modelling the dosimetric effects of electrode material on diamond detectors.

    PubMed

    Baluti, Florentina; Deloar, Hossain M; Lansley, Stuart P; Meyer, Juergen

    2015-03-01

    Diamond detectors for radiation dosimetry were modelled using the EGSnrc Monte Carlo code to investigate the influence of electrode material and detector orientation on the absorbed dose. The small dimensions of the electrode/diamond/electrode detector structure required very thin voxels and the use of non-standard DOSXYZnrc Monte Carlo model parameters. The interface phenomena were investigated by simulating a 6 MV beam and detectors with different electrode materials, namely Al, Ag, Cu and Au, with thicknesses of 0.1 µm for the electrodes and 0.1 mm for the diamond, in both perpendicular and parallel detector orientations with regard to the incident beam. The smallest perturbations were observed for the parallel detector orientation and Al electrodes (Z = 13). In summary, the EGSnrc Monte Carlo code is well suited for modelling small detector geometries. The Monte Carlo model developed is a useful tool to investigate the dosimetric effects caused by different electrode materials. To minimise perturbations caused by the detector electrodes, it is recommended that the electrodes be made from a low-atomic-number material and placed parallel to the beam direction.

  15. Parallel Monte Carlo transport modeling in the context of a time-dependent, three-dimensional multi-physics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Procassini, R.J.

    1997-12-31

    The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.

  16. The structure of liquid water by polarized neutron diffraction and reverse Monte Carlo modelling.

    PubMed

    Temleitner, László; Pusztai, László; Schweika, Werner

    2007-08-22

    The coherent static structure factor of water has been investigated by polarized neutron diffraction. Polarization analysis allows us to separate the huge incoherent scattering background from hydrogen and to obtain high quality data of the coherent scattering from four different mixtures of liquid H(2)O and D(2)O. The information obtained by the variation of the scattering contrast confines the configurational space of water and is used by the reverse Monte Carlo technique to model the total structure factors. Structural characteristics have been calculated directly from the resulting sets of particle coordinates. Consistency with existing partial pair correlation functions, derived without the application of polarized neutrons, was checked by incorporating them into our reverse Monte Carlo calculations. We also performed Monte Carlo simulations of a hard sphere system, which provides an accurate estimate of the information content of the measured data. It is shown that the present combination of polarized neutron scattering and reverse Monte Carlo structural modelling is a promising approach towards a detailed understanding of the microscopic structure of water.

  17. Monte Carlo methods for multidimensional integration for European option pricing

    NASA Astrophysics Data System (ADS)

    Todorov, V.; Dimov, I. T.

    2016-10-01

    In this paper, we illustrate examples of highly accurate Monte Carlo and quasi-Monte Carlo methods for multiple integrals related to the evaluation of European style options. The idea is that the value of the option is formulated in terms of the expectation of some random variable; then the average of independent samples of this random variable is used to estimate the value of the option. First we obtain an integral representation for the value of the option using the risk-neutral valuation formula. Then, with an appropriate change of variables, we obtain a multidimensional integral over the unit hypercube of the corresponding dimensionality. We then compare a specific type of lattice rule with one of the best low-discrepancy sequences, that of Sobol, for numerical integration. Quasi-Monte Carlo methods are compared with Adaptive and Crude Monte Carlo techniques for solving the problem. The four approaches are completely different, so it is a question of interest which of them outperforms the others for evaluating multidimensional integrals in finance. Some of the advantages and disadvantages of the developed algorithms are discussed.
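
    A minimal crude Monte Carlo version of the risk-neutral valuation step described above, for a European call under geometric Brownian motion, might look like the sketch below; the parameters are illustrative, and the lattice-rule, Sobol and adaptive variants compared in the paper are not shown.

```python
import numpy as np

def european_call_mc(s0, k, r, sigma, t, n_paths=1_000_000, seed=42):
    """Crude Monte Carlo price of a European call under risk-neutral GBM."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal stock price under the risk-neutral measure.
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)
    disc = np.exp(-r * t)                       # discount factor
    price = disc * payoff.mean()
    stderr = disc * payoff.std(ddof=1) / np.sqrt(n_paths)
    return price, stderr

price, se = european_call_mc(s0=100, k=100, r=0.05, sigma=0.2, t=1.0)
print(f"MC price = {price:.4f} +/- {1.96 * se:.4f}")  # Black-Scholes: 10.4506
```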

  18. MO-FG-BRA-01: 4D Monte Carlo Simulations for Verification of Dose Delivered to a Moving Anatomy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gholampourkashi, S; Cygler, J E.; The Ottawa Hospital Cancer Centre, Ottawa, ON

    Purpose: To validate 4D Monte Carlo (MC) simulations of dose delivery by an Elekta Agility linear accelerator to a moving phantom. Methods: Monte Carlo simulations were performed using the 4DdefDOSXYZnrc/EGSnrc user code, which samples a new geometry for each incident particle and calculates the dose in a continuously moving anatomy. A Quasar respiratory motion phantom with a lung insert containing a 3 cm diameter tumor was used for dose measurements on an Elekta Agility linac with the phantom in stationary and moving states. Dose to the center of the tumor was measured using calibrated EBT3 film and the RADPOS 4D dosimetry system. A VMAT plan covering the tumor was created on the static CT scan of the phantom using Monaco V.5.10.02. A validated BEAMnrc model of our Elekta Agility linac was used for Monte Carlo simulations on stationary and moving anatomies. To compare the planned and delivered doses, linac log files recorded during measurements were used for the simulations. For 4D simulations, deformation vectors that modeled the rigid translation of the lung insert were generated as input to the 4DdefDOSXYZnrc code, as well as the phantom motion trace recorded with RADPOS during the measurements. Results: Monte Carlo simulations and film measurements were found to agree within 2 mm/2% for 97.7% of points in the film in the static phantom and 95.5% in the moving phantom. Dose values based on film and RADPOS measurements are within 2% of each other and within 2σ of experimental uncertainties with respect to simulations. Conclusion: Our 4D Monte Carlo simulation using the defDOSXYZnrc code accurately calculates dose delivered to a moving anatomy. Future work will focus on further investigation of VMAT delivery on a moving phantom to improve the agreement between simulation and measurements, as well as establishing the accuracy of our method in a deforming anatomy. This work was supported by the Ontario Consortium of Adaptive Interventions in Radiation Oncology (OCAIRO), funded by the Ontario Research Fund Research Excellence program.

  19. Development of the MCNPX depletion capability: A Monte Carlo linked depletion method that automates the coupling between MCNPX and CINDER90 for high fidelity burnup calculations

    NASA Astrophysics Data System (ADS)

    Fensin, Michael Lorne

    Monte Carlo-linked depletion methods have gained recent interest due to the ability to more accurately model complex 3-dimensional geometries and better track the evolution of temporal nuclide inventory by simulating the actual physical process utilizing continuous energy coefficients. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a high-fidelity, completely self-contained Monte-Carlo-linked depletion capability in a well established, widely accepted Monte Carlo radiation transport code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross section data permits in order to achieve a highly accurate temporal nuclide inventory solution. This work chronicles relevant nuclear history, surveys current methodologies of depletion theory, details the methodology applied in MCNPX and provides benchmark results for three independent OECD/NEA benchmarks. Relevant nuclear history, from the Oklo reactor two billion years ago to the current major United States nuclear fuel cycle development programs, is addressed in order to supply the motivation for the development of this technology. A survey of current reaction rate and temporal nuclide inventory techniques is then provided to offer justification for the depletion strategy applied within MCNPX. The MCNPX depletion strategy is then dissected and each code feature is detailed, chronicling the methodology development from the original linking of MONTEBURNS and MCNP to the most recent public release of the integrated capability (MCNPX 2.6.F). Calculation results of the OECD/NEA Phase IB benchmark, the H. B. Robinson benchmark and the OECD/NEA Phase IVB benchmark are then provided. The acceptable results of these calculations offer sufficient confidence in the predictive capability of the MCNPX depletion method. This capability sets up a significant foundation, in a well established and supported radiation transport code, for further development of a Monte Carlo-linked depletion methodology, which is essential to the future development of advanced reactor technologies that exceed the limitations of current deterministic-based methods.

  20. Uncertainty Optimization Applied to the Monte Carlo Analysis of Planetary Entry Trajectories

    NASA Technical Reports Server (NTRS)

    Olds, John; Way, David

    2001-01-01

    Recently, strong evidence of liquid water under the surface of Mars and a meteorite that might contain ancient microbes have renewed interest in Mars exploration. With this renewed interest, NASA plans to send spacecraft to Mars approx. every 26 months. These future spacecraft will return higher-resolution images, make precision landings, engage in longer-ranging surface maneuvers, and even return Martian soil and rock samples to Earth. Future robotic missions and any human missions to Mars will require precise entries to ensure safe landings near science objectives and pre-deployed assets. Potential sources of water and other interesting geographic features are often located near hazards, such as within craters or along canyon walls. In order for more accurate landings to be made, spacecraft entering the Martian atmosphere need to use lift to actively control the entry. This active guidance results in much smaller landing footprints. Planning for these missions will depend heavily on Monte Carlo analysis. Monte Carlo trajectory simulations have been used with a high degree of success in recent planetary exploration missions. These analyses ascertain the impact of off-nominal conditions during a flight and account for uncertainty. Uncertainties generally stem from limitations in manufacturing tolerances, measurement capabilities, analysis accuracies, and environmental unknowns. Thousands of off-nominal trajectories are simulated by randomly dispersing uncertainty variables and collecting statistics on forecast variables. The dependability of Monte Carlo forecasts, however, is limited by the accuracy and completeness of the assumed uncertainties. This is because Monte Carlo analysis is a forward-driven problem, beginning with the input uncertainties and proceeding to the forecast outputs. It lacks a mechanism to affect or alter the uncertainties based on the forecast results. If the results are unacceptable, the current practice is to use an iterative, trial-and-error approach to reconcile discrepancies. Therefore, an improvement to the Monte Carlo analysis is needed that will allow the problem to be worked in reverse. In this way, the largest allowable dispersions that achieve the required mission objectives can be determined quantitatively.
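
    The forward Monte Carlo dispersion loop described here can be sketched in a few lines; everything below, from the input uncertainties to the linear surrogate standing in for the trajectory simulation, is entirely hypothetical and for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Hypothetical input uncertainties (3-sigma values picked for illustration).
entry_angle = rng.normal(-12.0, 0.15 / 3, n)    # deg
density_scale = rng.normal(1.0, 0.10 / 3, n)    # atmospheric density multiplier
cd_scale = rng.normal(1.0, 0.05 / 3, n)         # drag-coefficient multiplier

# Toy surrogate for the trajectory simulation: downrange shifts roughly
# linearly with each dispersed input near the nominal point.
nominal_downrange = 800.0  # km
downrange = (nominal_downrange
             + 40.0 * (entry_angle + 12.0)      # km per deg of entry angle
             - 25.0 * (density_scale - 1.0)     # thicker atmosphere -> shorter
             - 15.0 * (cd_scale - 1.0))

# Collect statistics on the forecast variable, as in a Monte Carlo footprint.
print(f"mean downrange  = {downrange.mean():8.2f} km")
print(f"3-sigma spread  = {3 * downrange.std():8.2f} km")
print(f"99th percentile = {np.percentile(downrange, 99):8.2f} km")
```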

  1. Monte Carlo tree search-based non-coplanar trajectory design for station parameter optimized radiation therapy (SPORT).

    PubMed

    Dong, Peng; Liu, Hongcheng; Xing, Lei

    2018-06-04

    An important yet challenging problem in LINAC-based rotational arc radiation therapy is the design of the beam trajectory, which requires simultaneous consideration of delivery efficiency and the final dose distribution. In this work, we propose a novel trajectory selection strategy by developing a Monte Carlo tree search (MCTS) algorithm for the beam trajectory selection process. Methods: To search through the vast number of possible trajectories, an MCTS algorithm was implemented. In this approach, a candidate trajectory is explored by starting from a leaf node and sequentially examining the next level of linked nodes with consideration of geometric and physical constraints. The maximum Upper Confidence Bound for Trees, which is a function of the average objective function value and the number of times the node under testing has been visited, was employed to intelligently select the trajectory. For each candidate trajectory, we run an inverse fluence map optimization with an infinity-norm regularization. The ranking of the plan, as measured by the corresponding objective function value, was then fed back to update the statistics of the nodes on the trajectory. The method was evaluated with a chest wall and a brain case, and the results were compared with the coplanar and noncoplanar 4π beam configurations. Results: For both clinical cases, the MCTS method found effective and easy-to-deliver trajectories within an hour. As compared with the coplanar plans, it offers much better sparing of the OARs while maintaining the PTV coverage. The quality of the MCTS-generated plan is found to be comparable to the 4π plans. Conclusion: AI based on MCTS is valuable for facilitating the design of beam trajectories and paves the way for future clinical use of non-coplanar treatment delivery. © 2018 Institute of Physics and Engineering in Medicine.
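
    The selection rule can be made concrete with the standard UCT score. In the sketch below, the Node structure and the backup helper are simplified assumptions, and the inverse fluence-map optimization that would produce each trajectory's value is not shown.

```python
import math
from dataclasses import dataclass

@dataclass
class Node:
    value_sum: float = 0.0   # accumulated objective value from evaluated plans
    visits: int = 0

def uct_score(node, parent_visits, c=1.4):
    """Upper Confidence Bound for Trees: mean value plus an exploration bonus."""
    if node.visits == 0:
        return float("inf")              # always try unvisited branches first
    exploit = node.value_sum / node.visits
    explore = c * math.sqrt(math.log(parent_visits) / node.visits)
    return exploit + explore

def select_child(children, parent_visits):
    """MCTS selection step: descend to the child with maximal UCT score."""
    return max(children, key=lambda n: uct_score(n, parent_visits))

def backup(path, value):
    """After scoring one candidate trajectory, update statistics on its path."""
    for node in path:
        node.visits += 1
        node.value_sum += value

children = [Node(5.2, 3), Node(2.0, 1), Node()]
best = select_child(children, parent_visits=4)
print(best)   # the unvisited node wins (infinite exploration bonus)
```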

  2. Quantum speedup of Monte Carlo methods.

    PubMed

    Montanaro, Ashley

    2015-09-08

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.

  3. Self-Learning Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang

    Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of general and efficient update algorithm for large size systems close to phase transition or with strong frustrations, for which local updates perform badly. In this work, we propose a new general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup. This work is supported by the DOE Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award DE-SC0010526.

  4. Using MathCad to Evaluate Exact Integral Formulations of Spacecraft Orbital Heats for Primitive Surfaces at Any Orientation

    NASA Technical Reports Server (NTRS)

    Pinckney, John

    2010-01-01

    With the advent of high-speed computing, Monte Carlo ray tracing techniques have become the preferred method for evaluating spacecraft orbital heats. Monte Carlo has its greatest advantage where there are many interacting surfaces. However, Monte Carlo programs are specialized programs that suffer from some inaccuracy, long calculation times and high purchase cost. A general orbital heating integral is presented here that is accurate, fast and runs on MathCad, a generally available engineering mathematics program. The integral is easy to read, understand and alter. The integral can be applied to unshaded primitive surfaces at any orientation. The method is limited to direct heating calculations. This integral formulation can be used for quick orbit evaluations and for spot-checking Monte Carlo results.

  5. Calculation of radiation therapy dose using all particle Monte Carlo transport

    DOEpatents

    Chandler, William P.; Hartmann-Siantar, Christine L.; Rathkopf, James A.

    1999-01-01

    The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.

  6. Quantum speedup of Monte Carlo methods

    PubMed Central

    Montanaro, Ashley

    2015-01-01

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079

  7. Calculation of radiation therapy dose using all particle Monte Carlo transport

    DOEpatents

    Chandler, W.P.; Hartmann-Siantar, C.L.; Rathkopf, J.A.

    1999-02-09

    The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media. 57 figs.

  8. On the modeling of the 2010 Gulf of Mexico Oil Spill

    NASA Astrophysics Data System (ADS)

    Mariano, A. J.; Kourafalou, V. H.; Srinivasan, A.; Kang, H.; Halliwell, G. R.; Ryan, E. H.; Roffer, M.

    2011-09-01

    Two oil particle trajectory forecasting systems were developed and applied to the 2010 Deepwater Horizon Oil Spill in the Gulf of Mexico. Both systems use ocean current fields from high-resolution numerical ocean circulation model simulations, Lagrangian stochastic models to represent unresolved sub-grid scale variability to advect oil particles, and Monte Carlo-based schemes for representing uncertain biochemical and physical processes. The first system assumes two-dimensional particle motion at the ocean surface, the oil is in one state, and particle removal is modeled as a Monte Carlo process parameterized by a single removal rate. Oil particles are seeded using both initial conditions based on observations and particles released at the location of the Macondo well. The initial conditions (ICs) of oil particle location for the two-dimensional surface oil trajectory forecasts are based on a fusing of all available information, including satellite-based analyses. The resulting oil map is digitized into a shape file within which a polygon-filling software generates longitude and latitude with variable particle density depending on the amount of oil present in the observations for the IC. The more complex system assumes three states (light, medium, heavy) for the oil, each state with a different removal rate in the Monte Carlo process, three-dimensional particle motion, and a particle size-dependent oil mixing model. Simulations from the two-dimensional forecast system produced results that qualitatively agreed with the uncertain "truth" fields. These simulations validated the use of our Monte Carlo scheme for representing oil removal by evaporation and other weathering processes. Eulerian velocity fields for predicting particle motion from data-assimilative models produced better particle trajectory distributions than a free-running model with no data assimilation. Ensembles for the Monte Carlo simulations of the three-dimensional oil particle trajectories were generated by perturbing the size of the oil particles and the fraction in a given size range that is released at depth, the two largest unknowns in this problem. Thirty-six realizations of the model were run with only subsurface oil releases. An average of these results indicates that after three months about 25% of the oil remains in the water column and that most of the oil is below 800 m.
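
    A minimal sketch of the simpler of the two systems, drift-plus-random-walk advection with single-rate Monte Carlo removal, is shown below; the velocity, random-walk amplitude, and removal rate are illustrative numbers only, not values from the study.

```python
import numpy as np

def advect_and_weather(n_particles=20_000, removal_rate=0.03, days=90, seed=3):
    """2-D surface particles: stochastic advection plus Monte Carlo removal.

    Each particle survives a timestep of dt days with probability
    exp(-removal_rate * dt); removal lumps evaporation and other weathering
    into a single rate, as in the simpler of the two systems."""
    rng = np.random.default_rng(seed)
    dt = 1.0
    pos = np.zeros((n_particles, 2))            # all particles start at the well
    alive = np.ones(n_particles, dtype=bool)
    for _ in range(int(days / dt)):
        # Mean drift (a stand-in for the model velocity field) plus a
        # random-walk term for unresolved sub-grid scale motion.
        pos[alive] += np.array([1.5, 0.4]) * dt
        pos[alive] += rng.normal(0.0, 2.0 * np.sqrt(dt), (alive.sum(), 2))
        # Monte Carlo removal: each particle is tested independently.
        survive = rng.random(alive.sum()) < np.exp(-removal_rate * dt)
        alive[np.flatnonzero(alive)[~survive]] = False
    return pos[alive], alive.mean()

remaining, fraction = advect_and_weather()
print(f"fraction of oil remaining after 90 days: {fraction:.2f}")
```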

  9. IMRT head and neck treatment planning with a commercially available Monte Carlo based planning system

    NASA Astrophysics Data System (ADS)

    Boudreau, C.; Heath, E.; Seuntjens, J.; Ballivy, O.; Parker, W.

    2005-03-01

    The PEREGRINE Monte Carlo dose-calculation system (North American Scientific, Cranberry Township, PA) is the first commercially available Monte Carlo dose-calculation code intended specifically for intensity modulated radiotherapy (IMRT) treatment planning and quality assurance. In order to assess the impact of Monte Carlo based dose calculations for IMRT clinical cases, dose distributions for 11 head and neck patients were evaluated using both PEREGRINE and the CORVUS (North American Scientific, Cranberry Township, PA) finite size pencil beam (FSPB) algorithm with equivalent path-length (EPL) inhomogeneity correction. For the target volumes, PEREGRINE calculations predict, on average, a less than 2% difference in the calculated mean and maximum doses to the gross tumour volume (GTV) and clinical target volume (CTV). An average 16% ± 4% and 12% ± 2% reduction in the volume covered by the prescription isodose line was observed for the GTV and CTV, respectively. Overall, no significant differences were noted in the doses to the mandible and spinal cord. For the parotid glands, PEREGRINE predicted a 6% ± 1% increase in the volume of tissue receiving a dose greater than 25 Gy and an increase of 4% ± 1% in the mean dose. Similar results were noted for the brainstem where PEREGRINE predicted a 6% ± 2% increase in the mean dose. The observed differences between the PEREGRINE and CORVUS calculated dose distributions are attributed to secondary electron fluence perturbations, which are not modelled by the EPL correction, issues of organ outlining, particularly in the vicinity of air cavities, and differences in dose reporting (dose to water versus dose to tissue type).

  10. Multiscale Monte Carlo equilibration: Pure Yang-Mills theory

    DOE PAGES

    Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; ...

    2015-12-29

    In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.

  11. A Monte Carlo simulation study of associated liquid crystals

    NASA Astrophysics Data System (ADS)

    Berardi, R.; Fehervari, M.; Zannoni, C.

    We have performed a Monte Carlo simulation study of a system of ellipsoidal particles with donor-acceptor sites modelling complementary hydrogen-bonding groups in real molecules. We have considered elongated Gay-Berne particles with terminal interaction sites allowing particles to associate and form dimers. The changes in the phase transitions and in the molecular organization, and the interplay between orientational ordering and dimer formation, are discussed. Particle flip and dimer moves have been used to increase the convergence rate of the Monte Carlo (MC) Markov chain.

  12. PEPSI — a Monte Carlo generator for polarized leptoproduction

    NASA Astrophysics Data System (ADS)

    Mankiewicz, L.; Schäfer, A.; Veltri, M.

    1992-09-01

    We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by electromagnetic interaction, and explain how to use it. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross-section to first order in α_S. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons.

  13. NUEN-618 Class Project: Actually Implicit Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, R. M.; Brunner, T. A.

    2017-12-14

    This research describes a new method for the solution of the thermal radiative transfer (TRT) equations that is implicit in time which will be called Actually Implicit Monte Carlo (AIMC). This section aims to introduce the TRT equations, as well as the current workhorse method which is known as Implicit Monte Carlo (IMC). As the name of the method proposed here indicates, IMC is a misnomer in that it is only semi-implicit, which will be shown in this section as well.

  14. Optical dosimetry probes to validate Monte Carlo and empirical-method-based NIR dose planning in the brain.

    PubMed

    Verleker, Akshay Prabhu; Shaffer, Michael; Fang, Qianqian; Choi, Mi-Ran; Clare, Susan; Stantz, Keith M

    2016-12-01

    Three-dimensional photon dosimetry in tissue is critical in designing optical therapeutic protocols to trigger light-activated drug release. The objective of this study is to investigate the feasibility of a Monte Carlo-based optical therapy planning software by developing dosimetry tools to characterize and cross-validate the local photon fluence in brain tissue, as part of a long-term strategy to quantify the effects of photoactivated drug release in brain tumors. An existing GPU-based 3D Monte Carlo (MC) code was modified to simulate near-infrared photon transport with differing laser beam profiles within phantoms of skull bone (B), white matter (WM), and gray matter (GM). A novel titanium-based optical dosimetry probe with isotropic acceptance was used to validate the local photon fluence, and an empirical model of photon transport was developed to significantly decrease execution time for clinical application. Differences between the MC and the dosimetry probe measurements were, on average, 11.27%, 13.25%, and 11.81% along the illumination beam axis, and 9.4%, 12.06%, and 8.91% perpendicular to the beam axis for the WM, GM, and B phantoms, respectively. For a heterogeneous head phantom, the measured errors were 17.71% and 18.04% along and perpendicular to the beam axis. The empirical algorithm was validated by probe measurements and matched the MC results (R² > 0.99), with average errors of 10.1%, 45.2%, and 22.1% relative to probe measurements, and 22.6%, 35.8%, and 21.9% relative to the MC, for the WM, GM, and B phantoms, respectively. The simulation time for the empirical model was 6 s, versus 8 h for the GPU-based Monte Carlo for a head phantom simulation. These tools provide the capability to develop and optimize treatment plans for optimal release of pharmaceuticals in the treatment of cancer. Future work will test and validate these novel delivery and release mechanisms in vivo.
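
    The Monte Carlo bookkeeping behind such fluence calculations can be illustrated with a deliberately simplified sketch: a 1-D semi-infinite slab with isotropic scattering and implicit-capture weighting. This is not the GPU code used in the study, and the optical coefficients below are illustrative assumptions.

```python
import numpy as np

def photon_absorption_profile(mu_a=0.1, mu_s=2.0, n_photons=10_000,
                              z_max=20.0, n_bins=40, seed=5):
    """Minimal 1-D photon Monte Carlo in a semi-infinite, isotropically
    scattering slab (mu_a, mu_s in 1/mm). A pencil beam enters at z = 0;
    absorbed weight is tallied per depth bin via implicit capture."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    absorbed = np.zeros(n_bins)
    dz = z_max / n_bins
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0                 # depth, direction cosine, weight
        while w > 1e-2:                          # terminate low-weight photons
            z += uz * rng.exponential(1.0 / mu_t)       # free flight to next event
            if z < 0.0:
                break                            # escaped back through the surface
            if z < z_max:
                absorbed[int(z / dz)] += w * mu_a / mu_t  # implicit capture
            w *= mu_s / mu_t                     # surviving, scattered weight
            uz = rng.uniform(-1.0, 1.0)          # isotropic new direction cosine
    return absorbed / n_photons

profile = photon_absorption_profile()
print(np.round(profile[:6], 4))                  # absorbed fraction per 0.5 mm bin
```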

  15. High-Fidelity Coupled Monte-Carlo/Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Ivanov, Aleksandar; Sanchez, Victor; Ivanov, Kostadin

    2014-06-01

    Monte Carlo methods have been used as reference reactor physics calculation tools worldwide. The advance in computer technology allows the calculation of detailed flux distributions in both space and energy. In most of the cases, however, those calculations are done under the assumption of homogeneous material density and temperature distributions. The aim of this work is to develop a consistent methodology for providing realistic three-dimensional thermal-hydraulic distributions by coupling the in-house developed sub-channel code SUBCHANFLOW with the standard Monte-Carlo transport code MCNP. In addition to the innovative technique of on-the-fly material definition, a flux-based weight-window technique has been introduced to improve both the magnitude and the distribution of the relative errors. Finally, a coupled code system for the simulation of steady-state reactor physics problems has been developed. Besides the problem of effective feedback data interchange between the codes, the treatment of the temperature dependence of the continuous energy nuclear data has been investigated.

  16. Subtle Monte Carlo Updates in Dense Molecular Systems.

    PubMed

    Bottaro, Sandro; Boomsma, Wouter; E Johansson, Kristoffer; Andreetta, Christian; Hamelryck, Thomas; Ferkinghoff-Borg, Jesper

    2012-02-14

    Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results suggest our method as a valuable tool in the study of molecules in atomic detail, offering a potential alternative to molecular dynamics for probing long time-scale conformational transitions.

  17. OWL: A scalable Monte Carlo simulation suite for finite-temperature study of materials

    NASA Astrophysics Data System (ADS)

    Li, Ying Wai; Yuk, Simuck F.; Cooper, Valentino R.; Eisenbach, Markus; Odbadrakh, Khorgolkhuu

    The OWL suite is a simulation package for performing large-scale Monte Carlo simulations. Its object-oriented, modular design enables it to interface with various external packages for energy evaluations. It is therefore applicable to study the finite-temperature properties for a wide range of systems: from simple classical spin models to materials where the energy is evaluated by ab initio methods. This scheme not only allows for the study of thermodynamic properties based on first-principles statistical mechanics, it also provides a means for massive, multi-level parallelism to fully exploit the capacity of modern heterogeneous computer architectures. We will demonstrate how improved strong and weak scaling is achieved by employing novel, parallel and scalable Monte Carlo algorithms, as well as the applications of OWL to a few selected frontier materials research problems. This research was supported by the Office of Science of the Department of Energy under contract DE-AC05-00OR22725.

  18. Comparison of UWCC MOX fuel measurements to MCNP-REN calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhold, M.; Baker, M.; Jie, R.

    1998-12-31

    The development of neutron coincidence counting has greatly improved the accuracy and versatility of neutron-based techniques to assay fissile materials. Today, the shift register analyzer connected to either a passive or active neutron detector is widely used by both domestic and international safeguards organizations. The continued development of these techniques and detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model, as it is currently used, fails to accurately predict detector response in highly multiplying media such as mixed-oxide (MOX) light water reactor fuel assemblies. For this reason, efforts have been made to modify the currently used Monte Carlo codes and to develop new analytical methods so that this model is not required to predict detector response. The authors describe their efforts to modify a widely used Monte Carlo code for this purpose and also compare calculational results with experimental measurements.

  19. N-(sulfoethyl) iminodiacetic acid-based lanthanide coordination polymers: Synthesis, magnetism and quantum Monte Carlo studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhuang Guilin, E-mail: glzhuang@zjut.edu.cn; Chen Wulin; Zheng Jun

    2012-08-15

    A series of lanthanide coordination polymers have been obtained through the hydrothermal reaction of N-(sulfoethyl) iminodiacetic acid (H₃SIDA) and Ln(NO₃)₃ (Ln=La, 1; Pr, 2; Nd, 3; Gd, 4). Crystal structure analysis shows that the lanthanide ions affect the coordination number, bond lengths and dimensionality of compounds 1-4, revealing that their structural diversity can be attributed to the effect of lanthanide contraction. Furthermore, the combination of magnetic measurements with quantum Monte Carlo (QMC) studies shows that the coupling parameters between two adjacent Gd³⁺ ions for anti-anti and syn-anti carboxylate bridges are -1.0 × 10⁻³ and -5.0 × 10⁻³ cm⁻¹, respectively, revealing weak antiferromagnetic interaction in 4. Graphical abstract: Four lanthanide coordination polymers with N-(sulfoethyl) iminodiacetic acid were obtained under hydrothermal conditions; quantum Monte Carlo studies reveal weak antiferromagnetic coupling between two Gd³⁺ ions. Highlights: Four lanthanide coordination polymers of the H₃SIDA ligand were obtained. Lanthanide ions play an important role in their structural diversity. Magnetic measurements show that compound 4 has antiferromagnetic properties. Quantum Monte Carlo studies reveal the coupling parameters of two Gd³⁺ ions.

  20. Multilevel Monte Carlo and improved timestepping methods in atmospheric dispersion modelling

    NASA Astrophysics Data System (ADS)

    Katsiolides, Grigoris; Müller, Eike H.; Scheichl, Robert; Shardlow, Tony; Giles, Michael B.; Thomson, David J.

    2018-02-01

    A common way to simulate the transport and spread of pollutants in the atmosphere is via stochastic Lagrangian dispersion models. Mathematically, these models describe turbulent transport processes with stochastic differential equations (SDEs). The computational bottleneck is the Monte Carlo algorithm, which simulates the motion of a large number of model particles in a turbulent velocity field; for each particle, a trajectory is calculated with a numerical timestepping method. Choosing an efficient numerical method is particularly important in operational emergency-response applications, such as tracking radioactive clouds from nuclear accidents or predicting the impact of volcanic ash clouds on international aviation, where accurate and timely predictions are essential. In this paper, we investigate the application of the Multilevel Monte Carlo (MLMC) method to simulate the propagation of particles in a representative one-dimensional dispersion scenario in the atmospheric boundary layer. MLMC can be shown to result in asymptotically superior computational complexity and reduced computational cost when compared to the Standard Monte Carlo (StMC) method, which is currently used in atmospheric dispersion modelling. To reduce the absolute cost of the method also in the non-asymptotic regime, it is equally important to choose the best possible numerical timestepping method on each level. To investigate this, we also compare the standard symplectic Euler method, which is used in many operational models, with two improved timestepping algorithms based on SDE splitting methods.
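
    The essential MLMC mechanics, coupled fine/coarse paths driven by the same Brownian increments and a telescoping sum over levels, can be sketched compactly. The example below uses Euler-Maruyama on a geometric Brownian motion test problem rather than the paper's dispersion SDE and splitting methods, and the per-level sample counts are fixed rather than optimized; all numbers are illustrative.

```python
import numpy as np

def euler_paths_pair(level, n_samples, t=1.0, x0=1.0, mu=0.05, sigma=0.2,
                     m=2, rng=None):
    """Coupled fine/coarse Euler-Maruyama paths of dX = mu X dt + sigma X dW.

    The coarse path (level-1) reuses the fine Brownian increments, which is
    what makes the level corrections small and MLMC cheap."""
    rng = rng or np.random.default_rng()
    n_fine = m ** level
    dt_f = t / n_fine
    dw = rng.normal(0.0, np.sqrt(dt_f), (n_samples, n_fine))
    xf = np.full(n_samples, x0)
    for i in range(n_fine):                        # fine path
        xf = xf + mu * xf * dt_f + sigma * xf * dw[:, i]
    if level == 0:
        return xf, np.zeros(n_samples)
    dt_c = t / (n_fine // m)
    dw_c = dw.reshape(n_samples, -1, m).sum(axis=2)  # summed increments
    xc = np.full(n_samples, x0)
    for i in range(n_fine // m):                   # coarse path, same noise
        xc = xc + mu * xc * dt_c + sigma * xc * dw_c[:, i]
    return xf, xc

def mlmc_estimate(levels=5, n0=20_000, seed=11):
    """Telescoping MLMC sum: E[X_L] = E[X_0] + sum_l E[X_l - X_{l-1}]."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for level in range(levels + 1):
        n = max(n0 // 2 ** level, 500)             # fewer samples on finer levels
        xf, xc = euler_paths_pair(level, n, rng=rng)
        total += np.mean(xf - xc)
    return total

print(mlmc_estimate())   # ~ exp(0.05) = 1.0513 for E[X_1] of this GBM
```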

  1. A Fast Monte Carlo Simulation for the International Linear Collider Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furse, D.; /Georgia Tech

    2005-12-15

    The following paper contains details concerning the motivation for, implementation and performance of a Java-based fast Monte Carlo simulation for a detector designed to be used in the International Linear Collider. This simulation, presently included in the SLAC ILC group's org.lcsim package, reads in standard model or SUSY events in STDHEP file format, stochastically simulates the blurring in physics measurements caused by intrinsic detector error, and writes out an LCIO format file containing a set of final particles statistically similar to those that would have been found by a full Monte Carlo simulation. In addition to the reconstructed particles themselves, descriptions of the calorimeter hit clusters and tracks that these particles would have produced are also included in the LCIO output. These output files can then be put through various analysis codes in order to characterize the effectiveness of a hypothetical detector at extracting relevant physical information about an event. Such a tool is extremely useful in preliminary detector research and development, as full simulations are extremely cumbersome and taxing on processor resources; a fast, efficient Monte Carlo can facilitate and even make possible detector physics studies that would be very impractical with the full simulation, by sacrificing what is in many cases inappropriate attention to detail for valuable gains in the time required for results.

  2. Development and validation of MCNPX-based Monte Carlo treatment plan verification system

    PubMed Central

    Jabbari, Iraj; Monadi, Shahram

    2015-01-01

    A Monte Carlo treatment plan verification (MCTPV) system was developed for clinical treatment plan verification (TPV), especially for conformal and intensity-modulated radiotherapy (IMRT) plans. In MCTPV, the MCNPX code was used for particle transport through the accelerator head and the patient body. MCTPV has an interface with the TiGRT planning system and reads the information needed for the Monte Carlo calculation, transferred in digital imaging and communications in medicine-radiation therapy (DICOM-RT) format. In MCTPV several methods were applied in order to reduce the simulation time. The relative dose distribution of a clinical prostate conformal plan calculated by MCTPV was compared with that of the TiGRT planning system. The results confirmed the correct implementation of the beam configurations and patient information in this system. For quantitative evaluation of MCTPV, a two-dimensional (2D) diode array (MapCHECK2) and gamma index analysis were used. The gamma passing rate (3%/3 mm) of an IMRT plan was found to be 98.5% for all beams. Also, comparison of the measured and Monte Carlo calculated doses at several points inside an inhomogeneous phantom for 6- and 18-MV photon beams showed good agreement (within 1.5%). The accuracy and timing results of MCTPV showed that MCTPV could be used very efficiently for additional assessment of complicated plans such as IMRT plans. PMID:26170554

  3. Million-body star cluster simulations: comparisons between Monte Carlo and direct N-body

    NASA Astrophysics Data System (ADS)

    Rodriguez, Carl L.; Morscher, Meagan; Wang, Long; Chatterjee, Sourav; Rasio, Frederic A.; Spurzem, Rainer

    2016-12-01

    We present the first detailed comparison between million-body globular cluster simulations computed with a Hénon-type Monte Carlo code, CMC, and a direct N-body code, NBODY6++GPU. Both simulations start from an identical cluster model with 10^6 particles, and include all of the relevant physics needed to treat the system in a highly realistic way. With the two codes 'frozen' (no fine-tuning of any free parameters or internal algorithms of the codes) we find good agreement in the overall evolution of the two models. Furthermore, we find that in both models, large numbers of stellar-mass black holes (>1000) are retained for 12 Gyr. Thus, the very accurate direct N-body approach confirms recent predictions that black holes can be retained in present-day, old globular clusters. We find only minor disagreements between the two models and attribute these to the small-N dynamics driving the evolution of the cluster core, for which the Monte Carlo assumptions are less ideal. Based on the overwhelming general agreement between the two models computed using these vastly different techniques, we conclude that our Monte Carlo approach, which is more approximate but dramatically faster than direct N-body, is capable of producing an accurate description of the long-term evolution of massive globular clusters even when the clusters contain large populations of stellar-mass black holes.

  4. Exploring first-order phase transitions with population annealing

    NASA Astrophysics Data System (ADS)

    Barash, Lev Yu.; Weigel, Martin; Shchur, Lev N.; Janke, Wolfhard

    2017-03-01

    Population annealing is a hybrid of sequential and Markov chain Monte Carlo methods geared towards the efficient parallel simulation of systems with complex free-energy landscapes. Systems with first-order phase transitions are among the problems in computational physics that are difficult to tackle with standard methods such as local-update simulations in the canonical ensemble, for example with the Metropolis algorithm. It is hence interesting to see whether such transitions can be more easily studied using population annealing. We report here our preliminary observations from population annealing runs for the two-dimensional Potts model with q > 4, where it undergoes a first-order transition.
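
    A compact sketch of the algorithm as described in this record: a population of replicas is annealed in inverse temperature, reweighted by Boltzmann factors at each step, resampled to fixed size, and equilibrated with Metropolis sweeps. The lattice size, population size, and temperature schedule below are toy values far smaller than production runs; q = 5 is chosen because the 2D Potts model is (weakly) first order there.

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(s):
    """2D q-state Potts energy E = -(number of satisfied nearest-neighbor bonds)."""
    return -int((s == np.roll(s, 1, axis=0)).sum() + (s == np.roll(s, 1, axis=1)).sum())

def metropolis_sweep(s, q, beta):
    """One lattice sweep of single-site Metropolis updates at inverse temperature beta."""
    L = s.shape[0]
    for _ in range(s.size):
        i, j = rng.integers(L, size=2)
        new = rng.integers(q)
        nbrs = [s[(i + 1) % L, j], s[(i - 1) % L, j], s[i, (j + 1) % L], s[i, (j - 1) % L]]
        dE = nbrs.count(s[i, j]) - nbrs.count(new)   # E_new - E_old for this site
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = new

def population_annealing(L=8, q=5, R=100, n_temps=40, beta_max=1.5, sweeps=2):
    """Anneal a replica population from beta = 0, resampling at each temperature step."""
    pop = [rng.integers(q, size=(L, L)) for _ in range(R)]
    betas = np.linspace(0.0, beta_max, n_temps)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        E = np.array([energy(s) for s in pop])
        w = np.exp(-(b - b_prev) * (E - E.min()))     # Boltzmann reweighting factors
        counts = rng.multinomial(R, w / w.sum())      # resample to fixed population size
        pop = [pop[k].copy() for k, c in enumerate(counts) for _ in range(c)]
        for s in pop:
            for _ in range(sweeps):
                metropolis_sweep(s, q, b)
    return np.mean([energy(s) for s in pop]) / (L * L)

print("energy per site at beta = 1.5:", population_annealing())
```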

  5. Initial Assessment of a Rapid Method of Calculating CEV Environmental Heating

    NASA Technical Reports Server (NTRS)

    Pickney, John T.; Milliken, Andrew H.

    2010-01-01

    An innovative method for rapidly calculating spacecraft environmental absorbed heats in planetary orbit is described. The method employs reading a database of pre-calculated orbital absorbed heats and adjusting those heats for the desired orbit parameters. The approach differs from traditional Monte Carlo methods, which are orbit based with a planet-centered coordinate system. The database is based on a spacecraft-centered coordinate system in which the full range of possible sun and planet look angles is evaluated. In an example case, 37,044 orbit configurations were analyzed for average orbital heats on selected spacecraft surfaces. Calculation time was under 2 minutes, while a comparable Monte Carlo evaluation would have taken an estimated 26 hours.

  6. Monte Carlo based statistical power analysis for mediation models: methods and software.

    PubMed

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
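
    A minimal sketch of the proposed recipe in its simplest case (a single-mediator model with normal errors, unlike the nonnormal and latent-variable cases the record also covers): simulate data from assumed effect sizes, bootstrap the indirect effect a*b, and count how often the percentile interval excludes zero. All effect sizes, sample sizes, and replication counts here are illustrative; the authors' actual implementation is their R package bmem.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(n, a, b, cprime):
    """Generate (x, m, y) from a simple mediation model with standard normal errors."""
    x = rng.normal(size=n)
    m = a * x + rng.normal(size=n)
    y = cprime * x + b * m + rng.normal(size=n)
    return x, m, y

def indirect_effect(x, m, y):
    """Estimate a*b: a from regressing m on x, b from regressing y on x and m."""
    a_hat = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), x, m])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a_hat * coef[2]

def power(n=100, a=0.3, b=0.3, cprime=0.1, reps=100, boot=300, alpha=0.05):
    """Monte Carlo power: fraction of replications whose bootstrap CI excludes zero."""
    hits = 0
    for _ in range(reps):
        x, m, y = simulate(n, a, b, cprime)
        stats = []
        for _ in range(boot):
            idx = rng.integers(n, size=n)          # resample cases with replacement
            stats.append(indirect_effect(x[idx], m[idx], y[idx]))
        lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        hits += (lo > 0) or (hi < 0)               # CI excludes zero: effect detected
    return hits / reps

print("estimated power:", power())
```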

  7. Modeling the migration of platinum nanoparticles on surfaces using a kinetic Monte Carlo approach

    DOE PAGES

    Li, Lin; Plessow, Philipp N.; Rieger, Michael; ...

    2017-02-15

    We propose a kinetic Monte Carlo (kMC) model for simulating the movement of platinum particles on supports, based on atom-by-atom diffusion on the surface of the particle. The proposed model is able to reproduce the equilibrium cluster shapes predicted by the Wulff construction. The diffusivity of platinum particles was simulated both purely based on random motion and assisted by an external field that causes a drift velocity. The overall particle diffusivity increases with temperature; however, the extracted activation barrier appears to be temperature independent. Additionally, this barrier was found to increase with particle size, as well as with the adhesion between the particle and the support.
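
    For readers unfamiliar with the method, a rejection-free (Gillespie/BKL-type) kMC step selects an event with probability proportional to its rate and advances the clock by an exponentially distributed waiting time. The sketch below shows that core step on a hypothetical 1D adatom-hopping event list, not the paper's Pt/support event catalogue.

```python
import numpy as np

rng = np.random.default_rng(3)

def kmc_step(rates, t):
    """One rejection-free kinetic Monte Carlo step.

    rates: array of rate constants for all currently possible events.
    Returns (chosen event index, advanced time).
    """
    total = rates.sum()
    # Pick an event with probability proportional to its rate.
    event = np.searchsorted(np.cumsum(rates), rng.random() * total)
    # Advance the clock by an exponentially distributed waiting time.
    dt = -np.log(rng.random()) / total
    return event, t + dt

# Toy event list: hop rates for a single adatom on a 1D lattice.
rates = np.array([1.0, 0.5])          # event 0: hop right, event 1: hop left
t, pos = 0.0, 0
for _ in range(5):
    e, t = kmc_step(rates, t)
    pos += 1 if e == 0 else -1
    print(f"t = {t:.3f}, position = {pos}")
```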

  8. Neutrino oscillation parameter sampling with MonteCUBES

    NASA Astrophysics Data System (ADS)

    Blennow, Mattias; Fernandez-Martinez, Enrique

    2010-01-01

    We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: the first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space; the second is a user-friendly graphical Matlab interface to easily read, analyze, plot, and export the results of the parameter space sampling.
    Program summary:
    Program title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator)
    Catalogue identifier: AEFJ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public Licence
    No. of lines in distributed program, including test data, etc.: 69 634
    No. of bytes in distributed program, including test data, etc.: 3 980 776
    Distribution format: tar.gz
    Programming language: C
    Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed
    Operating system: 32 bit and 64 bit Linux
    RAM: Typically a few MBs
    Classification: 11.1
    External routines: GLoBES [1,2] and routines/libraries used by GLoBES
    Subprograms used: Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439
    Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments. In general, this new physics implies high-dimensional parameter spaces that are difficult to explore using classical methods such as multi-dimensional projections and minimizations, like those used in GLoBES [1,2].
    Solution method: MonteCUBES is written as a plug-in to the GLoBES software [1,2] and provides the necessary methods to perform Markov Chain Monte Carlo sampling of the parameter space. This allows an efficient sampling of the parameter space, with a complexity that does not grow exponentially with the parameter space dimension. The integration of the MonteCUBES package with the GLoBES software ensures that the experiment definitions already in use by the community can also be used with MonteCUBES, while also lowering the learning threshold for users who already know GLoBES.
    Additional comments: A Matlab GUI for interpretation of results is included in the distribution.
    Running time: The typical running time varies depending on the dimensionality of the parameter space, the complexity of the experiment, and how well the parameter space should be sampled. The running time for our simulations [3] with 15 free parameters at a Neutrino Factory with O(10) samples varied from a few hours to tens of hours.
    References: [1] P. Huber, M. Lindner, W. Winter, Comput. Phys. Comm. 167 (2005) 195, hep-ph/0407333. [2] P. Huber, J. Kopp, M. Lindner, M. Rolinec, W. Winter, Comput. Phys. Comm. 177 (2007) 432, hep-ph/0701187. [3] S. Antusch, M. Blennow, E. Fernandez-Martinez, J. Lopez-Pavon, arXiv:0903.3986 [hep-ph].
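
    The record describes but does not show the MCMC engine; as a toy stand-in, the following sketch runs a random-walk Metropolis chain over a two-parameter "oscillation" posterior with a Gaussian likelihood. The fiducial values, uncertainties, and step sizes are invented and bear no relation to MonteCUBES internals.

```python
import numpy as np

rng = np.random.default_rng(4)

def metropolis(log_post, theta0, step, n_samples):
    """Random-walk Metropolis sampling of a log-posterior over a parameter vector."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_samples):
        prop = theta + step * rng.normal(size=theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:    # accept with prob min(1, ratio)
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Toy 2-parameter "experiment": Gaussian likelihood around fiducial values.
fiducial = np.array([0.30, 2.5e-3])               # mixing-angle-like, mass-splitting-like
sigma = np.array([0.02, 1e-4])
log_post = lambda th: -0.5 * np.sum(((th - fiducial) / sigma) ** 2)
chain = metropolis(log_post, fiducial * 1.1, sigma, 20000)
print("posterior mean:", chain.mean(axis=0))
```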

  9. Building occupancy simulation and data assimilation using a graph-based agent-oriented model

    NASA Astrophysics Data System (ADS)

    Rai, Sanish; Hu, Xiaolin

    2018-07-01

    Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and an algorithm for data assimilation that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. The agent-based models suffer high computation cost for simulating large numbers of occupants, and graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo Methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are to provide an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimations of building occupancy from sensor data.
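
    A minimal sketch of the propagate-weight-resample loop of Sequential Monte Carlo as used for data assimilation here, applied to a scalar toy occupancy count rather than the paper's graph-based agent-oriented model; the transition and sensor models below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)

def particle_filter(transition, likelihood, particles, observations):
    """SIR particle filter: propagate particles, weight by sensor data, resample."""
    n = len(particles)
    estimates = []
    for y in observations:
        particles = transition(particles)          # simulate occupant dynamics
        w = likelihood(y, particles)               # weight by sensor evidence
        w /= w.sum()
        idx = rng.choice(n, size=n, p=w)           # multinomial resampling
        particles = particles[idx]
        estimates.append(particles.mean())         # posterior-mean estimate
    return estimates

# Toy scalar example: occupancy count random walk, noisy counter at a doorway.
transition = lambda p: p + rng.normal(0.0, 1.0, size=p.shape)
likelihood = lambda y, p: np.exp(-0.5 * ((y - p) / 2.0) ** 2)
truth = np.cumsum(rng.normal(0.0, 1.0, 30)) + 50.0
obs = truth + rng.normal(0.0, 2.0, 30)
est = particle_filter(transition, likelihood, rng.normal(50.0, 5.0, 1000), obs)
print("last estimates:", np.round(est[-5:], 1))
```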

  10. Radiative transport produced by oblique illumination of turbid media with collimated beams

    NASA Astrophysics Data System (ADS)

    Gardner, Adam R.; Kim, Arnold D.; Venugopalan, Vasan

    2013-06-01

    We examine the general problem of light transport initiated by oblique illumination of a turbid medium with a collimated beam. This situation has direct relevance to the analysis of cloudy atmospheres, terrestrial surfaces, soft condensed matter, and biological tissues. We introduce a solution approach to the equation of radiative transfer that governs this problem, and develop a comprehensive spherical harmonics expansion method utilizing Fourier decomposition (SHEFN). The SHEFN approach enables the solution of problems lacking azimuthal symmetry and provides both the spatial and directional dependence of the radiance. We also introduce the method of sequential-order smoothing that enables the calculation of accurate solutions from the results of two sequential low-order approximations. We apply the SHEFN approach to determine the spatial and angular dependence of both internal and boundary radiances from strongly and weakly scattering turbid media. These solutions are validated using more costly Monte Carlo simulations and reveal important insights regarding the evolution of the radiant field generated by oblique collimated beams spanning ballistic and diffusely scattering regimes.

  11. Footprints of electron correlation in strong-field double ionization of Kr close to the sequential-ionization regime

    NASA Astrophysics Data System (ADS)

    Li, Xiaokai; Wang, Chuncheng; Yuan, Zongqiang; Ye, Difa; Ma, Pan; Hu, Wenhui; Luo, Sizuo; Fu, Libin; Ding, Dajun

    2017-09-01

    By combining kinematically complete measurements with a semiclassical Monte Carlo simulation, we study the correlated-electron dynamics in the strong-field double ionization of Kr. Interestingly, we find that, as we step into the sequential-ionization regime, there are still signatures of correlation in the two-electron joint momentum spectrum and, more intriguingly, the scaling law of the high-energy tail is completely different from earlier predictions for the low-Z atom (He). These experimental observations are well reproduced by our generalized semiclassical model adopting a Green-Sellin-Zachor potential. It is revealed that the competition between the screening effect of inner-shell electrons and the Coulomb focusing of the nucleus leads to a non-inverse-square central force, which twists the returning electron trajectory in the vicinity of the parent core and thus significantly increases the probability of hard recollisions between the two electrons. Our results might have promising applications ranging from accurately retrieving atomic structures to simulating celestial phenomena in the laboratory.

  12. Analytical Applications of Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    Guell, Oscar A.; Holcombe, James A.

    1990-01-01

    Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)

  13. Monte Carlo simulation of proton track structure in biological matter

    DOE PAGES

    Quinto, Michele A.; Monti, Juan M.; Weck, Philippe F.; ...

    2017-05-25

    Here, understanding the radiation-induced effects at the cellular and subcellular levels remains crucial for predicting the evolution of irradiated biological matter. In this context, Monte Carlo track-structure simulations have rapidly emerged among the most suitable and powerful tools. However, most existing Monte Carlo track-structure codes rely heavily on the use of semi-empirical cross sections as well as water as a surrogate for biological matter. In the current work, we report on the up-to-date version of our homemade Monte Carlo code TILDA-V – devoted to the modeling of the slowing-down of 10 keV–100 MeV protons in both water and DNA – where the main collisional processes are described by means of an extensive set of ab initio differential and total cross sections.

  14. Monte Carlo simulation of proton track structure in biological matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinto, Michele A.; Monti, Juan M.; Weck, Philippe F.

    Here, understanding the radiation-induced effects at the cellular and subcellular levels remains crucial for predicting the evolution of irradiated biological matter. In this context, Monte Carlo track-structure simulations have rapidly emerged among the most suitable and powerful tools. However, most existing Monte Carlo track-structure codes rely heavily on the use of semi-empirical cross sections as well as water as a surrogate for biological matter. In the current work, we report on the up-to-date version of our homemade Monte Carlo code TILDA-V – devoted to the modeling of the slowing-down of 10 keV–100 MeV protons in both water and DNA – where the main collisional processes are described by means of an extensive set of ab initio differential and total cross sections.

  15. Exploring cluster Monte Carlo updates with Boltzmann machines

    NASA Astrophysics Data System (ADS)

    Wang, Lei

    2017-11-01

    Boltzmann machines are physics-informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applying the Boltzmann machines back to physics, they are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of the Boltzmann machines can even give different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.

  16. NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media

    NASA Astrophysics Data System (ADS)

    Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique

    2017-08-01

    NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA so that the code can be run on GPUs leads to a speed-up of up to two orders of magnitude.
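
    The record identifies the structure factor evaluation as the parallelizable bottleneck. The sketch below shows, in plain NumPy rather than CUDA, the kernel in question: S(q) = |sum_j exp(i q.r_j)|^2 / N over a batch of wave vectors. Positions and wave vectors are toy values; a real Reverse Monte Carlo move would update this sum incrementally for the single displaced particle.

```python
import numpy as np

def structure_factor(positions, q_vectors):
    """S(q) = |sum_j exp(i q.r_j)|^2 / N for each wave vector.

    This O(n_q * N) kernel dominates each trial-configuration evaluation
    and is the natural target for GPU parallelization.
    """
    phases = q_vectors @ positions.T          # (n_q, N) array of q.r products
    amp = np.exp(1j * phases).sum(axis=1)
    return (amp * amp.conj()).real / positions.shape[0]

rng = np.random.default_rng(6)
pos = rng.uniform(0.0, 10.0, size=(500, 3))   # random fluid configuration in a 10x10x10 box
q = np.array([[k * 2.0 * np.pi / 10.0, 0.0, 0.0] for k in range(1, 6)])
print(structure_factor(pos, q))               # ~1 for an ideal (uncorrelated) gas
```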

  17. Frequency-resolved Monte Carlo.

    PubMed

    López Carreño, Juan Camilo; Del Valle, Elena; Laussy, Fabrice P

    2018-05-03

    We adapt the Quantum Monte Carlo method to the cascaded formalism of quantum optics, allowing us to simulate the emission of photons of known energy. Statistical processing of the photon clicks thus collected agrees with the theory of frequency-resolved photon correlations, extending the range of applications based on correlations of photons of prescribed energy, in particular those of a photon-counting character. We apply the technique to autocorrelations of photon streams from a two-level system under coherent and incoherent pumping, including the Mollow triplet regime where we demonstrate the direct manifestation of leapfrog processes in producing an increased rate of two-photon emission events.

  18. Benchmarking the pseudopotential and fixed-node approximations in diffusion Monte Carlo calculations of molecules and solids

    DOE PAGES

    Nazarov, Roman; Shulenburger, Luke; Morales, Miguel A.; ...

    2016-03-28

    We performed diffusion Monte Carlo (DMC) calculations of the spectroscopic properties of a large set of molecules, assessing the effect of different approximations. In systems containing elements with large atomic numbers, we show that the errors associated with the use of nonlocal mean-field-based pseudopotentials in DMC calculations can be significant and may surpass the fixed-node error. In conclusion, we suggest practical guidelines for reducing these pseudopotential errors, which allow us to obtain DMC-computed spectroscopic parameters of molecules and equation of state properties of solids in excellent agreement with experiment.

  19. Benchmarking the pseudopotential and fixed-node approximations in diffusion Monte Carlo calculations of molecules and solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazarov, Roman; Shulenburger, Luke; Morales, Miguel A.

    We performed diffusion Monte Carlo (DMC) calculations of the spectroscopic properties of a large set of molecules, assessing the effect of different approximations. In systems containing elements with large atomic numbers, we show that the errors associated with the use of nonlocal mean-field-based pseudopotentials in DMC calculations can be significant and may surpass the fixed-node error. In conclusion, we suggest practical guidelines for reducing these pseudopotential errors, which allow us to obtain DMC-computed spectroscopic parameters of molecules and equation of state properties of solids in excellent agreement with experiment.

  20. Calculation of out-of-field dose distribution in carbon-ion radiotherapy by Monte Carlo simulation.

    PubMed

    Yonai, Shunsuke; Matsufuji, Naruhiro; Namba, Masao

    2012-08-01

    Recent radiotherapy technologies, including carbon-ion radiotherapy, can improve the dose concentration in the target volume, thereby reducing not only side effects in organs at risk but also the secondary cancer risk within or near the irradiation field. However, the secondary cancer risk in the low-dose region is considered non-negligible, especially for younger patients. To achieve a dose estimation for the whole body of each patient receiving carbon-ion radiotherapy, which is essential for risk assessment and epidemiological studies, Monte Carlo simulation plays an important role, because the treatment planning system can provide the dose distribution only in or near the irradiation field and the measured data are limited. However, validation of Monte Carlo simulations is necessary. The primary purpose of this study was to establish a calculation method using the Monte Carlo code to estimate the dose and quality factor in the body, and to validate the proposed method by comparison with experimental data. Furthermore, we show the distributions of dose equivalent in a phantom and identify the partial contribution of each radiation type. We proposed a calculation method based on a Monte Carlo simulation using the PHITS code to estimate the absorbed dose, dose equivalent, and dose-averaged quality factor by using the Q(L)-L relationship based on the ICRP 60 recommendation. The values obtained by this method in modeling the passive beam line at the Heavy-Ion Medical Accelerator in Chiba were compared with our previously measured data. It was shown that our calculation model can estimate the measured values within a factor of 2, which included not only the uncertainty of this calculation method but also uncertainties arising from the assumptions of the geometrical modeling and the PHITS code. We also showed the differences in the doses and the partial contributions of each radiation type between passive and active carbon-ion beams using this calculation method. These results indicated that it is essential to include the dose from secondary neutrons in the assessment of the secondary cancer risk of patients receiving carbon-ion radiotherapy with active as well as passive beams. We established a calculation method with a Monte Carlo simulation to estimate the distribution of dose equivalent in the body as a first step toward routine risk assessment and an epidemiological study of carbon-ion radiotherapy at NIRS. This method has the advantage of being verifiable by measurement.
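
    The dose-equivalent calculation in this record rests on the Q(L)-L relationship of ICRP 60. As a hedged illustration, the sketch below implements that piecewise quality factor and a dose-averaged quality factor over an LET spectrum; the LET bins and doses are invented placeholders, not PHITS output.

```python
import numpy as np

def q_of_L(L):
    """ICRP 60 quality factor Q(L), with L the unrestricted LET in keV/um."""
    L = np.asarray(L, dtype=float)
    return np.where(L < 10.0, 1.0,
           np.where(L <= 100.0, 0.32 * L - 2.2, 300.0 / np.sqrt(L)))

def dose_averaged_quality(L_bins, dose_per_bin):
    """Dose-averaged quality factor: Q_bar = sum(Q(L) * D_L) / sum(D_L)."""
    d = np.asarray(dose_per_bin, dtype=float)
    return (q_of_L(L_bins) * d).sum() / d.sum()

# Toy LET spectrum: absorbed dose scored in LET bins (stand-in for a tally).
L = np.array([0.5, 5.0, 20.0, 80.0, 200.0])        # keV/um
D = np.array([0.40, 0.30, 0.15, 0.10, 0.05])       # Gy per bin
q_bar = dose_averaged_quality(L, D)
print("Q_bar =", round(q_bar, 2), "-> H =", round(q_bar * D.sum(), 3), "Sv")
```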

  1. The development and validation of a Monte Carlo model for calculating the out-of-field dose from radiotherapy treatments

    NASA Astrophysics Data System (ADS)

    Kry, Stephen

    Introduction. External beam photon radiotherapy is a common treatment for many malignancies, but results in the exposure of the patient to radiation away from the treatment site. This out-of-field radiation irradiates healthy tissue and may lead to the induction of secondary malignancies. Out-of-field radiation is composed of photons and, at high treatment energies, neutrons. Measurement of this out-of-field dose is time-consuming, often difficult, and specific to the conditions of the measurements. Monte Carlo simulations may be a viable approach to determining the out-of-field dose quickly, accurately, and for arbitrary irradiation conditions. Methods. An accelerator head, gantry, and treatment vault were modeled with MCNPX, and 6 MV and 18 MV beams were simulated. Photon doses were calculated in-field and compared to measurements made with an ion chamber in a water tank. Photon doses were also calculated out-of-field from static fields and compared to measurements made with thermoluminescent dosimeters in acrylic. Neutron fluences were calculated and compared to measurements made with gold foils. Finally, photon and neutron dose equivalents were calculated in an anthropomorphic phantom following intensity-modulated radiation therapy and compared to previously published dose equivalents. Results. The Monte Carlo model was able to accurately calculate the in-field dose. From static treatment fields, the model was also able to calculate the out-of-field photon dose within 16% at 6 MV and 17% at 18 MV, and the neutron fluence within 19% on average. From the simulated IMRT treatments, the calculated out-of-field photon dose was within 14% of measurement at 6 MV and 13% at 18 MV on average. The calculated neutron dose equivalent was much lower than the measured value but is likely accurate, because the measured neutron dose equivalent was based on an overestimated neutron energy. Based on the calculated out-of-field doses generated by the Monte Carlo model, it was possible to estimate the risk of fatal secondary malignancy, which was consistent with previous estimates except for the neutron discrepancy. Conclusions. The Monte Carlo model developed here is well suited to studying the out-of-field dose equivalent from photons and neutrons under a variety of irradiation configurations, including complex treatments on complex phantoms. Based on the calculated dose equivalents, it is possible to estimate the risk of secondary malignancy associated with out-of-field doses. The Monte Carlo model should be used to study, quantify, and minimize the out-of-field dose equivalent and associated risks received by patients undergoing radiation therapy.

  2. Optimising probe holder design for sentinel lymph node imaging using clinical photoacoustic system with Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Sivasubramanian, Kathyayini; Periyasamy, Vijitha; Wen, Kew Kok; Pramanik, Manojit

    2017-03-01

    Photoacoustic tomography is a hybrid imaging modality that combines optical and ultrasound imaging. It is rapidly gaining attention in the field of medical imaging. The challenge is to translate it into a clinical setup. In this work, we report the development of a handheld clinical photoacoustic imaging system. A clinical ultrasound imaging system was modified to integrate photoacoustic imaging with ultrasound imaging; hence, light delivery was integrated with the ultrasound probe. The angle of light delivery is optimized in this work with respect to the depth of imaging. Optimization was performed using Monte Carlo simulation of light transport in tissue. Based on the simulation results, the probe holders were fabricated using 3D printing, and similar results were obtained experimentally using phantoms. Phantoms were developed to mimic the sentinel lymph node imaging scenario. In vivo sentinel lymph node imaging was also performed with the same system, using the contrast agent methylene blue, up to a depth of 1.5 cm. The results validate that Monte Carlo simulation can be used as a tool to optimize the probe holder design depending on the imaging needs, eliminating the trial-and-error approach generally used for designing a probe holder.
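
    The optimization in this record rests on standard Monte Carlo photon transport in tissue. As a hedged illustration of its two core sampling rules, the sketch below draws exponential free paths and Henyey-Greenstein scattering angles, tracking only the depth coordinate in a semi-infinite medium; the optical properties are generic tissue-like values, and the paper's probe-holder geometry is not modeled.

```python
import numpy as np

rng = np.random.default_rng(7)

def hg_costheta(g):
    """Sample a scattering-angle cosine from the Henyey-Greenstein phase function."""
    if g == 0.0:
        return 2.0 * rng.random() - 1.0
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - frac * frac) / (2.0 * g)

def absorption_depths(n_photons, mu_a, mu_s, g):
    """Depths at which photons are absorbed in a semi-infinite turbid medium."""
    mu_t = mu_a + mu_s
    depths = []
    for _ in range(n_photons):
        z, uz = 0.0, 1.0                             # start at surface, heading inward
        while True:
            z += -np.log(rng.random()) / mu_t * uz   # exponential free path, z-projection
            if z < 0.0:
                break                                # escaped back through the surface
            if rng.random() < mu_a / mu_t:
                depths.append(z)                     # absorption event: record depth
                break
            ct, phi = hg_costheta(g), 2.0 * np.pi * rng.random()
            st, sz = np.sqrt(1.0 - ct * ct), np.sqrt(max(0.0, 1.0 - uz * uz))
            uz = uz * ct + sz * st * np.cos(phi)     # update z-direction cosine
    return np.array(depths)

d = absorption_depths(5000, mu_a=0.1, mu_s=10.0, g=0.9)   # per mm, tissue-like values
print(f"mean absorption depth: {d.mean():.2f} mm")
```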

  3. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    NASA Astrophysics Data System (ADS)

    Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl

    2018-06-01

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
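
    To fix ideas, here is a sketch of the classical double-loop (nested) Monte Carlo estimator of the expected information gain that this record improves upon, using a log-sum-exp evaluation of the inner loop to mitigate the underflow the authors mention. The observation model g(theta) = theta^3 and all sample sizes are invented; the paper's Laplace-based importance sampling is not reproduced.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(8)

def log_lik(y, theta, sigma=0.1):
    """Gaussian observation model y = g(theta) + noise, with toy g(theta) = theta**3."""
    return -0.5 * ((y - theta ** 3) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

def eig_double_loop(n_outer=2000, n_inner=2000):
    """Nested Monte Carlo estimate of E[log p(y|theta) - log p(y)] (the EIG)."""
    theta = rng.normal(size=n_outer)                     # outer draws from the prior
    y = theta ** 3 + 0.1 * rng.normal(size=n_outer)      # simulated experimental data
    inner = rng.normal(size=n_inner)                     # fresh prior draws for the evidence
    # Inner loop: log p(y_i) ~ logsumexp_j log p(y_i | theta_j) - log M, done stably.
    log_ev = logsumexp(log_lik(y[:, None], inner[None, :]), axis=1) - np.log(n_inner)
    return np.mean(log_lik(y, theta) - log_ev)

print("estimated expected information gain:", eig_double_loop())
```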

  4. Structure and electronic properties of azadirachtin.

    PubMed

    de Castro, Elton A S; de Oliveira, Daniel A B; Farias, Sergio A S; Gargano, Ricardo; Martins, João B L

    2014-02-01

    We performed a combined DFT and Monte Carlo ¹³C NMR chemical-shift study of azadirachtin A, a triterpenoid that acts as a natural insect antifeedant. A conformational search using a Monte Carlo technique based on the RM1 semiempirical method was carried out in order to establish its preferred structure. The B3LYP/6-311++G(d,p), wB97XD/6-311++G(d,p), M06/6-311++G(d,p), M06-2X/6-311++G(d,p), and CAM-B3LYP/6-311++G(d,p) levels of theory were used to predict NMR chemical shifts. A Monte Carlo population-weighted average spectrum was produced based on the predicted Boltzmann contributions. In general, good agreement between experimental and theoretical data was obtained using both methods, and the ¹³C NMR chemical shifts were predicted highly accurately. The geometry was optimized at the semiempirical level and used to calculate the NMR chemical shifts at the DFT level, and these shifts showed only minor deviations from those obtained following structural optimization at the DFT level, and incurred a much lower computational cost. The theoretical ultraviolet spectrum showed a maximum absorption peak that was mainly contributed by the tiglate group.
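
    The population-weighted averaging step is simple enough to show directly. The sketch below computes Boltzmann weights from relative conformer energies and averages per-conformer chemical shifts; the energies and shifts are hypothetical placeholders, not azadirachtin data.

```python
import numpy as np

def boltzmann_weights(energies_kcal, T=298.15):
    """Conformer populations from relative energies (kcal/mol) at temperature T (K)."""
    R = 0.0019872041                        # gas constant in kcal/(mol K)
    e = np.asarray(energies_kcal, dtype=float)
    e = e - e.min()                         # shift so the lowest conformer has E = 0
    w = np.exp(-e / (R * T))
    return w / w.sum()

# Hypothetical conformer energies and per-conformer 13C shifts (ppm) for one carbon.
energies = [0.0, 0.4, 1.1]
shifts = np.array([[172.3, 171.8, 173.0]])  # rows: nuclei, columns: conformers
w = boltzmann_weights(energies)
print("populations:", np.round(w, 3))
print("population-weighted shift:", (shifts * w).sum(axis=1))
```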

  5. Golden Ratio Versus Pi as Random Sequence Sources for Monte Carlo Integration

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Agarwal, Ravi P.; Shaykhian, Gholam Ali

    2007-01-01

    We discuss here the relative merits of these numbers as possible random sequence sources. The quality of these sequences is not judged directly based on the outcome of all known tests for the randomness of a sequence. Instead, it is determined implicitly by the accuracy of the Monte Carlo integration in a statistical sense. Since our main motive for using a random sequence is to solve real world problems, it is more desirable to compare the quality of the sequences based on their performance on these problems, in terms of the quality/accuracy of the output. We also compare these sources against those generated by a popular pseudo-random generator, viz., the Matlab rand, and the quasi-random Halton generator, both in terms of error and time complexity. Our study demonstrates that consecutive blocks of digits of each of these numbers produce a good random sequence source. It is observed that randomly chosen blocks of digits do not have any remarkable advantage over consecutive blocks for the accuracy of the Monte Carlo integration. Also, it reveals that pi is a better source of a random sequence than the golden ratio where the accuracy of the integration is concerned.

  6. Parallelized Monte Carlo software to efficiently simulate the light propagation in arbitrarily shaped objects and aligned scattering media.

    PubMed

    Zoller, Christian Johannes; Hohmann, Ansgar; Foschum, Florian; Geiger, Simeon; Geiger, Martin; Ertl, Thomas Peter; Kienle, Alwin

    2018-06-01

    A GPU-based Monte Carlo software (MCtet) was developed to calculate the light propagation in arbitrarily shaped objects, like a human tooth, represented by a tetrahedral mesh. A unique feature of MCtet is a concept to realize different kinds of light sources illuminating the complex-shaped surface of an object, for which no preprocessing step is needed. With this concept, it is also possible to consider photons leaving a turbid medium and reentering it again, in the case of a concave object. The correct implementation was shown by comparison with five other Monte Carlo software packages. A hundredfold acceleration compared with CPU-based programs was found. MCtet can simulate anisotropic light propagation, e.g., by accounting for scattering at cylindrical structures. The important influence of anisotropic light propagation, caused, e.g., by the tubules in human dentin, is shown for the transmission spectrum through a tooth. It was found that the sensitivity of the transmission spectra to a change in the oxygen saturation inside the pulp is much larger if the tubules are considered. Another "light guiding" effect, based on a combination of low scattering and a high refractive index in enamel, is described. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  7. Astrocytic tracer dynamics estimated from [1-¹¹C]-acetate PET measurements.

    PubMed

    Arnold, Andrea; Calvetti, Daniela; Gjedde, Albert; Iversen, Peter; Somersalo, Erkki

    2015-12-01

    We address the problem of estimating the unknown parameters of a model of tracer kinetics from sequences of positron emission tomography (PET) scan data using a statistical sequential algorithm for the inference of magnitudes of dynamic parameters. The method, based on Bayesian statistical inference, is a modification of a recently proposed particle filtering and sequential Monte Carlo algorithm, where instead of preassigning the accuracy in the propagation of each particle, we fix the time step and account for the numerical errors in the innovation term. We apply the algorithm to PET images of [1-¹¹C]-acetate-derived tracer accumulation, estimating the transport rates in a three-compartment model of astrocytic uptake and metabolism of the tracer for a cohort of 18 volunteers from 3 groups, corresponding to healthy control individuals, cirrhotic liver and hepatic encephalopathy patients. The distribution of the parameters for the individuals and for the groups presented within the Bayesian framework support the hypothesis that the parameters for the hepatic encephalopathy group follow a significantly different distribution than the other two groups. The biological implications of the findings are also discussed. © The Authors 2014. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

  8. Controlled pattern imputation for sensitivity analysis of longitudinal binary and ordinal outcomes with nonignorable dropout.

    PubMed

    Tang, Yongqiang

    2018-04-30

    The controlled imputation method refers to a class of pattern mixture models that have been commonly used as sensitivity analyses of longitudinal clinical trials with nonignorable dropout in recent years. These pattern mixture models assume that participants in the experimental arm after dropout have similar response profiles to the control participants or have worse outcomes than otherwise similar participants who remain on the experimental treatment. In spite of its popularity, the controlled imputation has not been formally developed for longitudinal binary and ordinal outcomes partially due to the lack of a natural multivariate distribution for such endpoints. In this paper, we propose 2 approaches for implementing the controlled imputation for binary and ordinal data based respectively on the sequential logistic regression and the multivariate probit model. Efficient Markov chain Monte Carlo algorithms are developed for missing data imputation by using the monotone data augmentation technique for the sequential logistic regression and a parameter-expanded monotone data augmentation scheme for the multivariate probit model. We assess the performance of the proposed procedures by simulation and the analysis of a schizophrenia clinical trial and compare them with the fully conditional specification, last observation carried forward, and baseline observation carried forward imputation methods. Copyright © 2018 John Wiley & Sons, Ltd.

  9. Monte Carlo modeling of the MammoSite(Reg) treatments: Dose effects of air pockets

    NASA Astrophysics Data System (ADS)

    Huang, Yu-Huei Jessica

    In the treatment of early-stage breast cancer, MammoSite® has been used as one of the partial breast irradiation techniques after breast-conserving surgery. The MammoSite® applicator is a single catheter with an inflatable balloon at its distal end that can be placed in the resected cavity (tumor bed). The treatment is performed by delivering the Ir-192 high-dose-rate source through the center lumen of the catheter with a remote afterloader while the balloon is inflated in the tumor bed cavity. In MammoSite® treatments, it has been found that air pockets occasionally exist and can be seen and measured in CT images. Experience has shown that about 90% of patients have air pockets when imaged two days after balloon placement. The criterion is that the air pocket occupy less than or equal to 10% of the planning target volume. The purpose of this study is to quantify, with Monte Carlo calculations, the dose errors occurring at the interface of the air pocket in MammoSite® treatments, so that the dosimetric effects of the air pocket can be fully understood. Modern brachytherapy treatment planning systems typically consider the patient anatomy to be a homogeneous water medium and incorrectly model lateral and backscatter radiation during treatment delivery. Heterogeneities complicate the problem and may result in overdosage of the tissue located near the medium interface. This becomes a problem in MammoSite® brachytherapy when an air pocket appears during the treatment. The resulting percentage dose difference near the air-tissue interface is hypothesized to be greater than 10% when comparing Monte Carlo N-Particle (version 5) with current treatment planning systems. The specific aims of this study are to: (1) validate Monte Carlo N-Particle (version 5) source modeling; (2) develop a phantom; (3) calculate phantom doses with Monte Carlo N-Particle (version 5) and investigate dose differences between thermoluminescent dosimeter measurements, the treatment planning system, and Monte Carlo results; and (4) calculate dose differences for various treatment parameters. The results from thermoluminescent dosimeter phantom measurements prove that, with correct geometric and source models, the Monte Carlo method can be used to estimate homogeneous and heterogeneous doses in MammoSite® treatment. The resulting dose differences at various points of interest in the Monte Carlo calculations were presented and compared between different calculation methods. The air pocket doses were found to be underestimated by the treatment planning system. It was concluded that, after correcting for the inverse square law, the underestimation error from the treatment planning system is less than ±2.0% and ±3.5% at the air pocket surface and the air pocket planning target volume, respectively, when compared with Monte Carlo N-Particle (version 5) results. If the skin surface is located close to the air pocket, the underestimation at the air pocket surface and air pocket planning target volume becomes smaller, because the air outside the skin surface reduces the air pocket inhomogeneity effect. In order to maintain the skin dose within tolerance, the skin surface criterion should be taken as the smallest thickness of breast tissue located between the air pocket and the skin surface; this thickness should be at least 5 mm. In conclusion, the air pocket outside the balloon had less than a 10% inhomogeneity effect in the situations studied. It is recommended that at least an inverse square correction be taken into consideration in order to relate clinical outcomes to actual delivered doses to the air pocket and surrounding tissues.

  10. On the use of Bayesian Monte-Carlo in evaluation of nuclear data

    NASA Astrophysics Data System (ADS)

    De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles

    2017-09-01

    As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) with Bayesian statistical inference by comparing theory to experiment. The formal rule of this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can be seen as an estimation of the posterior probability density of a set of parameters (denoted x), knowing prior information on these parameters and a likelihood that gives the probability density function of observing a data set given x. To solve this problem, two major paths can be taken: add approximations and hypotheses to obtain an equation to be solved numerically (minimization of a cost function, or the Generalized Least Squares method, referred to as GLS), or use Monte Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid the approximations present in the traditional adjustment procedure based on chi-square minimization, and they offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal through the resonance to the continuum region, for all nuclear reaction models at these energies. Algorithms based on Monte Carlo sampling and Markov chains will be presented. The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test the effects of the choice of probability density distributions, and to provide a framework for finding the global minimum if several local minima exist. Applications to resolved resonance, unresolved resonance, and continuum evaluations, as well as multigroup cross section data assimilation, will be presented.

  11. SU-E-T-391: Assessment and Elimination of the Angular Dependence of the Response of the NanoDot OSLD System in MV Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehmann, J; University of Sydney, Sydney; RMIT University, Melbourne

    2014-06-01

    Purpose: Assess the angular dependence of the nanoDot OSLD system in MV X-ray beams at depth, and mitigate this dependence for measurements in phantoms. Methods: Measurements for 6 MV photons at 3 cm and 10 cm depth and Monte Carlo simulations were performed. Two special holders were designed which allow a nanoDot dosimeter to be rotated around the center of its sensitive volume (a 5 mm diameter disk). The first holder positions the dosimeter disk perpendicular to the beam (en face); it then rotates until the disk is parallel with the beam (edge on). This is referred to as Setup 1. The second holder positions the disk parallel to the beam (edge on) for all angles (Setup 2). Monte Carlo simulations using GEANT4 considered the detector and housing in detail based on microCT data. Results: An average drop in response of 1.4±0.7% (measurement) and 2.1±0.3% (Monte Carlo) for the 90° orientation compared to 0° was found for Setup 1. Monte Carlo simulations also showed a strong dependence of the effect on the composition of the sensitive layer. Assuming 100% active material (Al₂O₃) results in a 7% drop in response for 90° compared to 0°; assuming the layer to be completely water results in a flat response (within the simulation uncertainty of about 1%). For Setup 2, measurements and Monte Carlo simulations found the angular dependence of the dosimeter to be below 1% and within the measurement uncertainty. Conclusion: The nanoDot dosimeter system exhibits a small angular dependence of approximately 2%. Changing the orientation of the dosimeter so that a coplanar beam arrangement always hits the detector material edge on reduces the angular dependence to within the measurement uncertainty of about 1%. This makes the dosimeter more attractive for phantom-based clinical measurements and audits with multiple coplanar beams. The Australian Clinical Dosimetry Service is a joint initiative between the Australian Department of Health and the Australian Radiation Protection and Nuclear Safety Agency.

  12. A Lattice Kinetic Monte Carlo Solver for First-Principles Microkinetic Trend Studies

    DOE PAGES

    Hoffmann, Max J.; Bligaard, Thomas

    2018-01-22

    Here, mean-field microkinetic models in combination with Brønsted–Evans–Polanyi-like scaling relations have proven highly successful in identifying catalyst materials with good or promising reactivity and selectivity. Analysis of the microkinetic model by means of lattice kinetic Monte Carlo promises a faithful description of a range of atomistic features involving short-range ordering of species in the vicinity of an active site. In this paper, we use the "fruit fly" example reaction of CO oxidation on fcc(111) transition and coinage metals to motivate and develop a lattice kinetic Monte Carlo solver suitable for the numerically challenging case of vastly disparate rate constants. As a result, we show that for the case of infinitely fast diffusion and absence of adsorbate-adsorbate interactions it is, in fact, possible to match the predictions of the mean-field-theory method and the lattice kinetic Monte Carlo method. As a corollary, we conclude that lattice kinetic Monte Carlo simulations of surface chemical reactions are most likely to provide additional insight over mean-field simulations if diffusion limitations or adsorbate–adsorbate interactions have a significant influence on the mixing of the adsorbates.

  13. A path integral methodology for obtaining thermodynamic properties of nonadiabatic systems using Gaussian mixture distributions

    NASA Astrophysics Data System (ADS)

    Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel

    2018-05-01

    We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.

  14. Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo

    DOE PAGES

    McDaniel, Tyler; D’Azevedo, Ed F.; Li, Ying Wai; ...

    2017-11-07

    Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is therefore formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple-rank delayed update scheme. This strategy enables probability evaluation with application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order-of-magnitude improvements in the update time can be obtained on both multi-core CPUs and GPUs.
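
    To make the rank-1 baseline concrete, here is a minimal NumPy sketch of the Sherman-Morrison single-row update the abstract starts from: the determinant ratio for a proposed row replacement (via the matrix determinant lemma) and the O(N^2) inverse update applied on acceptance. The delayed rank-K blocking itself is not reproduced; the matrix, row index, and proposed row are arbitrary test data.

```python
import numpy as np

rng = np.random.default_rng(9)

def row_update_ratio(Ainv, new_row, k):
    """det(A') / det(A) when row k of A is replaced by new_row (determinant lemma)."""
    return new_row @ Ainv[:, k]

def sherman_morrison_row_update(Ainv, A, new_row, k):
    """Rank-1 O(N^2) update of the inverse after accepting the row replacement."""
    ratio = row_update_ratio(Ainv, new_row, k)
    u = new_row - A[k]                              # A' = A + e_k u^T
    return Ainv - np.outer(Ainv[:, k], u @ Ainv) / ratio

n, k = 6, 2
A = rng.normal(size=(n, n))
Ainv = np.linalg.inv(A)
v = rng.normal(size=n)                              # proposed replacement for row k

# Determinant ratio from the update formula vs. brute-force determinants.
A_new = A.copy(); A_new[k] = v
print(row_update_ratio(Ainv, v, k), np.linalg.det(A_new) / np.linalg.det(A))
# Updated inverse matches a fresh inversion of the modified matrix.
print(np.allclose(sherman_morrison_row_update(Ainv, A, v, k), np.linalg.inv(A_new)))
```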

  15. Monte Carlo: in the beginning and some great expectations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metropolis, N.

    1985-01-01

    The central theme will be on the historical setting and origins of the Monte Carlo Method. The scene was the post-war Los Alamos Scientific Laboratory. There was an inevitability about the Monte Carlo Event: the ENIAC had recently enjoyed its meteoric rise (on a classified Los Alamos problem); Stan Ulam had returned to Los Alamos; John von Neumann was a frequent visitor. Techniques, algorithms, and applications developed rapidly at Los Alamos. Soon, the fascination of the Method reached wider horizons. The first paper was submitted for publication in the spring of 1949. In the summer of 1949, the first open conference was held at the University of California at Los Angeles. Of some interest perhaps is an account of Fermi's earlier, independent application in neutron moderation studies while at the University of Rome. The quantum leap expected with the advent of massively parallel processors will provide stimuli for very ambitious applications of the Monte Carlo Method in disciplines ranging from field theories to cosmology, including more realistic models in the neurosciences. A structure of multi-instruction sets for parallel processing is ideally suited for the Monte Carlo approach. One may even hope for a modest hardening of the soft sciences.

  16. Calculating Potential Energy Curves with Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Powell, Andrew D.; Dawes, Richard

    2014-06-01

    Quantum Monte Carlo (QMC) is a computational technique that can be applied to the electronic Schrödinger equation for molecules. QMC methods such as Variational Monte Carlo (VMC) and Diffusion Monte Carlo (DMC) have demonstrated the capability of capturing large fractions of the correlation energy, thus suggesting their possible use for high-accuracy quantum chemistry calculations. QMC methods scale particularly well with respect to parallelization making them an attractive consideration in anticipation of next-generation computing architectures which will involve massive parallelization with millions of cores. Due to the statistical nature of the approach, in contrast to standard quantum chemistry methods, uncertainties (error-bars) are associated with each calculated energy. This study focuses on the cost, feasibility and practical application of calculating potential energy curves for small molecules with QMC methods. Trial wave functions were constructed with the multi-configurational self-consistent field (MCSCF) method from GAMESS-US.[1] The CASINO Monte Carlo quantum chemistry package [2] was used for all of the DMC calculations. An overview of our progress in this direction will be given. References: M. W. Schmidt et al. J. Comput. Chem. 14, 1347 (1993). R. J. Needs et al. J. Phys.: Condensed Matter 22, 023201 (2010).

  17. Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDaniel, Tyler; D’Azevedo, Ed F.; Li, Ying Wai

    Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is therefore formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple-rank delayed update scheme. This strategy enables probability evaluation with application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order-of-magnitude improvements in the update time can be obtained on both multi-core CPUs and GPUs.

  18. A Lattice Kinetic Monte Carlo Solver for First-Principles Microkinetic Trend Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffmann, Max J.; Bligaard, Thomas

    Here, mean-field microkinetic models in combination with Brønsted–Evans–Polanyi-like scaling relations have proven highly successful in identifying catalyst materials with good or promising reactivity and selectivity. Analysis of the microkinetic model by means of lattice kinetic Monte Carlo promises a faithful description of a range of atomistic features involving short-range ordering of species in the vicinity of an active site. In this paper, we use the "fruit fly" example reaction of CO oxidation on fcc(111) transition and coinage metals to motivate and develop a lattice kinetic Monte Carlo solver suitable for the numerically challenging case of vastly disparate rate constants. As a result, we show that for the case of infinitely fast diffusion and absence of adsorbate-adsorbate interactions it is, in fact, possible to match the predictions of the mean-field-theory method and the lattice kinetic Monte Carlo method. As a corollary, we conclude that lattice kinetic Monte Carlo simulations of surface chemical reactions are most likely to provide additional insight over mean-field simulations if diffusion limitations or adsorbate–adsorbate interactions have a significant influence on the mixing of the adsorbates.

  19. Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo.

    PubMed

    McDaniel, T; D'Azevedo, E F; Li, Y W; Wong, K; Kent, P R C

    2017-11-07

    Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.

  20. Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    McDaniel, T.; D'Azevedo, E. F.; Li, Y. W.; Wong, K.; Kent, P. R. C.

    2017-11-01

    Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.
