
1

Monte Carlo simulation of X-ray spectra and evaluation of filter effect using MCNP4C and FLUKA code

The general-purpose MCNP4C and FLUKA codes were used to simulate X-ray spectra. Electrons were transported until they slowed down and stopped in the target. Both bremsstrahlung and characteristic X-ray production were considered in this work. A tungsten/aluminum combination was used as target/filter in the simulation. The results of the two codes were generated at 80, 100, 120 and 140 kV and compared.
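The filter effect evaluated in this entry can be illustrated with a minimal Beer-Lambert attenuation sketch; the spectrum and the aluminum attenuation coefficients below are rough illustrative placeholders, not measured or tabulated data:

```python
import numpy as np

# Hypothetical unfiltered spectrum: photon counts in a few energy bins (keV).
energies_keV = np.array([20.0, 40.0, 60.0, 80.0])
counts = np.array([1.0e6, 8.0e5, 5.0e5, 2.0e5])

# Illustrative mass attenuation coefficients for aluminum (cm^2/g);
# rough order-of-magnitude placeholders, not tabulated data.
mu_rho_al = np.array([3.4, 0.57, 0.28, 0.20])
rho_al = 2.70        # aluminum density, g/cm^3
thickness_cm = 0.25  # a 2.5 mm Al filter

# Beer-Lambert attenuation of each spectral bin through the filter:
# low energies are attenuated more, so the filtered spectrum hardens.
filtered = counts * np.exp(-mu_rho_al * rho_al * thickness_cm)
transmission = filtered / counts
```

Because the attenuation coefficient falls with energy, the transmission rises with energy, which is the spectral hardening that such filter studies quantify.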

R. Taleei; M. Shahriari

2009-01-01

2

NASA Astrophysics Data System (ADS)

The energy dependence of the radiochromic film (RCF) response to beta-emitting sources was studied by theoretical dose calculations, employing the MCNP4C and EGSnrc/BEAMnrc Monte Carlo codes. Irradiations with virtual monochromatic electron sources, electron and photon clinical beams, a 32P intravascular brachytherapy (IVB) source and other beta-emitting radioisotopes (188Re, 90Y, 90Sr/90Y, 32P) were simulated. The MD-55-2 and HS radiochromic films (RCFs) were considered, in a planar or cylindrical irradiation geometry, with water or polystyrene as the surrounding medium. For virtual monochromatic sources, a monotonic decrease with energy of the dose absorbed by the film, relative to that absorbed by the surrounding medium, was evidenced. Considering the IVB 32P source and the MD-55-2 in a cylindrical geometry, calibration with a 6 MeV electron beam would yield dose underestimations from 14 to 23%, increasing with the source-to-film radial distance from 1 to 6 mm. For the planar beta-emitting sources in water, calibrations with photon or electron clinical beams would yield dose underestimations between 5 and 12%. Calibrating the RCF with 90Sr/90Y, the MD-55-2 would yield dose underestimations between 3 and 5% for 32P and discrepancies within ±2% for 188Re and 90Y, whereas for the HS the dose underestimation would reach 4% with 188Re and 6% with 32P.

Pacilio, M.; Aragno, D.; Rauco, R.; D'Onofrio, S.; Pressello, M. C.; Bianciardi, L.; Santini, E.

2007-07-01

3

Calculation of the store house worker dose in a lost wax foundry using MCNP-4C.

Lost wax casting is an industrial process that transforms models made in wax into metal. The wax model is covered with a siliceous shell of the required thickness; once this shell is built, the set is heated and the wax is melted out. Liquid metal is then cast into the shell, replacing the wax. When the metal is cool, the shell is broken away in order to recover the metallic piece. In this process zircon sands are used for the preparation of the siliceous shell. These sands have varying concentrations of natural radionuclides: 238U, 232Th and 235U, together with their progeny. The zircon sand is distributed in bags of 50 kg, with 30 bags on a pallet weighing 1,500 kg. The pallets with the bags have dimensions 80 cm x 120 cm x 80 cm and constitute the radiation source in this case. The only pathway of exposure to workers in the store house is external radiation: there is no dust because the bags are closed and covered by plastic; the store house has a good ventilation rate, so radon accumulation is not possible; and the workers do not touch the bags with their hands, so skin contamination will not take place. In this study all situations of external irradiation of the workers have been considered: transportation of the pallets from vehicle to store house, lifting the pallets onto the shelf, resting of the stock on the shelf, getting the pallets down, and carrying the pallets to the production area. These exposure situations have been simulated using MCNP-4C, considering that the source has a homogeneous composition, that the minimum stock in the store house is seven pallets, and the several distances between pallets and workers when they are at work. The photon flux obtained from MCNP-4C is multiplied by the flux-to-air-kerma conversion factor, by the kerma-to-effective-dose conversion factor, and by the number of emitted photons; these conversion factors are taken from ICRP 74, Tables 1 and 17, respectively. 
In this way a function giving the dose rate around the source is obtained. PMID:16604600
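The dose-rate chain described in this abstract (tally flux × flux-to-air-kerma factor × kerma-to-effective-dose factor × emission rate) can be sketched numerically; every value below is a hypothetical placeholder, not an MCNP tally result or an ICRP 74 coefficient:

```python
# All values are hypothetical placeholders illustrating the dose-rate chain;
# real conversion factors come from ICRP 74 Tables 1 and 17.
flux_per_source_photon = 2.0e-6   # photons/cm^2 per emitted photon (MCNP tally)
flux_to_air_kerma = 5.0e-12       # Gy*cm^2 per photon
kerma_to_effective_dose = 1.0     # Sv/Gy
photons_per_second = 3.0e7        # source emission rate

# Chain the factors to get an effective dose rate at the tally location.
dose_rate_sv_per_s = (flux_per_source_photon * flux_to_air_kerma
                      * kerma_to_effective_dose * photons_per_second)
dose_rate_usv_per_h = dose_rate_sv_per_s * 3600.0 * 1.0e6
```

Repeating this chain for tallies at several worker positions yields the dose-rate-around-the-source function the abstract describes.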

Alegría, Natalia; Legarda, Fernando; Herranz, Margarita; Idoeta, Raquel

2005-01-01

4

Determination of βeff using MCNP-4C2 and application to the CROCUS and PROTEUS reactors

A new Monte Carlo method for the determination of βeff has been recently developed and tested using appropriate models of the experimental reactors CROCUS and PROTEUS. The current paper describes the applied methodology and highlights the resulting improvements compared to the simplest MCNP approach, i.e. the 'prompt method' technique. In addition, the flexibility advantages of the developed method are presented. Specifically, the possibility to obtain the effective delayed neutron fraction βeff per delayed neutron group, per fissioning nuclide and per reactor region is illustrated. Finally, the MCNP predictions of βeff are compared to the results of deterministic calculations. (authors)

Vollaire, J. [European Organization for Nuclear Research CERN, CH-1211 Geneve 23 (Switzerland); Plaschy, M.; Jatuff, F. [Paul Scherrer Institut PSI, CH-5232 Villigen PSI (Switzerland); Chawla, R. [Paul Scherrer Institut PSI, CH-5232 Villigen PSI (Switzerland); Ecole Polytechnique Federale de Lausanne EPFL, CH-1015 Lausanne (Switzerland)

2006-07-01

5

Monte Carlo methods Sequential Monte Carlo

Lecture slides by A. Doucet (MLSS, Carcans, September 2011; 85 slides) on sequential Monte Carlo. The generic problem considered is a sequence of probability distributions.

Doucet, Arnaud

6

In this chapter we discuss Monte Carlo sampling methods for solving large scale stochastic programming problems. We concentrate on the “exterior” approach where a random sample is generated outside of an optimization procedure, and then the constructed, so-called sample average approximation (SAA), problem is solved by an appropriate deterministic algorithm. We study statistical properties of the obtained SAA estimators. The

Alexander Shapiro

2003-01-01

7

Monte Carlo methods Monte Carlo Principle and MCMC

Lecture slides by A. Doucet (MLSS, Carcans, September 2011; 91 slides). Overview of the lectures: 1. Monte Carlo principles; 2. Markov chain Monte Carlo.

Doucet, Arnaud

8

Shell model Monte Carlo methods

We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, thermal behavior of γ-soft nuclei, and calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.

Koonin, S.E. [California Inst. of Tech., Pasadena, CA (United States). W.K. Kellogg Radiation Lab.; Dean, D.J. [Oak Ridge National Lab., TN (United States)

1996-10-01

9

Monte Carlo Methods for Inference and Learning

Guest lecture by Ryan Adams for CSC 2535 (http://www.cs.toronto.edu/~rpa). Overview: Monte Carlo basics; rejection and importance sampling; Markov chain Monte Carlo; Metropolis-Hastings and Gibbs sampling; slice sampling; Hamiltonian Monte Carlo.
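The rejection sampling listed in this lecture overview can be sketched in a few lines; the target density p(x) = 2x on [0, 1] is an illustrative choice:

```python
import random

random.seed(0)

def rejection_sample(n):
    """Draw n samples from p(x) = 2x on [0, 1] with a uniform proposal.

    The envelope is M * q(x) with q uniform and M = 2, so a proposal x
    is accepted with probability p(x) / (M * q(x)) = x.
    """
    samples = []
    while len(samples) < n:
        x = random.random()
        if random.random() < x:   # accept with probability p(x) / (M q(x))
            samples.append(x)
    return samples

xs = rejection_sample(20000)
mean = sum(xs) / len(xs)          # true mean of p(x) = 2x is 2/3
```

The empirical mean converges to 2/3 as the sample size grows, illustrating the basic accept/reject mechanism the lecture covers.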

Hinton, Geoffrey E.

10

Advanced Monte Carlo Methods: American Options

Lecture slides by Prof. Mike Giles (mike.giles@maths.ox.ac.uk), Oxford. A key problem for Monte Carlo methods is the accurate and efficient pricing of options with optional early exercise; American options work backwards in time, which doesn't fit well with Monte Carlo methods, which go forwards in time.

Giles, Mike

11

Advanced Monte Carlo Methods: American Options

Lecture slides by Prof. Mike Giles (mike.giles@maths.ox.ac.uk), Oxford. The accurate and efficient pricing of options with optional early exercise is a key problem for Monte Carlo methods; working backwards in time doesn't fit well with Monte Carlo methods, which go forwards in time.

Giles, Mike

12

Monte Carlo and Quasi-Monte Carlo Methods 2008

Monte Carlo and Quasi-Monte Carlo Methods 2008, Pierre L'Ecuyer and Art B. Owen, editors (lecuyer@iro.umontreal.ca; Art B. Owen, Department of Statistics, Stanford University, Sequoia Hall, Stanford, CA 94305, USA, owen@stanford.edu). ISBN 978-3-642-04106-8, DOI 10.1007/978-3-642-04107-5, e-ISBN 978

L'Ecuyer, Pierre

13

Population Monte Carlo Methods

Lecture slides by Christian P. Robert (Université Paris Dauphine), OFPR/CREST, May 5, 2003. Opens with a benchmark example showing that even simple models may lead

Robert, Christian P.

14

Monte Carlo Methods in Statistics Christian Robert

Christian Robert (Université Paris Dauphine and CREST, INSEE), September 2, 2009. Monte Carlo methods are now an essential part of the statistician's toolbox. We recall in this note some of the advances made in the design of Monte Carlo techniques.

Boyer, Edmond

15

MONTE CARLO METHOD AND SENSITIVITY ESTIMATIONS

A. de Lataillade, S. Blanco, Y. Clergent [et al.]. Sensitivity estimations are addressed on a formal basis, and simple radiative transfer examples are used for illustration. Key words: Monte Carlo. Submitted to Elsevier Science, 18 February 2002. 1 Introduction: Monte Carlo methods are commonly used

Dufresne, Jean-Louis

16

NASA Astrophysics Data System (ADS)

The neutron transport code Monte Carlo N-Particle (MCNP), well known as the gold standard in predicting nuclear reactions, was used to model the small nuclear reactor core called "U-batteryTM", which was developed by the University of Manchester and Delft Institute of Technology. The paper introduces the concept of modeling this small reactor core, a high temperature reactor (HTR) type with small coated TRISO fuel particles in a graphite matrix, using the MCNPv4C software. The criticality of the core was calculated using the software and analysed by changing key parameters such as coolant type, fuel type and enrichment levels, cladding materials, and control rod type. The criticality results from the simulation were validated against the SCALE 5.1 results of [1] M Ding and J L Kloosterman, 2010. The data produced from these analyses would be used as part of the process of proposing an initial core layout and a provisional list of materials for the newly designed reactor core. In the future, the criticality study would be continued with different core configurations and geometries.

Pauzi, A. M.

2013-06-01

17

Monte Carlo Methods in Reactor Physics

Two approaches exist for particle transport simulation in reactor physics: deterministic and statistical (Monte Carlo). The Monte Carlo and deterministic approaches are compared, and their advantages and disadvantages are discussed. Different issues related to Monte Carlo simulations for solving different types of problems are then described, along with methods to resolve some of them; these include variance-reduction techniques, automated variance-reduction tools, and parallel computing. A few examples drawn from real-life problems are then presented. In the author's opinion, there are effective variance-reduction techniques and automation tools for fixed-source simulations. This, however, is not true for Monte Carlo eigenvalue calculations. The needs in this area are the development of methods for determining a ''good'' starting source and of variance-reduction methods for effective sampling of source energies and regions. This is especially important because of emerging new applications, including Monte Carlo depletion in general; Generation IV reactor design, which may involve irregular geometries and novel concepts; design and analyses for plutonium disposition; spent-fuel storage; radioactive waste disposal; and criticality safety evaluation of nuclear material handling facilities. The author believes that to make Monte Carlo methods more effective and reliable, the use of deterministic methods is a must.

Haghighat, Alireza

2001-06-17

18

MONTE CARLO METHODS IN GEOPHYSICAL INVERSE Malcolm Sambridge

Malcolm Sambridge, Research School of Earth Sciences (revised 15 December 2001). The paper traces the development and application of Monte Carlo methods to inverse problems encountered in exploration seismology.

Sambridge, Malcolm

19

Monte Carlo methods for fissured porous media: gridless approaches

Antoine Lejay (Projet OMEGA, INRIA / Institut Élie Cartan, Nancy). Abstract: in this article, we present two Monte Carlo methods [...]. Published in Monte Carlo Methods and Applications, Proc. of the IV IMACS Seminar on Monte Carlo Methods.

Paris-Sud XI, UniversitÃ© de

20

The Monte Carlo Method and Software Reliability Theory

Brian Korver (briank@cs.pdx.edu), TR 94-1, February 18, 1994. Abstract: the Monte Carlo method of reliability prediction is useful when the system calls for valid, nontrivial input data and an external oracle.

Pratt, Vaughan

21

Diffusion Monte Carlo Method on Curved Manifolds

We present a stochastic approach to solving the many-body Schrödinger equation on curved manifolds with general metric. The method is based on the Diffusion Monte Carlo (DMC) technique, modified to include `quantum corrections' into the propagator which appear due to the curvature. As an illustration of our method we apply it to the quantum Hall effect, using the spherical geometry

V. Melik-Alaverdian; N. E. Bonesteel; G. Ortiz

1997-01-01

22

Optimizing Efficiency of Perturbative Monte Carlo Method

Tom J. Evans, Thanh N. Truong. The approach introduced earlier provides a means to reduce the number of full SCF calculations in simulations by allowing displacements to occur between performances of the full self-consistent field calculations.

Truong, Thanh N.

23

Monte Carlo simulation of core physics parameters of the Nigeria Research Reactor1 (NIRR-1)

The Monte Carlo N-Particle (MCNP) code, version 4C (MCNP4C) and a set of neutron cross-section data were used to develop an accurate three-dimensional computational model of the Nigeria Research Reactor-1 (NIRR-1). The geometry of the reactor core was modeled as closely as possible including the details of all the fuel elements, reactivity regulators, the control rod, all irradiation channels, and

S. A. Jonah; J. R. Liaw; J. E. Matos

2007-01-01

24

A Monte Carlo method for solving unsteady adjoint equations

Qiqi Wang, David Gleich, Amin [...]. The method builds on this technique and uses a Monte Carlo linear solver, which yields a forward-time algorithm; the Monte Carlo approach is faster for a large class of problems while preserving sufficient

Wang, Qiqi

25

Sequential Monte Carlo Methods for Statistical Analysis of Tables

Yuguo Chen, Persi Diaconis, Susan [...]. Our method produces Monte Carlo samples that are remarkably close to the uniform distribution. Our method compares favorably with other existing Monte Carlo-based algorithms, and sometimes

Liu, Jun

26

Advanced Monte Carlo Methods: IV Prof. Mike Giles

Lecture slides by Prof. Mike Giles (mike.giles@maths.ox.ac.uk), Oxford University Mathematical Institute. Early exercise is perhaps the biggest challenge for Monte Carlo methods, which go forwards in time.

Giles, Mike

27

Monte Carlo Methods: A Computational Pattern for Our Pattern Language

Jike Chong (University of California, Berkeley) and Kurt Keutzer (University of California, Berkeley; keutzer@eecs.berkeley.edu). Abstract: this paper presents the Monte Carlo Methods software programming pattern.

California at Berkeley, University of

28

MONTE CARLO ANALYSIS: ESTIMATING GPP WITH THE CANOPY CONDUCTANCE METHOD

1. Overview: a Monte Carlo analysis was performed to investigate the power of the statistical approach behind a novel method for estimating GPP with the canopy conductance method. Assumptions: the Monte Carlo analysis was performed as follows. Natural variation: the only study to date

DeLucia, Evan H.

29

MONTE CARLO METHODS IN GEOPHYSICAL INVERSE Malcolm Sambridge

Malcolm Sambridge, Research School of Earth Sciences, 2002. [1] Monte Carlo inversion techniques were first used by Earth scientists more than 30 years ago in exploration seismology. This paper traces the development and application of Monte Carlo methods for inverse problems.

Sambridge, Malcolm

30

Monte Carlo methods: Application to hydrogen gas and hard spheres

Quantum Monte Carlo (QMC) methods are among the most accurate for computing ground state properties of quantum systems. The two major types of QMC we use are Variational Monte Carlo (VMC), which evaluates integrals arising from the variational principle, and Diffusion Monte Carlo (DMC), which stochastically projects to the ground state from a trial wave function. These methods are applied
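A minimal variational Monte Carlo sketch in the spirit of this entry, assuming a hydrogen atom in atomic units with the trial wavefunction psi = exp(-alpha*r); at alpha = 1 the local energy is exactly -1/2 hartree everywhere, so the estimate is exact regardless of sampling noise:

```python
import math
import random

random.seed(1)

alpha = 1.0  # variational parameter; alpha = 1 is the exact ground state

def local_energy(r):
    # For psi = exp(-alpha*r): E_L = -alpha**2/2 + (alpha - 1)/r  (hartree)
    return -0.5 * alpha ** 2 + (alpha - 1.0) / r

def vmc(steps=5000, delta=0.5):
    """Metropolis sampling of |psi|^2, averaging the local energy."""
    x, y, z = 1.0, 0.0, 0.0
    total = 0.0
    for _ in range(steps):
        xn = x + random.uniform(-delta, delta)
        yn = y + random.uniform(-delta, delta)
        zn = z + random.uniform(-delta, delta)
        r_old = math.sqrt(x * x + y * y + z * z)
        r_new = math.sqrt(xn * xn + yn * yn + zn * zn)
        # Accept with probability |psi(new)/psi(old)|**2.
        if random.random() < math.exp(-2.0 * alpha * (r_new - r_old)):
            x, y, z = xn, yn, zn
        total += local_energy(math.sqrt(x * x + y * y + z * z))
    return total / steps

energy = vmc()  # -0.5 hartree at alpha = 1
```

For alpha != 1 the same loop yields the variational energy of the trial state, which is the integral the variational principle asks for.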

Mark Douglas Dewing

2001-01-01

31

MCMs: Early History and The Basics Monte Carlo Methods

Lecture slides by Prof. Michael Mascagni on the early history of probability theory and Monte Carlo methods ("The Stars Align at Los Alamos"). On the name: Ulam's uncle would borrow money from the family by saying that "I just have to go to Monte Carlo."

Mascagni, Michael

32

A Comparison of Monte-Carlo Methods for Phantom Go

Throughout recent years, Monte-Carlo methods have considerably improved computer Go programs. In particular, Monte-Carlo Tree Search algorithms such as UCT have enabled significant advances in this domain. Phantom Go is a variant of Go which is complicated by the condition of imperfect information. This article compares four Monte-Carlo methods for Phantom Go in a self-play experiment: (1) Monte-Carlo
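The UCT algorithm mentioned here selects children with the UCB1 rule; a minimal sketch, where the exploration constant c = 1.4 and the toy visit counts are illustrative choices:

```python
import math

def uct_select(children, total_visits, c=1.4):
    """children: list of (wins, visits); returns index of the UCB1-best child.

    UCB1 score = mean value + c * sqrt(ln(total_visits) / visits).
    All children are assumed to have been visited at least once.
    """
    def score(child):
        wins, visits = child
        return wins / visits + c * math.sqrt(math.log(total_visits) / visits)
    return max(range(len(children)), key=lambda i: score(children[i]))

# A rarely visited child can win selection despite a lower mean value:
# child 0 has mean 0.60 over 100 visits, child 1 has mean 0.375 over 8 visits.
children = [(60, 100), (3, 8)]
choice = uct_select(children, total_visits=108)
```

The exploration bonus shrinks as a child accumulates visits, which is how UCT balances trying under-explored moves against exploiting strong ones.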

Joris Borsboom; Jahn-Takeshi Saito; Guillaume Chaslot; Jos W. H. M. Uiterwijk

33

Methods for Monte Carlo simulations of biomacromolecules

The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse graining strategies. PMID:20428473

Vitalis, Andreas; Pappu, Rohit V.

2010-01-01

34

4 Monte Carlo Methods in Classical Statistical Physics

Wolfhard Janke, Institut für Theoretische Physik. Covers update algorithms (Metropolis, heat-bath, Glauber) and methods for the statistical analysis of the thus generated data. Monte Carlo Methods in Classical Statistical Physics, Lect. Notes Phys. 739, 79-140 (2008), DOI 10
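The Metropolis update algorithm named in this entry can be sketched for a 2D Ising model; the lattice size, temperature, and sweep count below are arbitrary illustrative choices:

```python
import math
import random

random.seed(2)

# Metropolis single-spin-flip updates for a 2D Ising model (J = 1).
L = 16
beta = 0.3                      # inverse temperature
spins = [[1] * L for _ in range(L)]

def sweep():
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        # Sum of the four nearest neighbours with periodic boundaries.
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nn   # energy change if this spin is flipped
        # Metropolis acceptance: always if dE <= 0, else with exp(-beta*dE).
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]

for _ in range(100):
    sweep()

magnetization = abs(sum(sum(row) for row in spins)) / (L * L)
```

Heat-bath and Glauber updates differ only in the acceptance rule; the sweep structure and observable measurement stay the same.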

Janke, Wolfhard

35

Markov Chain Monte Carlo Methods for Statistical Inference

SUMMARY These notes provide an introduction to Markov chain Monte Carlo methods and their applications to both Bayesian and frequentist statistical inference. Such methods have revolutionized what can be achieved computationally, especially in the Bayesian paradigm. The account begins by discussing ordinary Monte Carlo methods: these have the same goals as the Markov chain versions but can only

Julian Besag

36

A Particle Population Control Method for Dynamic Monte Carlo

NASA Astrophysics Data System (ADS)

A general particle population control method has been derived from splitting and Russian Roulette for dynamic Monte Carlo particle transport. A well-known particle population control method, known as the particle population comb, has been shown to be a special case of this general method. This general method has been incorporated in Los Alamos National Laboratory's Monte Carlo Application Toolkit (MCATK) and examples of its use are shown for both super-critical and sub-critical systems.
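The splitting and Russian roulette ingredients from which the paper derives its general method can be sketched against a target weight; this is an illustration of the basic idea, not the MCATK comb algorithm itself:

```python
import random

random.seed(3)

def weight_control(particles, w0=1.0):
    """Split heavy particles and roulette light ones against target weight w0.

    Preserves total weight in expectation and leaves every survivor at w0.
    (Illustrative sketch; not the MCATK particle-population comb.)
    """
    out = []
    for w in particles:
        if w > w0:
            n = int(w / w0)
            frac = w / w0 - n
            # n certain copies, plus one more with probability frac.
            copies = n + (1 if random.random() < frac else 0)
            out.extend([w0] * copies)
        elif random.random() < w / w0:
            # Russian roulette: survive with probability w/w0 at weight w0.
            out.append(w0)
    return out

population = [random.uniform(0.05, 3.0) for _ in range(20000)]
controlled = weight_control(population)
```

In a dynamic transport code such a step keeps the particle count bounded while leaving tally expectations unbiased.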

Sweezy, Jeremy; Nolen, Steve; Adams, Terry; Zukaitis, Anthony

2014-06-01

37

Vectorized Monte Carlo methods for reactor lattice analysis

NASA Technical Reports Server (NTRS)

Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

Brown, F. B.

1984-01-01

38

Quantum Monte Carlo method for attractive Coulomb potentials

Starting from an exact lower bound on the imaginary-time propagator, we present a Path-Integral Quantum Monte Carlo method that can handle singular attractive potentials. We illustrate the basic ideas of this Quantum Monte Carlo algorithm by simulating the ground state of hydrogen and helium.

J. S. Kole; H. De Raedt

2001-01-01

39

Quantum Monte Carlo method for attractive Coulomb potentials

Starting from an exact lower bound on the imaginary-time propagator, we present a path-integral quantum Monte Carlo method that can handle singular attractive potentials. We illustrate the basic ideas of this quantum Monte Carlo algorithm by simulating the ground state of hydrogen and helium.

J. S. Kole; H. de Raedt

2001-01-01

40

Quantum Monte Carlo Methods for Strongly Correlated Electron Systems

We review some of the recent developments in quantum Monte Carlo (QMC) methods for models of strongly correlated electron systems. QMC is a promising general theoretical tool to study many-body systems, and has been widely applied in areas spanning condensed-matter, high-energy, and nuclear physics. Recent progress has included two new methods, the ground-state and finite-temperature constrained path Monte Carlo methods.

Shiwei Zhang

41

Computational Physics Resources: Basic Monte Carlo Methods

NSDL National Science Digital Library

This website contains a set of 7 simulations and accompanying worksheets that introduce a number of basic Monte Carlo techniques (e.g. generating and testing random sequences, simulating random walks and radioactive decay, and sampling according to a given distribution).
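One of the listed exercises, simulating radioactive decay, can be sketched as a binomial-thinning Monte Carlo compared against the exponential survival law; the decay probability and population size below are illustrative:

```python
import random

random.seed(4)

# Each of N0 atoms decays in a time step with probability p; the surviving
# population should track the deterministic law N0 * (1 - p)**t.
N0 = 100000
p = 0.1
steps = 10

n = N0
history = [n]
for _ in range(steps):
    # Count the survivors of this step one atom at a time.
    n = sum(1 for _ in range(n) if random.random() >= p)
    history.append(n)

expected = N0 * (1 - p) ** steps
```

Plotting `history` against the expected curve shows the characteristic statistical scatter shrinking (relatively) as the population grows, which is the point of the exercise.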

Wheaton, Spencer

2014-01-18

42

Low variance methods for Monte Carlo simulation of phonon transport

Computational studies in kinetic transport are of great use in micro and nanotechnologies. In this work, we focus on Monte Carlo methods for phonon transport, intended for studies in microscale heat transfer. After reviewing ...

Péraud, Jean-Philippe M. (Jean-Philippe Michel)

2011-01-01

43

Monte Carlo methods and applications in nuclear physics

Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon-interaction, charge and magnetic form factors, the coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.

Carlson, J.

1990-01-01

44

Monte Carlo Methods in Statistical Mechanics: Foundations and New Algorithms

Introduction: The goal of these lectures is to give an introduction to current research on Monte Carlo methods in statistical mechanics and quantum field theory, with an emphasis on: 1) the conceptual foundations of the method, including the possible dangers and misuses, and the correct use of statistical error analysis; and 2) new Monte Carlo algorithms for problems in critical phenomena and quantum field theory, aimed

Alan D. Sokal

1996-01-01

45

Efficient Block Sampling Strategies for Sequential Monte Carlo Methods

Sequential Monte Carlo (SMC) methods are a powerful set of simulation-based techniques for sampling sequentially from a sequence of complex probability distributions. These methods rely on a combination of importance sampling and resampling techniques. In a Markov chain Monte Carlo (MCMC) framework, block sampling strategies often perform much better than algorithms based on one-at-a-time sampling strategies if
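The resampling step that SMC methods rely on can be sketched with systematic resampling, one common scheme (the choice of systematic resampling here is illustrative; the paper itself concerns block sampling strategies):

```python
import random

random.seed(5)

def systematic_resample(weights):
    """Map normalized weights to an equally weighted set of particle indices.

    One uniform draw u places n evenly spaced pointers (u + i)/n over the
    cumulative weights; a single forward scan then assigns indices.
    """
    n = len(weights)
    u = random.random()
    positions = [(u + i) / n for i in range(n)]
    indices = []
    cumulative = 0.0
    i = 0
    for w_index, w in enumerate(weights):
        cumulative += w
        while i < n and positions[i] < cumulative:
            indices.append(w_index)
            i += 1
    return indices

weights = [0.1, 0.2, 0.3, 0.4]
idx = systematic_resample(weights)   # heavier weights appear more often
```

Systematic resampling needs only one random draw per step and has lower variance than multinomial resampling, which is why it is a common default in particle filters.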

Arnaud Doucet; Mark Briers; Stéphane Sénécal

2006-01-01

46

Application of biasing techniques to the contributon Monte Carlo method

Recently, a new Monte Carlo method, called the contributon Monte Carlo method, was developed. The method is based on the theory of contributons and uses a new recipe for estimating target responses by a volume integral over the contributon current. The analog features of the new method were discussed in previous publications. The application of some biasing methods to the new contributon scheme is examined here. A theoretical model is developed that enables an analytic prediction of the benefit to be expected when these biasing schemes are applied to both the contributon method and regular Monte Carlo. This model is verified by a variety of numerical experiments and is shown to yield satisfying results, especially for deep-penetration problems. Other considerations regarding the efficient use of the new method are also discussed, and remarks are made as to the application of other biasing methods. 14 figures, 1 table.

Dubi, A.; Gerstl, S.A.W.

1980-01-01

47

Purpose: The accurate prediction of x-ray spectra under typical conditions encountered in clinical x-ray examination procedures and the assessment of factors influencing them has been a long-standing goal of the diagnostic radiology and medical physics communities. In this work, the influence of anode surface roughness on diagnostic x-ray spectra is evaluated using MCNP4C-based Monte Carlo simulations. Methods: An image-based modeling method was used to create realistic models from surface-cracked anodes. An in-house computer program was written to model the geometric pattern of cracks and irregularities from digital images of the focal track surface in order to define the modeled anodes in the MCNP input file. To incorporate average roughness and mean crack depth into the models, the surface of the anodes was characterized by scanning electron microscopy and surface profilometry. It was found that the average roughness (Ra) in the most aged tube studied is about 50 µm. The correctness of MCNP4C in simulating diagnostic x-ray spectra was thoroughly verified by calling its Gaussian energy broadening card and comparing the simulated spectra with experimentally measured ones. The assessment of anode roughness involved the comparison of simulated spectra from deteriorated anodes with those simulated for perfectly plain anodes, considered as reference. From these comparisons, the variations in output intensity, half value layer (HVL), heel effect, and patient dose were studied. Results: An intensity loss of 4.5% and 16.8% was predicted for anodes aged by 5 and 50 µm deep cracks, respectively (50 kVp, 6 deg. target angle, and 2.5 mm Al total filtration). The variations in HVL were not significant, as the spectra were not hardened by more than 2.5%; however, the trend was for this variation to increase with roughness. By deploying several point detector tallies along the anode-cathode direction and averaging exposure over them, it was found that for a 6 deg. anode roughened by 50 µm deep cracks, the reduction in exposure is 14.9% and 13.1% for 70 and 120 kVp tube voltages, respectively. For the evaluation of patient dose, entrance skin radiation dose was calculated for typical chest x-ray examinations. It was shown that as anode roughness increases, patient entrance skin dose decreases on average by about 15%. Conclusions: The anode surface roughness can have a non-negligible effect on output spectra in aged x-ray imaging tubes, and its impact should be carefully considered in diagnostic x-ray imaging modalities.

Mehranian, A.; Ay, M. R.; Alam, N. Riyahi; Zaidi, H. [Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, P.O. Box 14155-6447, Tehran (Iran, Islamic Republic of) and Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, P.O. Box 14185-615, Tehran (Iran, Islamic Republic of); Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, P.O. Box 14155-6447, Tehran (Iran, Islamic Republic of); Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, P.O. Box 14185-615, Tehran (Iran, Islamic Republic of) and Research Institute for Nuclear Medicine, Tehran University of Medical Sciences, P.O. Box 14155-6447, Tehran (Iran, Islamic Republic of); Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, P.O. Box 14155-6447, Tehran (Iran, Islamic Republic of); Division of Nuclear Medicine, Geneva University Hospital, CH-1211 Geneva (Switzerland) and Geneva Neuroscience Center, Geneva University, CH-1205 Geneva (Switzerland)

2010-02-15

48

Instantons and Monte Carlo Methods in Quantum Mechanics

In these lectures we describe the use of Monte Carlo simulations in understanding the role of tunneling events, instantons, in a quantum mechanical toy model. We study, in particular, a variety of methods that have been used in the QCD context, such as Monte Carlo simulations of the partition function, cooling and heating, the random and interacting instanton liquid model, and numerical simulations of non-Gaussian corrections to the semi-classical approximation.

Thomas Schaefer

2004-11-08

49

An assessment of the MCNP4C weight window

A new, enhanced weight window generator suite has been developed for MCNP version 4C. The new generator correctly estimates importances in either a user-specified, geometry-independent, orthogonal grid or in MCNP geometric cells. The geometry-independent option alleviates the need to subdivide the MCNP cell geometry for variance reduction purposes. In addition, the new suite corrects several pathologies in the existing MCNP weight window generator. The new generator is applied in a set of five variance reduction problems. The improved generator is compared with the weight window generator applied in MCNP4B. The benefits of the new methodology are highlighted, along with a description of its limitations. The authors also provide recommendations for utilization of the weight window generator.

Christopher N. Culbertson; John S. Hendricks

1999-12-01

50

The Monte Carlo method in quantum field theory

This series of six lectures is an introduction to using the Monte Carlo method to carry out nonperturbative studies in quantum field theories. Path integrals in quantum field theory are reviewed, and their evaluation by the Monte Carlo method with Markov-chain based importance sampling is presented. Properties of Markov chains are discussed in detail and several proofs are presented, culminating in the fundamental limit theorem for irreducible Markov chains. The example of a real scalar field theory is used to illustrate the Metropolis-Hastings method and to demonstrate the effectiveness of an action-preserving (microcanonical) local updating algorithm in reducing autocorrelations. The goal of these lectures is to provide the beginner with the basic skills needed to start carrying out Monte Carlo studies in quantum field theories, as well as to present the underlying theoretical foundations of the method.
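The fundamental limit theorem mentioned above can be checked numerically on a toy chain: the long-run visit frequencies of an irreducible, aperiodic Markov chain converge to its stationary distribution. The 3-state transition matrix below is an arbitrary illustration:

```python
import random

# Transition matrix of an irreducible, aperiodic 3-state chain (illustrative numbers).
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]

def step(state, rng):
    """Draw the next state from row `state` of P by inverse-CDF sampling."""
    u, acc = rng.random(), 0.0
    for j, p in enumerate(P[state]):
        acc += p
        if u < acc:
            return j
    return len(P) - 1

def stationary(P, iters=200):
    """Approximate the stationary distribution by repeatedly applying P to a start vector."""
    pi = [1.0, 0.0, 0.0]
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
    return pi

rng = random.Random(42)
counts, s, N = [0, 0, 0], 0, 200_000
for _ in range(N):
    s = step(s, rng)
    counts[s] += 1
freqs = [c / N for c in counts]    # long-run visit frequencies
pi = stationary(P)                 # stationary distribution
```

Importance sampling in lattice field theory rests on exactly this property: the Metropolis-Hastings chain is constructed so that its stationary distribution is the Boltzmann weight of the action.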

Colin Morningstar

2007-02-20

51

A Multivariate Time Series Method for Monte Carlo Reactor Analysis

A robust multivariate time series method has been established for the Monte Carlo calculation of neutron multiplication problems. The method is termed the Coarse Mesh Projection Method (CMPM) and can be implemented using coarse statistical bins for acquisition of nuclear fission source data. A novel aspect of CMPM is the combination of the general technical principle of projection pursuit from the signal processing discipline and the neutron multiplication eigenvalue problem from the nuclear engineering discipline. CMPM enables reactor physicists to accurately evaluate major eigenvalue separations of nuclear reactors with continuous-energy Monte Carlo calculation. CMPM was incorporated in the MCNP Monte Carlo particle transport code of Los Alamos National Laboratory. The great advantage of CMPM over the traditional fission matrix method is demonstrated for three-dimensional modeling of the initial core of a pressurized water reactor.

Taro Ueki

2008-08-14

52

On Monte Carlo methods for estimating ratios of normalizing constants

Recently, estimating ratios of normalizing constants has played an important role in Bayesian computations. Applications of estimating ratios of normalizing constants arise in many aspects of Bayesian statistical inference. In this article, we present an overview and discuss the current Monte Carlo methods for estimating ratios of normalizing constants. Then we propose a new ratio importance sampling method and establish
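A minimal sketch of the importance-sampling idea behind such estimators, assuming two unnormalized Gaussian kernels whose true ratio of normalizing constants is 0.5 (the paper's proposed ratio importance sampling estimator is more refined than this):

```python
import math
import random

# Unnormalized densities: q1 is an N(0, 1) kernel, q2 an N(0, 4) kernel.
# True normalizing constants: Z1 = sqrt(2*pi), Z2 = sqrt(8*pi), so Z1/Z2 = 0.5.
def q1(x):
    return math.exp(-x * x / 2.0)

def q2(x):
    return math.exp(-x * x / 8.0)

rng = random.Random(1)
N = 100_000
# Sample from the normalized version of q2 (a normal with sigma = 2) and
# average the weight q1/q2; its expectation is exactly Z1/Z2.
total = sum(q1(x) / q2(x) for x in (rng.gauss(0.0, 2.0) for _ in range(N)))
ratio_estimate = total / N
```

The estimator is unbiased whenever the support of q2 covers that of q1; its variance depends strongly on how well the two densities overlap, which is what the more refined methods surveyed in the article address.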

Ming-Hui Chen; Qi-Man Shao

1997-01-01

53

A Monte Carlo method to compute the exchange coefficient in the double porosity model

A Monte Carlo method to compute the exchange coefficient in the double porosity model. Keywords: Monte Carlo methods, double porosity model, random walk on squares, fissured media. AMS Classification: 76S05 (65C05, 76M35). Published in Monte Carlo Methods Appl., Proc. of Monte Carlo and probabilistic

Université Paris-Sud XI

54

This work presents the electron energy spectra, not connected to β-decay, of 235U and 239Pu films irradiated by thermal neutrons, obtained by a Monte Carlo method. The modelling was performed with the MCNP4C computer code (Monte Carlo Neutron Photon transport code system), which allows computer experiments on the joint transport of neutrons, photons and electrons. The experiment geometry and the irradiation parameters were the same as in [11] (only the thickness of the foil varied). As a result of the computer experiments, electron spectra were obtained for samples of 235U, 239Pu and 93%-enriched uranium dioxide, represented as a set of films 22 mm in diameter and of different thicknesses: 0.001 mm, 0.005 mm, 0.02 mm, 0.01 mm, 0.1 mm and 1.0 mm; and also for a 93%-enriched uranium dioxide film (diameter 22 mm, thickness 0.01 mm) located between two protective 0.025 mm aluminium disks (the conditions of the experiment in [11]), where the electron spectrum was recorded at the output surface of a protective disk. A comparative analysis of the experimental [11] and calculated β-spectra is carried out.

V. D. Rusova; V. N. Pavlovychb; V. A. Tarasova; S. V. Iaroshenkob; D. A. Litvinova

2004-07-05

55

This work presents the electron energy spectra, not connected to β-decay, of 235U and 239Pu films irradiated by thermal neutrons, obtained by a Monte Carlo method. The modelling was performed with the MCNP4C computer code (Monte Carlo Neutron Photon transport code system), which allows computer experiments on the joint transport of neutrons, photons and electrons. The experiment geometry and the irradiation parameters were the same as in [11] (only the thickness of the foil varied). As a result of the computer experiments, electron spectra were obtained for samples of 235U, 239Pu and 93%-enriched uranium dioxide, represented as a set of films 22 mm in diameter and of different thicknesses: 0.001 mm, 0.005 mm, 0.02 mm, 0.01 mm, 0.1 mm and 1.0 mm; and also for a 93%-enriched uranium dioxide film (diameter 22 mm, thickness 0.01 mm) located between two protective 0.025 mm aluminium disks (the conditions of the experiment in [11]), where the electron spectrum was recorded at...

Rusova, V D; Tarasova, V A; Iaroshenkob, S V; Litvinova, D A

2004-01-01

56

A new method to assess Monte Carlo convergence

The central limit theorem can be applied to a Monte Carlo solution if the following two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these are satisfied, a confidence interval based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by knowledge of the type of Monte Carlo tally being used. The Monte Carlo practitioner has only a limited number of marginally quantifiable methods that use sampled values to assess the fulfillment of the second requirement; e.g., statistical error reduction proportional to 1/√N with error magnitude guidelines. No consideration is given to what has not yet been sampled. A new method is presented here to assess the convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores, f(x), where the random variable x is the score from one particle history and the integral of f(x) over (−∞, ∞) equals 1. Since f(x) is seldom known explicitly, Monte Carlo particle random walks sample f(x) implicitly. Unless there is a largest possible history score, the empirical f(x) must eventually decrease more steeply than 1/x³ for the second moment (the integral of x²f(x)) to exist.
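The 1/√N error reduction mentioned above can be illustrated with a stand-in tally whose history scores are exponentially distributed (finite mean and variance, so the central limit theorem applies); multiplying the number of histories by 100 should shrink the estimated relative error by roughly a factor of 10:

```python
import math
import random

rng = random.Random(7)

def history_score():
    """Score from one simulated particle history (illustrative: an exponential tally)."""
    return rng.expovariate(1.0)

def relative_error(scores):
    """Estimated relative error of the sample mean, s / (mean * sqrt(n))."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    return math.sqrt(var / n) / mean

scores = [history_score() for _ in range(100_000)]
r_small = relative_error(scores[:1_000])
r_large = relative_error(scores)     # 100x the histories -> roughly 10x smaller error
```

The paper's point is that this check alone is insufficient: it says nothing about unsampled regions of f(x), which is why the shape of the empirical PDF, especially its tail, must also be examined.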

Forster, R.A.; Booth, T.E.; Pederson, S.P.

1993-05-01

57

On the Gap-Tooth direct simulation Monte Carlo method

This thesis develops and evaluates Gap-tooth DSMC (GT-DSMC), a direct Monte Carlo simulation procedure for dilute gases combined with the Gap-tooth method of Gear, Li, and Kevrekidis. The latter was proposed as a means of ...

Armour, Jessica D

2012-01-01

58

MONTE CARLO METHODS IN FUZZY NON-LINEAR REGRESSION

We apply our new fuzzy Monte Carlo method to certain fuzzy non-linear regression problems to estimate the best solution. The best solution is a vector of triangular fuzzy numbers, for the fuzzy coefficients in the model, which minimizes an error measure. We use a quasi-random number generator to produce random sequences of these fuzzy vectors which uniformly fill the search

AREEG ABDALLA; JAMES BUCKLEY

2008-01-01

59

Forced Couette flow simulations using direct simulation Monte Carlo method

Three-dimensional unsteady flows between two infinite walls are simulated using the direct simulation Monte Carlo (DSMC) method. An artificial forcing that mimics the centrifugal force in the Taylor problem has been applied to the flow. The sampled behaviors of the resulting flow, including the long-time average and the disturbance components, are studied. The computations have been performed using

William W. Liou; Yichuan Fang

2004-01-01

60

Bayesian methods, maximum entropy, and quantum Monte Carlo

We heuristically discuss the application of the method of maximum entropy to the extraction of dynamical information from imaginary-time, quantum Monte Carlo data. The discussion emphasizes the utility of a Bayesian approach to statistical inference and the importance of statistically well-characterized data. 14 refs.

Gubernatis, J.E.; Silver, R.N. (Los Alamos National Lab., NM (United States)); Jarrell, M. (Cincinnati Univ., OH (United States))

1991-01-01

61

Markov Chain Monte Carlo Methods in Biostatistics Andrew Gelman

Department of Statistics, Columbia University. May 21, 1996. Introduction: Appropriate models in biostatistics are often quite complicated, reflecting... These readers can use this article as an introduction to the ways in which Markov chain Monte

Gelman, Andrew

62

Multicanonical multigrid Monte Carlo method and effective autocorrelation time

We report tests of the recently proposed multicanonical multigrid Monte Carlo method for the two-dimensional $\Phi^4$ field theory. Defining an effective autocorrelation time, we obtain real-time improvement factors of about one order of magnitude compared with standard multicanonical simulations.
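An integrated autocorrelation time of the kind used above can be estimated by summing the normalized autocorrelation function up to a window. The AR(1) series below is an illustrative test case with the known value τ_int = 1/2 + φ/(1 − φ) = 9.5 for φ = 0.9 (the simple truncation rule at the first negative estimate is a common heuristic, not the authors' definition):

```python
import random

def integrated_autocorr_time(x, window=80):
    """tau_int = 1/2 + sum_t rho(t), summed until rho first turns negative or the window ends."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n
    tau = 0.5
    for t in range(1, window):
        ct = sum((x[i] - mean) * (x[i + t] - mean) for i in range(n - t)) / (n - t)
        rho = ct / c0
        if rho < 0.0:        # truncate once the estimate is dominated by noise
            break
        tau += rho
    return tau

# AR(1) test series: x_{k+1} = phi * x_k + noise, with rho(t) = phi^t.
rng = random.Random(3)
phi, v, series = 0.9, 0.0, []
for _ in range(30_000):
    v = phi * v + rng.gauss(0.0, 1.0)
    series.append(v)
tau = integrated_autocorr_time(series)   # exact value for this chain: 9.5
```

The "real time" improvement factor quoted in the abstract is essentially the ratio of such effective autocorrelation times (weighted by cost per update) between the two algorithms.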

W. Janke; T. Sauer

1993-12-09

63

Numerical Simulation of Lightning Location Based on Monte Carlo Method

Currently, the lightning location system (LLS) is one of the important bases for urban lightning protection. A key factor in the TDOA technology widely adopted in LLS is the time error of the detection stations. According to the stochastic properties of the time error measured by detection stations, lightning information containing error is simulated using the Monte Carlo method. The location error

Z. X. Hu; Y. P. Wen; W. G. Zhao; H. P. Zhu; S. L. Liu

2009-01-01

64

Application of Monte Carlo methods in tomotherapy and radiation biophysics

Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too

Ya-Yun Hsiao

2008-01-01

65

A new method to assess Monte Carlo convergence

The central limit theorem can be applied to a Monte Carlo solution if the following two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these are satisfied, a confidence interval based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by knowledge of the type of Monte Carlo tally being used. The Monte Carlo practitioner has only a limited number of marginally quantifiable methods that use sampled values to assess the fulfillment of the second requirement; e.g., statistical error reduction proportional to 1/√N with error magnitude guidelines. No consideration is given to what has not yet been sampled. A new method is presented here to assess the convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores, f(x), where the random variable x is the score from one particle history and the integral of f(x)...

Forster, R.A.; Booth, T.E.; Pederson, S.P.

1993-01-01

66

Parallel Monte Carlo Synthetic Acceleration methods for discrete transport problems

NASA Astrophysics Data System (ADS)

This work researches and develops Monte Carlo Synthetic Acceleration (MCSA) methods as a new class of solution techniques for discrete neutron transport and fluid flow problems. Monte Carlo Synthetic Acceleration methods use a traditional Monte Carlo process to approximate the solution to the discrete problem as a means of accelerating traditional fixed-point methods. To apply these methods to neutronics and fluid flow and determine the feasibility of these methods on modern hardware, three complementary research and development exercises are performed. First, solutions to the SPN discretization of the linear Boltzmann neutron transport equation are obtained using MCSA with a difficult criticality calculation for a light water reactor fuel assembly used as the driving problem. To enable MCSA as a solution technique a group of modern preconditioning strategies are researched. MCSA when compared to conventional Krylov methods demonstrated improved iterative performance over GMRES by converging in fewer iterations when using the same preconditioning. Second, solutions to the compressible Navier-Stokes equations were obtained by developing the Forward-Automated Newton-MCSA (FANM) method for nonlinear systems based on Newton's method. Three difficult fluid benchmark problems in both convective and driven flow regimes were used to drive the research and development of the method. For 8 out of 12 benchmark cases, it was found that FANM had better iterative performance than the Newton-Krylov method by converging the nonlinear residual in fewer linear solver iterations with the same preconditioning. Third, a new domain decomposed algorithm to parallelize MCSA aimed at leveraging leadership-class computing facilities was developed by utilizing parallel strategies from the radiation transport community. The new algorithm utilizes the Multiple-Set Overlapping-Domain strategy in an attempt to reduce parallel overhead and add a natural element of replication to the algorithm. 
It was found that for the current implementation of MCSA, both weak and strong scaling improved on that observed for production implementations of Krylov methods.

Slattery, Stuart R.

67

Monte Carlo simulations of fermion systems: the determinant method

Described are the details for performing Monte Carlo simulations on systems of fermions at finite temperatures by use of a technique called the Determinant Method. This method is based on a functional integral formulation of the fermion problem (Blankenbecler et al., Phys. Rev D 24, 2278 (1981)) in which the quartic fermion-fermion interactions that exist for certain models are transformed into bilinear ones by the introduction (J. Hirsch, Phys. Rev. B 28, 4059 (1983)) of Ising-like variables and an additional finite dimension. It is on the transformed problem the Monte Carlo simulations are performed. A brief summary of research on two such model problems, the spinless fermion lattice gas and the Anderson impurity problem, is also given.

Gubernatis, J.E.

1985-01-01

68

Statistical error of reactor calculations by the Monte Carlo method

Algorithms for calculating the statistical error with allowance for intergenerational correlations are described. The algorithms are constructed on the basis of statistical analysis of the results of computations by the Monte Carlo method. As a result, simple rules are elaborated for choosing the parameters of the computational techniques, such as the number of simulated generations necessary to attain the required accuracy and the number of initial generations to skip.

Kalugin, M. A.; Oleynik, D. S.; Sukhino-Khomenko, E. A., E-mail: sukhino-khomenko@adis.vver.kiae.ru [Russian Research Centre Kurchatov Institute (Russian Federation)

2011-12-15

69

MONTE CARLO ERROR ESTIMATION APPLIED TO NONDESTRUCTIVE ASSAY METHODS

Monte Carlo randomization of nuclear counting data into N replicate sets is the basis of a simple and effective method for estimating error propagation through complex analysis algorithms such as those using neural networks or tomographic image reconstructions. The error distributions of properly simulated replicate data sets mimic those of actual replicate measurements and can be used to estimate the std. dev. for an assay along with other statistical quantities. We have used this technique to estimate the standard deviation in radionuclide masses determined using the tomographic gamma scanner (TGS) and combined thermal/epithermal neutron (CTEN) methods. The effectiveness of this approach is demonstrated by a comparison of our Monte Carlo error estimates with the error distributions in actual replicate measurements and simulations of measurements. We found that the std. dev. estimated this way quickly converges to an accurate value on average and has a predictable error distribution similar to N actual repeat measurements. The main drawback of the Monte Carlo method is that N additional analyses of the data are required, which may be prohibitively time consuming with slow analysis algorithms.

R. ESTEP; ET AL

2000-06-01

70

Oxidation of Cyclopentadiene by Quantum Monte Carlo Methods

NASA Astrophysics Data System (ADS)

Recent interest in the combustion of cyclopentadiene, a key element in automotive and industrial fuels, has led to investigation of some of the elementary reactions that arise in this complicated reaction system. Using the quantum Monte Carlo (QMC) method, heats of reaction and energy barriers to reaction have been computed. These quantities are compared with the results of several more commonly used techniques, including the density functional and coupled cluster methods. Emphasis will be placed on the capability of QMC to provide results of high accuracy for systems that pose a significant challenge for alternative methods.

Grossman, Jeffrey C.; Lester, William A., Jr.

1997-08-01

71

Monte Carlo methods for localization of cones given multielectrode retinal ganglion cell recordings

An adaptive Markov Chain Monte Carlo (MCMC) method that explores the space of cone configurations, using MCMC computational inference methods (Robert and Casella, 2005) instead

Columbia University

72

The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The fully coupled Monte Carlo/S_N technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_N calculation is to be performed. The Monte Carlo and S_N regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and group sources. The hybrid method provides a new way of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for by itself. The fully coupled Monte Carlo/S_N method has been implemented in the S_N code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and group sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_N code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating the S_N calculations. The Monte Carlo routines have been successfully vectorized, with approximately a factor-of-five increase in speed over the nonvectorized version. The hybrid method is capable of solving forward, inhomogeneous source problems in X-Y and R-Z geometries. This capability now includes multigroup problems involving upscatter and fission in non-highly multiplying systems. 8 refs., 8 figs., 1 tab.

Baker, R.S.; Filippone, W.F. (Arizona Univ., Tucson, AZ (USA). Dept. of Nuclear and Energy Engineering); Alcouffe, R.E. (Los Alamos National Lab., NM (USA))

1991-01-01

73

Multivariate Monte Carlo methods with clusters of galaxies

NASA Astrophysics Data System (ADS)

We describe a novel Monte Carlo approach to both spectral fitting and spatial/spectral inversion of X-ray astronomy data, and illustrate its application in the analysis of observations of clusters of galaxies. The X-ray events are directly compared with simulations using multivariate generalizations of the Kolmogorov-Smirnov and Cramér-von Mises statistics. We demonstrate this method in studying the soft X-ray spectra of cooling-flow clusters with the Reflection Grating Spectrometers (RGS) on the XMM-Newton observatory. We also show preliminary results on simultaneously inverting X-ray and interferometric microwave Sunyaev-Zeldovich cluster data using a Monte Carlo technique. Various techniques are applied to simulate radiative transfer effects, model spatially-resolved sources, and simulate instrument response. We then apply statistical tests in the multi-dimensional data space.

Peterson, J. R.; Jernigan, J. G.; Kahn, S. M.; Paerels, F. B. S.; Kaastra, J. S.; Miller, A.; Carlstrom, J.

74

Monte Carlo methods designed for parallel computation, Sheldon B. Opps and Jeremy Schofield

A feature of these methods is that individual Monte Carlo chains, which are run on separate nodes, are coupled together... for example, to improve the statistics of a Monte Carlo simulation... one inherent benefit

Schofield, Jeremy

75

Monte-Carlo valorisation of American options: facts and new algorithms to improve existing methods

The aim is to discuss efficient algorithms for the pricing of American options by two recently proposed Monte Carlo-type methods... the quantization approach... Key words: American Options, Monte Carlo methods.

Boyer, Edmond

76

Monte Carlo Methods for Computation and Optimization (048715) Winter 2013/4

Monte Carlo Methods for Computation and Optimization (048715), Winter 2013/14. Lecture Notes, Nahum Shimkin. Preface: These lecture notes are intended for a first, graduate-level course on Monte Carlo methods... Simulation and the Monte Carlo Method, Wiley, 2008. (2) S. Asmussen and P. Glynn, Stochastic Simulation

Shimkin, Nahum

77

Monte Carlo Methods for Pricing and Hedging American Options in High Dimension

Summary: We numerically compare some recent Monte Carlo algorithms devoted to the pricing and hedging of American options... with respect to other Monte Carlo methods in terms of computing time. Here, we propose to suitably combine

Caramellino, Lucia

78

Investigating the Limits of Monte Carlo Tree Search Methods in Computer Go

Shih-Chieh Huang, University of Alberta, Edmonton, Canada T6G 2E8. Abstract: Monte Carlo Tree Search methods have led to huge... the combination of simulations and tree search in Monte Carlo Tree Search is much stronger than either simulation

Müller, Martin

79

A simple eigenfunction convergence acceleration method for Monte Carlo

Monte Carlo transport codes typically use a power iteration method to obtain the fundamental eigenfunction. The standard convergence rate for the power iteration method is the ratio of the first two eigenvalues, that is, k2/k1. Modifications to the power method have accelerated the convergence by explicitly calculating the subdominant eigenfunctions as well as the fundamental. Calculating the subdominant eigenfunctions requires using particles of negative and positive weights and appropriately canceling the negative and positive weight particles. Incorporating both negative weights and a ± weight cancellation requires a significant change to current transport codes. This paper presents an alternative convergence acceleration method that does not require modifying the transport codes to deal with the problems associated with tracking and cancelling particles of ± weights. Instead, only positive weights are used in the acceleration method.
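For reference, the plain power iteration that such methods accelerate can be sketched on a 2×2 matrix with known eigenvalues 3 and 1, i.e. dominance ratio k2/k1 = 1/3 (an illustrative symmetric matrix, not a transport operator):

```python
def power_iteration(A, iters=60):
    """Plain power iteration; the subdominant error contracts like (k2/k1)^n per step."""
    v = [1.0, 0.0]           # deliberately not the eigenvector, so convergence is visible
    k = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
        k = max(abs(c) for c in w)      # eigenvalue estimate via the infinity norm
        v = [c / k for c in w]
    return k, v

# Symmetric test matrix with eigenvalues 3 and 1 (eigenvectors [1,1] and [1,-1]).
A = [[2.0, 1.0],
     [1.0, 2.0]]
k_est, v_est = power_iteration(A)
```

When k2/k1 is close to one, as in loosely coupled reactor cores, this contraction is slow, which is precisely what motivates the acceleration schemes the paper discusses.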

Booth, Thomas E [Los Alamos National Laboratory

2010-11-18

80

Monte Carlo N-particle simulation of neutron-based sterilisation of anthrax contamination

Objective: To simulate the neutron-based sterilisation of anthrax contamination with the Monte Carlo N-particle (MCNP) 4C code. Methods: Neutrons are elementary particles that have no charge. They are 20 times more effective than electrons or γ-rays in killing anthrax spores on surfaces and inside closed containers. Neutrons emitted from a 252Cf neutron source are in the 100 keV to 2 MeV energy range. A 2.5 MeV D–D neutron generator can create neutrons at up to 10^13 n s^-1 with current technology. All these enable an effective and low-cost method of killing anthrax spores. Results: There is no effect on neutron energy deposition in the anthrax sample when using a reflector thicker than its saturation thickness. Among the three reflecting materials tested in the MCNP simulation, paraffin is the best because it has the thinnest saturation thickness and is easy to machine. The MCNP radiation dose and fluence calculations also showed that the MCNP-simulated neutron fluence needed to kill the anthrax spores agrees very well with previous analytical estimations. Conclusion: The MCNP simulation indicates that a 10 min neutron irradiation from a 0.5 g 252Cf neutron source or a 1 min neutron irradiation from a 2.5 MeV D–D neutron generator may kill all anthrax spores in a sample. This is a promising result because a 2.5 MeV D–D neutron generator output >10^13 n s^-1 should be attainable in the near future. This indicates that we could use a D–D neutron generator to sterilise anthrax contamination within several seconds. PMID:22573293

Liu, B; Xu, J; Liu, T; Ouyang, X

2012-01-01

81

Eigenvalue analysis using a full-core Monte Carlo method

The reactor physics codes used at the Savannah River Site (SRS) to predict reactor behavior have been continually benchmarked against experimental and operational data. A particular benchmark variable is the observed initial critical control rod position. Historically, there has been some difficulty predicting this position because of the difficulties inherent in using computer codes to model experimental or operational data. The Monte Carlo method is applied in this paper to study the initial critical control rod positions for the SRS K Reactor. A three-dimensional, full-core MCNP model of the reactor was developed for this analysis.

Okafor, K.C.; Zino, J.F. (Westinghouse Savannah River Co., Aiken, SC (United States))

1992-01-01

82

Analysis of real-time networks with monte carlo methods

NASA Astrophysics Data System (ADS)

Communication networks in embedded systems are ever larger and more complex. A better understanding of the dynamics of these networks is necessary to use them at their best and lower costs. Today's tools are able to compute upper bounds on the end-to-end delays that a packet sent through the network could suffer. However, in the case of asynchronous networks, those worst end-to-end delay (WEED) cases are rarely observed in practice or through simulations, owing to the rarity of the situations that lead to worst-case scenarios. A novel approach based on Monte Carlo methods is suggested to study the effects of the asynchrony on the performances.

Mauclair, C.; Durrieu, G.

2013-12-01

83

Efficient Monte Carlo methods for continuum radiative transfer

We discuss the efficiency of Monte Carlo methods in solving continuum radiative transfer problems. The sampling of the radiation field and convergence of dust temperature calculations in the case of optically thick clouds are both studied. For spherically symmetric clouds we find that the computational cost of Monte Carlo simulations can be reduced, in some cases by orders of magnitude, with simple importance weighting schemes. This is particularly true for models consisting of cells of different sizes for which the run times would otherwise be determined by the size of the smallest cell. We present a new idea of extending importance weighting to scattered photons. This is found to be useful in calculations of scattered flux and could be important for three-dimensional models when observed intensity is needed only for one general direction of observations. Convergence of dust temperature calculations is studied for models with optical depths 10-10000. We examine acceleration methods where radiative interactions inside a cell or between neighbouring cells are treated explicitly. In optically thick clouds with strong self-coupling between dust temperatures the run times can be reduced by more than one order of magnitude. The use of a reference field was also examined. This eliminates the need for repeating simulation of constant sources (e.g., background radiation) after the first iteration and significantly reduces sampling errors. The applicability of the methods for three-dimensional models is discussed.

M. Juvela

2005-04-06

84

Monte Carlo methods for short polypeptides, Jeremy Schofield and Mark A. Ratner

Nonphysical-sampling Monte Carlo techniques that enable average structural properties of short in vacuo polypeptide chains to be calculated accurately are discussed. Updating algorithms developed for Monte Carlo

Schofield, Jeremy

85

Monte Carlo method for determining earthquake recurrence parameters from short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most... to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach

86

ITER Neutronics Modeling Using Hybrid Monte Carlo/Deterministic and CAD-Based Monte Carlo Methods

The immense size and complex geometry of the ITER experimental fusion reactor require the development of special techniques that can accurately and efficiently perform neutronics simulations with minimal human effort. This paper shows the effect of the hybrid Monte Carlo (MC)/deterministic techniques - Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) - in enhancing the efficiency of the neutronics modeling of ITER and demonstrates the applicability of coupling these methods with computer-aided-design-based MC. Three quantities were calculated in this analysis: the total nuclear heating in the inboard leg of the toroidal field coils (TFCs), the prompt dose outside the biological shield, and the total neutron and gamma fluxes over a mesh tally covering the entire reactor. The use of FW-CADIS in estimating the nuclear heating in the inboard TFCs resulted in a factor of ~ 275 increase in the MC figure of merit (FOM) compared with analog MC and a factor of ~ 9 compared with the traditional methods of variance reduction. By providing a factor of ~ 21 000 increase in the MC FOM, the radiation dose calculation showed how the CADIS method can be effectively used in the simulation of problems that are practically impossible using analog MC. The total flux calculation demonstrated the ability of FW-CADIS to simultaneously enhance the MC statistical precision throughout the entire ITER geometry. Collectively, these calculations demonstrate the ability of the hybrid techniques to accurately model very challenging shielding problems in reasonable execution times.

Ibrahim, A. [University of Wisconsin; Mosher, Scott W [ORNL; Evans, Thomas M [ORNL; Peplow, Douglas E. [ORNL; Sawan, M. [University of Wisconsin; Wilson, P. [University of Wisconsin; Wagner, John C [ORNL; Heltemes, Thad [University of Wisconsin, Madison

2011-01-01
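
The figure of merit quoted above is conventionally FOM = 1/(R²T), with R the relative statistical error and T the compute time. As a toy illustration (my own example, entirely unrelated to the ITER model), the sketch below compares analog sampling with importance sampling on a rare-event probability and reports the resulting FOM-style gain, using equal sample counts as a stand-in for equal run time:

```python
import math
import random

def rel_err(samples):
    # Relative standard error of the sample mean.
    n = len(samples)
    m = sum(samples) / n
    var = sum((s - m) ** 2 for s in samples) / (n - 1)
    return math.sqrt(var / n) / m, m

rng = random.Random(1)
n = 100_000
exact = math.exp(-4.0)                 # P(X > 4) for X ~ Exp(1)

# Analog sampling: score 1 whenever the sampled X exceeds the threshold.
analog = [1.0 if rng.expovariate(1.0) > 4.0 else 0.0 for _ in range(n)]

# Importance sampling from Exp(0.25), carrying the likelihood ratio
# w(x) = f(x) / g(x) = exp(-x) / (0.25 * exp(-0.25 * x)) as the score.
def is_score():
    x = rng.expovariate(0.25)
    return math.exp(-x) / (0.25 * math.exp(-0.25 * x)) if x > 4.0 else 0.0

weighted = [is_score() for _ in range(n)]

r_a, m_a = rel_err(analog)
r_w, m_w = rel_err(weighted)
# With equal sample counts standing in for equal time T in FOM = 1/(R^2 T),
# the efficiency gain is approximately (r_a / r_w)^2.
fom_gain = (r_a / r_w) ** 2
```

Both estimators are unbiased; the importance-sampled one concentrates samples in the rare region, so its relative error, and hence its FOM, is substantially better.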

87

Lattice Monte Carlo methods for systems far from equilibrium

We present a new numerical Monte Carlo approach to determine the scaling behavior of lattice field theories far from equilibrium. The presented methods are generally applicable to systems where classical-statistical fluctuations dominate the dynamics. As an example, these methods are applied to the random-force-driven one-dimensional Burgers' equation - a model for hydrodynamic turbulence. For a self-similar forcing acting on all scales the system is driven to a nonequilibrium steady state characterized by a Kolmogorov energy spectrum. We extract correlation functions of single- and multi-point quantities and determine their scaling spectrum displaying anomalous scaling for high-order moments. Varying the external forcing we are able to tune the system continuously from equilibrium, where the fluctuations are short-range correlated, to the case where the system is strongly driven in the infrared. In the latter case the nonequilibrium scaling of small-scale fluctuations is shown to be universal.

David Mesterházy; Luca Biferale; Karl Jansen; Raffaele Tripiccione

2013-11-18

88

Green's function Monte Carlo method with exact imaginary-time propagation

We present a general formulation of the Green's function Monte Carlo method in imaginary-time quantum Monte Carlo which employs exact propagators. This algorithm has no time-step errors and is obtained by minimal modifications of the time-independent Green's function Monte Carlo method. We describe how the method can be applied to the many-body Schrödinger equation, lattice Hamiltonians, and simple field theories.

K. E. Schmidt; Parhat Niyaz; A. Vaught; Michael A. Lee

2005-01-01

89

Green's function Monte Carlo method with exact imaginary-time propagation

We present a general formulation of the Green's function Monte Carlo method in imaginary-time quantum Monte Carlo which employs exact propagators. This algorithm has no time-step errors and is obtained by minimal modifications of the time-independent Green's function Monte Carlo method. We describe how the method can be applied to the many-body Schrödinger equation, lattice Hamiltonians, and simple field theories.

K. E. Schmidt; Parhat Niyaz; A. Vaught; Michael A. Lee

2005-01-01

90

Diagrammatic Monte Carlo method for many-polaron problems

We introduce the first bold diagrammatic Monte Carlo approach to deal with polaron problems at a finite electron density nonperturbatively, i.e., by including vertex corrections to high orders. Using the Holstein model on a square lattice as a prototypical example, we demonstrate that our method is capable of providing accurate results in the thermodynamic limit in all regimes from a renormalized Fermi liquid to a single polaron, across the nonadiabatic region where Fermi and Debye energies are of the same order of magnitude. By accounting for vertex corrections, the accuracy of the theoretical description is increased by orders of magnitude relative to the lowest-order self-consistent Born approximation employed in most studies. We also find that for the electron-phonon coupling typical for real materials, the quasiparticle effective mass increases and the quasiparticle residue decreases with increasing the electron density at constant electron-phonon coupling strength. PMID:25361271

Mishchenko, Andrey S; Nagaosa, Naoto; Prokof'ev, Nikolay

2014-10-17

91

A wave-function Monte Carlo method for simulating conditional master equations

Wave-function Monte Carlo methods are an important tool for simulating quantum systems, but the standard method cannot be used to simulate decoherence in continuously measured systems. Here we present a new Monte Carlo method for such systems. This was used to perform the simulations of a continuously measured nano-resonator in [Phys. Rev. Lett. 102, 057208 (2009)].

Kurt Jacobs

2009-06-25
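
For readers unfamiliar with the standard (unconditional) wave-function Monte Carlo scheme that the paper generalizes, here is a minimal quantum-jump sketch for spontaneous emission of a two-level atom. The model, parameters, and function names are illustrative assumptions, not taken from the paper:

```python
import math
import random

def mcwf_excited(gamma, t_final, dt, n_traj, seed=0):
    # Quantum-jump (Monte Carlo wave function) unraveling of spontaneous
    # emission for a two-level atom prepared in the excited state.
    rng = random.Random(seed)
    steps = int(round(t_final / dt))
    avg = [0.0] * (steps + 1)
    for _ in range(n_traj):
        pe = 1.0                                   # excited population |c_e|^2
        for k in range(steps + 1):
            avg[k] += pe
            if rng.random() < gamma * pe * dt:     # jump: photon emitted
                pe = 0.0
            else:
                # no-jump step: non-Hermitian decay of c_e, then renormalize
                # (for this pure-decay example it leaves pe at 0 or 1)
                d = pe * math.exp(-gamma * dt)
                pe = d / (d + (1.0 - pe))
    return [a / n_traj for a in avg]

pops = mcwf_excited(gamma=1.0, t_final=2.0, dt=0.01, n_traj=4_000)
# The trajectory average should track the master-equation decay e^{-gamma t}.
```

Averaging over trajectories reproduces the Lindblad master-equation populations; the point of the cited paper is precisely that this standard scheme does not extend to conditional (continuously measured) master equations.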

92

Development of Extended Reverse Monte Carlo Method for Analysis of 2D-USAXS Experimental Data

We developed an extended version of the Reverse Monte Carlo method, intended to analyze Two-dimensional Ultra-Small-Angle X-ray Scattering (2D-USAXS) experimental data for elongated rubbers. We first explain the theoretical background and then examine the structure of a certain state of a hard-sphere system as a check of the validity of the extended method.

Hagita, Katsumi; Okamoto, Haruo; Arai, Takashi [Department of Applied Physics, National Defense Academy, Yokosuka, 239-8686 (Japan); Kishimoto, Hiroyuki [SRI Research and Development Ltd., Kobe, 651-0071 (Japan); Umesaki, Norimasa [Japan Synchrotron Radiation Research Institute (JASRI), Hyogo, 651-5198 (Japan); Shinohara, Yuya; Amemiya, Yoshiyuki [Department of Advanced Materials Science, The University of Tokyo, Kashiwa, 277-8561 (Japan)

2006-05-05

93

Neutronic analysis code for fuel assembly using a vectorized Monte Carlo method

A fuel assembly analysis code, VMONT, in which a multigroup neutron transport calculation is combined with a burnup calculation, has been developed for comprehensive design work use. The neutron transport calculation is performed with a vectorized Monte Carlo method that can realize speeds >10 times faster than those of a scalar Monte Carlo method. The validity of the VMONT code is shown through test calculations against continuous-energy Monte Carlo calculations and the PROTEUS tight lattice experiment.

Morimoto, Y.; Maruyama, H.; Ishii, K.; Aoyama, M. (Hitachi Ltd., Ibaraki (Japan). Energy Research Lab.)

1989-12-01

94

Monte Carlo methods for the solution of nonlinear partial differential equations

NASA Astrophysics Data System (ADS)

Stochastic models for the solution of nonlinear partial differential equations are discussed. They consist of a discretized version of these equations and Monte Carlo techniques. The Markov transitions are based on a priori estimates of the solution. To improve the efficiency of stochastic smoothers a Monte Carlo multigrid method is presented. The numerical results presented show the convergence of these methods. Some directions for the parallelization of the Monte Carlo algorithms presented are outlined. The techniques introduced make possible the extension of Monte Carlo methods to nonlinear problems, offering a new approach with an analytic potential for a wide range of problems in computational physics.

Marshall, Guillermo

1989-11-01
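
The stochastic representation such methods rest on can be shown in its simplest form: the value of a discrete harmonic function at an interior point equals the expected boundary value at the exit point of a simple random walk. A minimal sketch (a toy Laplace problem of my own, not the paper's nonlinear scheme or its Markov transitions):

```python
import random

def laplace_at(nx, ny, boundary, x0, y0, n_walks, seed=0):
    # Estimate the discrete Laplace solution u at (x0, y0): u equals the
    # expected boundary value at the exit point of a simple random walk.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        x, y = x0, y0
        while 0 < x < nx and 0 < y < ny:          # walk until boundary hit
            dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            x += dx
            y += dy
        total += boundary(x, y)
    return total / n_walks

# Boundary data x / nx is (discretely) harmonic, so u(10, 10) should be 0.5.
u = laplace_at(20, 20, lambda x, y: x / 20.0, 10, 10, n_walks=5_000)
```

Each interior point can be estimated independently in this way, which is also what makes such estimators easy to parallelize.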

95

Optimization of the Beam Shaping Assembly (BSA) has been performed using the MCNP4C Monte Carlo code to shape the 2.45 MeV neutrons that are produced in the D-D neutron generator. The optimal design of the BSA has been chosen by considering in-air figures of merit (FOM); it consists of 70 cm Fluental as a moderator, 30 cm Pb as a reflector, 2 mm 6Li as a thermal neutron filter and 2 mm Pb as a gamma filter. The neutron beam can be evaluated by in-phantom parameters, from which the therapeutic gain can be derived. Direct evaluation of both sets of FOMs (in-air and in-phantom) is very time consuming. In this paper a Response Matrix (RM) method has been suggested to reduce the computing time. This method is based on considering the neutron spectrum at the beam exit and calculating the contribution of the various dose components in the phantom to build the Response Matrix. Results show good agreement between direct calculation and the RM method. PMID:23954283

Kasesaz, Y; Khalafi, H; Rahmani, F

2013-12-01

96

Seriation in Paleontological Data Using Markov Chain Monte Carlo Methods

Given a collection of fossil sites with data about the taxa that occur in each site, the task in biochronology is to find good estimates for the ages or ordering of the sites. We describe a full probabilistic model for fossil data. The parameters of the model are natural: the ordering of the sites, the origination and extinction times for each taxon, and the probabilities of different types of errors. We show that the posterior distributions of these parameters can be estimated reliably by using Markov chain Monte Carlo techniques. The posterior distributions of the model parameters can be used to answer many different questions about the data, including seriation (finding the best ordering of the sites) and outlier detection. We demonstrate the usefulness of the model and estimation method on synthetic data and on real data on large late Cenozoic mammals. As an example, for sites with a large number of occurrences of common genera, our methods give orderings whose correlation with geochronologic ages is 0.95. PMID:16477311

Puolamaki, Kai; Fortelius, Mikael; Mannila, Heikki

2006-01-01

97

ACOUSTIC NODE CALIBRATION USING HELICOPTER SOUNDS AND MONTE CARLO MARKOV CHAIN METHODS

ACOUSTIC NODE CALIBRATION USING HELICOPTER SOUNDS AND MONTE CARLO MARKOV CHAIN METHODS Volkan Cevher and James H. McClellan Georgia Institute of Technology Atlanta, GA 30332-0250 {gte460q, james.mcclella}@ece.gatech.edu ABSTRACT A Monte Carlo method is used to calibrate a randomly placed sensor node using helicopter sounds …

Cevher, Volkan

98

NASA Astrophysics Data System (ADS)

This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescent (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV 109Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, and a Modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that Modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in calcium content (by mass) of the anthropometric leg phantom's cortical bone was shown to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001). Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate the lead content of a human leg by up to 4%. The patellar bone structure in which the fluorescent photons originate was found to vary dramatically with measurement angle. The relative contribution of lead signal from the patella declined from 65% to 27% when rotated 30°. However, rotation of the source-detector about the patella from 0 to 45° demonstrated no significant effect on the net K XRF response at the knee.

Lodwick, Camille J.

99

Quantum Monte Carlo methods and lithium cluster properties. [Atomic clusters

Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) (0.1981), 0.1895(9) (0.1874(4)), 0.1530(34) (0.1599(73)), 0.1664(37) (0.1724(110)), 0.1613(43) (0.1675(110)) Hartrees for lithium clusters n = 1 through 5, respectively; in good agreement with the experimental results shown in parentheses. Also, the binding energies per atom were computed to be 0.0177(8) (0.0203(12)), 0.0188(10) (0.0220(21)), 0.0247(8) (0.0310(12)), 0.0253(8) (0.0351(8)) Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.

Owen, R.K.

1990-12-01

100

Quantum Monte Carlo methods and lithium cluster properties

Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively; in good agreement with the experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.

Owen, R.K.

1990-12-01

101

Monte Carlo methods for parallel processing of diffusion equations

A Monte Carlo algorithm for solving simple linear systems using a random walk is demonstrated and analyzed. The described algorithm solves for each element in the solution vector independently. Furthermore, it is demonstrated ...

Vafadari, Cyrus

2013-01-01
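
The kind of random-walk solver described above, in which each solution component is estimated independently, can be sketched with a Neumann-Ulam-style estimator. This is a generic textbook scheme, assumed here for illustration rather than taken from the thesis; the matrix and parameters are my own toy choices:

```python
import random

def mc_component(H, b, i, n_walks, max_steps=25, seed=0):
    # Neumann-Ulam estimator for component i of the solution of x = H x + b:
    # x_i = sum_k (H^k b)_i, sampled by random walks with uniform transitions
    # and importance weights w *= n * H[state][next].
    rng = random.Random(seed)
    n = len(b)
    total = 0.0
    for _ in range(n_walks):
        state, w, acc = i, 1.0, b[i]
        for _ in range(max_steps):        # truncate the Neumann series
            nxt = rng.randrange(n)
            w *= n * H[state][nxt]        # correct for the uniform choice
            state = nxt
            acc += w * b[state]
        total += acc
    return total / n_walks

H = [[0.1, 0.2], [0.3, 0.1]]              # spectral radius well below 1
b = [1.0, 2.0]
x0 = mc_component(H, b, 0, n_walks=50_000)
# Exact solution of (I - H) x = b has x_0 = 1.3 / 0.75 ≈ 1.733
```

Because each walk touches only one starting component, components can be distributed across processors with no communication, which is the parallelization property the abstract highlights.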

102

MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD

A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...

103

Predicting three-body abrasive wear using Monte Carlo methods

Predicting the wear of materials under three-body abrasion is a challenging problem, since three-body abrasion is more complicated than two-body abrasion. In this paper, a Monte Carlo model for simulating the plastic-deformation wear rate, i.e. the low-cycle fatigue wear rate, is proposed. The Manson–Coffin formula and the Palmgren–Miner linear accumulated-damage principle were used in the model as well as the Monte Carlo …

Liang Fang; Weimin Liu; Daoshan Du; Xiaofeng Zhang; Qunji Xue

2004-01-01

104

An asymptotic-preserving Monte Carlo method for the Boltzmann equation

NASA Astrophysics Data System (ADS)

In this work, we propose an asymptotic-preserving Monte Carlo method for the Boltzmann equation that is more efficient than the currently available Monte Carlo methods in the fluid dynamic regime. This method is based on the successive penalty method [39], which is an improved BGK-penalization method originally proposed by Filbet and Jin [16]. Here we introduce the Monte Carlo implementation of the method, which, despite its lower order accuracy, is very efficient in higher dimensions or in simulating some complicated chemical processes. This method allows the time step to be independent of the mean free time, which is prohibitively small in the fluid dynamic regime. We study some basic properties of this method and compare it with some other asymptotic-preserving Monte Carlo methods in terms of numerical performance in different regimes, from rarefied to fluid dynamic regimes, and computational efficiency.

Ren, Wei; Liu, Hong; Jin, Shi

2014-11-01

105

Review of quantum Monte Carlo methods and results for Coulombic systems

The various Monte Carlo methods for calculating ground state energies are briefly reviewed. Then a summary of the charged systems that have been studied with Monte Carlo is given. These include the electron gas, small molecules, a metal slab and many-body hydrogen.

Ceperley, D.

1983-01-27

106

Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford

… examples … multilevel Monte Carlo … details of MATLAB code … determine the optimal allocation of computational effort on the different levels … the savings can be much … the estimator Y has a mean-square error with bound E[(Y − E[P])²] ≤ …

Giles, Mike

107

Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford

… with uncertainty … examples … multilevel Monte Carlo … details of MATLAB code … resolution in space (and time) as with SDEs … determine the optimal allocation of computational effort … for which the multilevel estimator Y = ∑_{l=0}^{L} Y_l has a mean-square error with bound E[(Y − E[P])²] ≤ …

Giles, Mike
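
The multilevel idea summarized in these lecture slides telescopes E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}] over discretization levels, with coupled coarse/fine paths making the correction terms cheap to estimate. A minimal Python sketch (the slides use MATLAB; the SDE model and all parameters here are illustrative assumptions):

```python
import math
import random

def mlmc_mean(levels, n_per_level, s0=100.0, mu=0.05, sigma=0.2, T=1.0, seed=0):
    # Multilevel Monte Carlo estimate of E[S_T] for geometric Brownian motion:
    # E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with coupled Euler paths.
    rng = random.Random(seed)
    est = 0.0
    for l in range(levels + 1):
        nf = 2 ** l                       # number of fine time steps on level l
        hf = T / nf
        acc = 0.0
        for _ in range(n_per_level):
            sf = sc = s0
            dw_c = 0.0
            for k in range(nf):
                dw = rng.gauss(0.0, math.sqrt(hf))
                sf += mu * sf * hf + sigma * sf * dw
                dw_c += dw
                if k % 2 == 1:            # two fine steps make one coarse step
                    sc += mu * sc * 2.0 * hf + sigma * sc * dw_c
                    dw_c = 0.0
            acc += sf if l == 0 else sf - sc
        est += acc / n_per_level
    return est

price = mlmc_mean(levels=4, n_per_level=20_000)
# Exact value: s0 * exp(mu * T) = 100 * e^0.05 ≈ 105.13
```

A production MLMC code would choose the per-level sample counts from the estimated level variances to minimize cost for a target mean-square error; fixed counts are used here only to keep the sketch short.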

108

Exact Monte Carlo Method for Continuum Fermion Systems

We offer a new proposal for the Monte Carlo treatment of many-fermion systems in continuous space. It is based upon diffusion Monte Carlo with significant modifications: correlated pairs of random walkers that carry opposite signs, different functions ''guide'' walkers of different signs, the Gaussians used for members of a pair are correlated, and walkers can cancel so as to conserve their expected future contributions. We report results for free-fermion systems and a fermion fluid with 14 3He atoms, where it proves stable and correct. Its computational complexity grows with particle number, but slowly enough to make interesting physics within the reach of contemporary computers.

Kalos, M. H.; Pederiva, Francesco

2000-10-23

109

Detailed Heat Generation Simulations via the Monte Carlo Method

… details of Joule heating in bulk silicon with Monte Carlo simulations including acoustic and optical phonons … particularly relevant to the heating and reliability of nanoscale and thin-film transistors. … as in most semiconductors, high-field Joule heating is typically dominated by optical phonon emission …

Dutton, Robert W.

110

ATIC Backscatter Study Using Monte Carlo Methods in FLUKA & ROOT

A Monte Carlo analysis, based upon FLUKA, of neutron backscatter albedos is presented using the ATIC balloon experiment as a study case. Preparation of the FLUKA input geometry has been accomplished by means of a new semi-automatic procedure for converting GEANT3 simulations. Resultant particle fluences (neutrons, photons, and charged particles) produced by incident carbon nuclei striking ATIC with energies up …

T. Wilson; L. Pinsky; A. Empl; K. Lee; V. Andersen; J. Isbert; J. Wefel; F. Carminati; A. Fasso; A. Ferrari; P. Sala; E. Futo; J. Ranft

2002-01-01

111

Exchange Monte Carlo Method and Application to Spin Glass Simulations

We propose an efficient Monte Carlo algorithm for simulating a "hardly-relaxing" system, in which many replicas with different temperatures are simultaneously simulated and a virtual process exchanging configurations of these replicas is introduced. This exchange process is expected to let the system at low temperatures escape from a local minimum. By using this algorithm the three-dimensional ±J Ising spin glass model is …

Koji Hukushima; Koji Nemoto

1996-01-01
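
The exchange (replica exchange, or parallel tempering) scheme described above can be sketched on a toy double-well energy instead of the spin glass; the energy function, temperature ladder, and step sizes below are illustrative assumptions, not the paper's model:

```python
import math
import random

def double_well_pt(temps, n_sweeps, burn_in, seed=0):
    # Parallel tempering on the double-well energy E(x) = (x^2 - 1)^2.
    # Low-temperature replicas escape the local minimum via swap moves.
    rng = random.Random(seed)
    energy = lambda x: (x * x - 1.0) ** 2
    xs = [1.0 for _ in temps]            # all replicas start in the right well
    cold = []
    for sweep in range(n_sweeps):
        for i, t in enumerate(temps):    # local Metropolis update per replica
            prop = xs[i] + rng.gauss(0.0, 0.5)
            if rng.random() < math.exp(min(0.0, -(energy(prop) - energy(xs[i])) / t)):
                xs[i] = prop
        for i in range(len(temps) - 1):  # swap attempt between neighbouring temps
            d = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
            if rng.random() < math.exp(min(0.0, d)):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
        if sweep >= burn_in:
            cold.append(xs[0])           # record the coldest replica
    return cold

cold = double_well_pt([0.1, 0.3, 1.0, 3.0], n_sweeps=20_000, burn_in=2_000)
left_fraction = sum(1 for x in cold if x < 0.0) / len(cold)
# By symmetry the cold replica should spend roughly half its time in each well.
```

A single chain at the lowest temperature would stay trapped in its starting well for this barrier height; the swap moves let well-crossing configurations percolate down from the hot replicas.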

112

Monte Carlo Methods and Appl., Vol. 11, No. 1, pp. 39–55 (2005), © VSP 2005. Grid-based Quasi-Monte Carlo … In this paper, we extend the techniques used in Grid-based Monte Carlo applications to Grid-based quasi-Monte Carlo … in quasirandom sequences prevents us from applying many of our Grid-based Monte Carlo techniques to Grid-based …

Li, Yaohang

113

A Monte Carlo method for the PDF equations of turbulent flow

A Monte Carlo method is presented which simulates the transport equations of joint probability density functions (pdf's) in turbulent flows. (Finite-difference solutions of the equations are impracticable, mainly because ...

Pope, S. B.

1980-01-01

114

NASA Technical Reports Server (NTRS)

The statistics of the Monte Carlo method are considered relative to the interpretation of the NUGAM2 and NUGAM3 computer code results. A numerical experiment using the NUGAM2 code is presented and the results are statistically interpreted.

Firstenberg, H.

1971-01-01

115

Efficient, automated Monte Carlo methods for radiation transport

Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k+1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. If still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed.

Kong Rong; Ambrose, Martin [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States); Spanier, Jerome [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States); Beckman Laser Institute and Medical Clinic, University of California, 1002 Health Science Road E., Irvine, CA 92612 (United States)], E-mail: jspanier@uci.edu

2008-11-20

116

A Fast Indexing Method for Monte-Carlo Go

3×3 patterns are widely used in Monte-Carlo (MC) Go programs to improve the performance. In this paper, we propose a direct indexing approach to build and use a complete 3×3 pattern library. The contents of the immediate 8 neighboring positions of a board point are coded into a 16-bit string, called surrounding index. The surrounding indices of all board points …

Keh-hsun Chen; Dawei Du; Peigang Zhang

2008-01-01
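
The 16-bit surrounding index described in this abstract — the contents of the 8 neighboring positions packed into one integer — can be sketched as follows. The 2-bit content codes and the scan order are my own assumptions, since the paper's exact encoding is not given here:

```python
# Assumed 2-bit codes for the content of a neighboring point.
EMPTY, BLACK, WHITE, EDGE = 0, 1, 2, 3

def surrounding_index(neighbors):
    # Pack the 8 neighboring contents (in a fixed scan order) into a 16-bit
    # integer that directly indexes a complete 3x3 pattern table.
    assert len(neighbors) == 8
    idx = 0
    for c in neighbors:
        idx = (idx << 2) | c          # 2 bits per neighbor
    return idx

table = [0.5] * 65536                 # one entry per possible surrounding index
i = surrounding_index([EMPTY, BLACK, WHITE, EDGE, EMPTY, EMPTY, BLACK, WHITE])
```

Because every possible neighborhood maps to a distinct value in 0..65535, pattern lookup during playouts is a single array access with no hashing or searching.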

117

Kinetic Monte Carlo method for rule-based modeling of biochemical networks … Jin Yang, Michael I. … June 2008; published 10 September 2008 … We present a kinetic Monte Carlo method for simulating chemical … of simulation is O(log2 M) per reaction event for efficient kinetic Monte Carlo (KMC) implementations …

Faeder, Jim

118

Direct Monte Carlo simulation methods for nonreacting and reacting systems at fixed total internal energy or enthalpy … published 18 July 2002 … A Monte Carlo computer simulation method is presented for directly performing … In this paper, we describe a methodology for performing Monte Carlo simulations at fixed U or at fixed H …

Lisal, Martin

119

APR1400 LBLOCA uncertainty quantification by Monte Carlo method and comparison with Wilks' formula

An analysis of uncertainty quantification for the PWR LBLOCA by the Monte Carlo method has been performed and compared with the tolerance level determined by Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LBLOCA accident were determined from the PIRT results of the BEMUSE project. The Monte Carlo calculation shows that the 95th-percentile PCT value can be obtained reliably with a 95% confidence level using Wilks' formula. The extra margin given by Wilks' formula over the true 95th-percentile PCT from the Monte Carlo method was rather large: even using the 3rd-order formula, the value calculated with Wilks' formula is nearly 100 K above the true value. It is shown that, with ever-increasing computational capability, the Monte Carlo method is accessible for nuclear power plant safety analysis within a realistic time frame. (authors)

Hwang, M.; Bae, S.; Chung, B. D. [Korea Atomic Energy Research Inst., 150 Dukjin-dong, Yuseong-gu, Daejeon (Korea, Republic of)

2012-07-01
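
Wilks' first-order formula picks the smallest N with 1 − γ^N ≥ β, which gives the classic 59 runs for a 95%/95% statement: the maximum of the N calculated PCTs then bounds the 95th percentile with 95% confidence. A quick sketch that derives N and checks the coverage empirically on a toy uniform model:

```python
import random

def wilks_n(gamma=0.95, beta=0.95):
    # Smallest sample size N with 1 - gamma**N >= beta (first-order Wilks):
    # the maximum of N runs bounds the gamma-quantile with confidence beta.
    n = 1
    while 1.0 - gamma ** n < beta:
        n += 1
    return n

n = wilks_n()                    # classic result: 59 runs for 95%/95%
rng = random.Random(0)
trials = 20_000
covered = 0
for _ in range(trials):
    best = max(rng.random() for _ in range(n))
    if best >= 0.95:             # sample maximum bounds the true 95th percentile
        covered += 1
coverage = covered / trials      # expected ≈ 1 - 0.95**59 ≈ 0.952
```

The coverage statement is distribution-free, which is why the same N applies whatever the (unknown) PCT distribution; the uniform case here is just the easiest to verify.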

120

Green's function Monte Carlo method with exact imaginary-time propagation

NASA Astrophysics Data System (ADS)

We present a general formulation of the Green’s function Monte Carlo method in imaginary-time quantum Monte Carlo which employs exact propagators. This algorithm has no time-step errors and is obtained by minimal modifications of the time-independent Green’s function Monte Carlo method. We describe how the method can be applied to the many-body Schrödinger equation, lattice Hamiltonians, and simple field theories. Our modification of the Green’s function Monte Carlo algorithm is applied to the ground state of liquid 4He. We calculate the zero-temperature imaginary-time diffusion constant and relate that to the effective mass of a mass-four “impurity” atom in liquid 4He.

Schmidt, K. E.; Niyaz, Parhat; Vaught, A.; Lee, Michael A.

2005-01-01

121

Green's function Monte Carlo method with exact imaginary-time propagation

We present a general formulation of the Green's function Monte Carlo method in imaginary-time quantum Monte Carlo which employs exact propagators. This algorithm has no time-step errors and is obtained by minimal modifications of the time-independent Green's function Monte Carlo method. We describe how the method can be applied to the many-body Schrödinger equation, lattice Hamiltonians, and simple field theories. Our modification of the Green's function Monte Carlo algorithm is applied to the ground state of liquid 4He. We calculate the zero-temperature imaginary-time diffusion constant and relate that to the effective mass of a mass-four 'impurity' atom in liquid 4He.

Schmidt, K.E.; Niyaz, Parhat; Vaught, A.; Lee, Michael A. [Department of Physics and Astronomy, Arizona State University, Tempe, Arizona 85287 (United States); Department of Physics, Kent State University, Kent, Ohio 44242 (United States)

2005-01-01

122

Complex diffusion Monte-Carlo method: test by the simulation of the 2D fermions

On the basis of the diffusion Monte Carlo method, we develop a method that allows simulation of quantum systems with a complex wave function. The method is exact and makes no approximations in simulating the modulus and the phase of the system's wave function. In our method the averaged value of any quantity has no direct contribution from the phase …

B. Abdullaev; M. Musakhanov; A. Nakamura

2001-01-01

123

ORIE 5582: Monte Carlo Methods in Financial Engineering. This course covers the principles … 2009. Prerequisites: ORIE 5581 (Monte Carlo Simulation), ORIE 5600 (Stochastic Calculus). Instructor: Peter … books may prove helpful: Monte Carlo Methods in Financial Engineering. P. Glasserman. Springer.

Keinan, Alon

124

Advanced computational methods for nodal diffusion, Monte Carlo, and SN problems

NASA Astrophysics Data System (ADS)

This document describes progress on five efforts for improving effectiveness of computational methods for particle diffusion and transport problems in nuclear engineering: (1) Multigrid methods for obtaining rapidly converging solutions of nodal diffusion problems. An alternative line relaxation scheme is being implemented into a nodal diffusion code. Simplified P2 has been implemented into this code. (2) Local Exponential Transform method for variance reduction in Monte Carlo neutron transport calculations. This work yielded predictions for both 1-D and 2-D x-y geometry better than conventional Monte Carlo with splitting and Russian Roulette. (3) Asymptotic Diffusion Synthetic Acceleration methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems. New transport differencing schemes have been obtained that allow solution by the conjugate gradient method, and the convergence of this approach is rapid. (4) Quasidiffusion (QD) methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems on irregular spatial grids. A symmetrized QD method has been developed in a form that results in a system of two self-adjoint equations that are readily discretized and efficiently solved. (5) Response history method for speeding up the Monte Carlo calculation of electron transport problems. This method was implemented into the MCNP Monte Carlo code. In addition, we have developed and implemented a parallel time-dependent Monte Carlo code on two massively parallel processors.

Martin, W. R.

1993-01-01

125

A New Monte Carlo Method for Time-Dependent Neutrino Radiation Transport

Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck & Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.

Ernazar Abdikamalov; Adam Burrows; Christian D. Ott; Frank Löffler; Evan O'Connor; Joshua C. Dolence; Erik Schnetter

2012-03-13

126

Time-step limits for a Monte Carlo Compton-scattering method

We perform a stability analysis of a Monte Carlo method for simulating the Compton scattering of photons by free electrons in high-energy-density applications and develop time-step limits that avoid unstable and oscillatory solutions. Implementing this Monte Carlo technique in multiphysics problems typically requires evaluating the material temperature at its beginning-of-time-step value, which can lead to this undesirable behavior. With a set of numerical examples, we demonstrate the efficacy of our time-step limits.

Densmore, Jeffery D [Los Alamos National Laboratory; Warsa, James S [Los Alamos National Laboratory; Lowrie, Robert B [Los Alamos National Laboratory

2009-01-01

127

In the present study, the energy dependence of the response of some popular thermoluminescent dosemeters (TLDs), such as LiF:Mg,Ti, LiF:Mg,Cu,P and CaSO4:Dy, has been investigated for synchrotron radiation in the energy range of 10-34 keV. The study utilised experimental, Monte Carlo and analytical methods. The Monte Carlo calculations were based on the EGSnrc and FLUKA codes. The calculated energy responses of all the TLDs using the EGSnrc and FLUKA codes show excellent agreement with each other. The analytically calculated response shows good agreement with the Monte Carlo-calculated response in the low-energy region. In the case of CaSO4:Dy, the Monte Carlo-calculated energy response is smaller by a factor of 3 at all energies in comparison with the experimental response when polytetrafluoroethylene (PTFE) (75 % by wt) is included in the Monte Carlo calculations. When PTFE is ignored in the Monte Carlo calculations, the difference between the calculated and experimental response decreases (both responses are comparable >25 keV). For the LiF-based TLDs, the Monte Carlo-based response shows reasonable agreement with the experimental response. PMID:20308052

Bakshi, A K; Chatterjee, S; Palani Selvam, T; Dhabekar, B S

2010-07-01

128

A Monte Carlo Synthetic-Acceleration Method for Solving the Thermal Radiation Diffusion Equation

We present a novel synthetic-acceleration-based Monte Carlo method for solving the equilibrium thermal radiation diffusion equation in three dimensions. The algorithm's performance is compared against traditional solution techniques using a Marshak benchmark problem and a more complex multiple-material problem. Our results show not only that our Monte Carlo method can be an effective solver for sparse matrix systems, but also that it performs competitively with deterministic methods, including preconditioned Conjugate Gradient, while producing numerically identical results. We also discuss various aspects of preconditioning the method and its general applicability to broader classes of problems.
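
As a concrete illustration of Monte Carlo as a linear solver, here is a sketch of the classical Neumann-Ulam random-walk estimator (not the authors' synthetic-acceleration algorithm) for one component of x = b + Hx; the 2x2 system and all parameters below are hypothetical:

```python
import random

def neumann_ulam_solve(h, b, i, n_walks=50000, p_cont=0.5, seed=13):
    """Monte Carlo (Neumann-Ulam) estimate of component i of the solution of
    x = b + H x, sampled as random walks through the Neumann series
    x = sum_k H^k b. Requires the series to converge (spectral radius < 1)."""
    rng = random.Random(seed)
    n = len(b)
    total = 0.0
    for _ in range(n_walks):
        state, w, score = i, 1.0, b[i]
        while rng.random() < p_cont:          # geometric walk termination
            nxt = rng.randrange(n)            # uniform transition
            w *= h[state][nxt] * n / p_cont   # importance-weight correction
            state = nxt
            score += w * b[state]
        total += score
    return total / n_walks

h = [[0.1, 0.2], [0.2, 0.1]]
b = [1.0, 2.0]
x0 = neumann_ulam_solve(h, b, 0)   # exact value is 1.3/0.77 = 1.6883...
```

Each walk contributes an unbiased sample of the Neumann series for the chosen component, which is the sense in which random walks "solve" the sparse system.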

Evans, Thomas M [ORNL] [ORNL; Mosher, Scott W [ORNL] [ORNL; Slattery, Stuart [University of Wisconsin, Madison] [University of Wisconsin, Madison

2014-01-01

129

Methods for constraining zero-point energy in classical Monte Carlo transition-state theory

Two microcanonical sampling methods for constraining zero-point energy (ZPE) within classical Monte Carlo transition-state theory (MCTST) are described. Each is based on the efficient microcanonical sampling method [H. W. Schranz, S. Nordholm, and G. Nyman, J. Chem. Phys. 94, 1487 (1991)], with exclusion of phase space points not satisfying imposed ZPE constraints. Method 1 requires extensive sampling of phase space

Alison J. Marks

1998-01-01

130

Simulation of agglomeration reactors via a coupled CFD/direct Monte-Carlo method

The present study deals with simulation of agglomeration reactors, based on a CFD calculation method for describing the turbulent flow field, coupled with a direct Monte-Carlo method for the agglomeration process. The potentiality of stochastic methods, well suited for simulation of spherical agglomeration where complex kernels have to take into account the effects of the particles size and of the

L. Madec; L. Falk; E. Plasari

2001-01-01

131

Multivariate Monte Carlo Methods for the Reflection Grating Spectrometers on XMM-Newton

We propose a novel multivariate Monte Carlo method as an efficient and flexible approach to analyzing extended X-ray sources with the Reflection Grating Spectrometer (RGS) on XMM Newton. A multi-dimensional interpolation method is used to efficiently calculate the response function for the RGS in conjunction with an arbitrary spatially-varying spectral model. Several methods of event comparison that effectively compare the multivariate RGS data are discussed. The use of a multi-dimensional instrument Monte Carlo also creates many opportunities for the use of complex astrophysical Monte Carlo calculations in diffuse X-ray spectroscopy. The methods presented here could be generalized to other X-ray instruments as well.

Peterson, J.

2004-11-10

132

Multivariate Monte Carlo Methods for the Reflection Grating Spectrometers on XMM-Newton

We propose a novel multivariate Monte Carlo method as an efficient and flexible approach to analyzing extended X-ray sources with the Reflection Grating Spectrometer (RGS) on XMM Newton. A multi-dimensional interpolation method is used to efficiently calculate the response function for the RGS in conjunction with an arbitrary spatially-varying spectral model. Several methods of event comparison that effectively compare the multivariate RGS data are discussed. The use of a multi-dimensional instrument Monte Carlo also creates many opportunities for the use of complex astrophysical Monte Carlo calculations in diffuse X-ray spectroscopy. The methods presented here could be generalized to other X-ray instruments as well.

J. R. Peterson; J. G. Jernigan; S. M. Kahn

2004-10-26

133

Clock Quantum Monte Carlo: an imaginary-time method for real-time quantum dynamics

In quantum information theory, there is an explicit mapping between general unitary dynamics and Hermitian ground state eigenvalue problems known as the Feynman-Kitaev Clock. A prominent family of methods for the study of quantum ground states are quantum Monte Carlo methods, and recently the full configuration interaction quantum Monte Carlo (FCIQMC) method has demonstrated great promise for practical systems. We combine the Feynman-Kitaev Clock with FCIQMC to formulate a new technique for the study of quantum dynamics problems. Numerical examples using quantum circuits are provided as well as a technique to further mitigate the sign problem through time-dependent basis rotations. Moreover, this method allows one to combine the parallelism of Monte Carlo techniques with the locality of time to yield an effective parallel-in-time simulation technique.

McClean, Jarrod R

2014-01-01

134

Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.

Perfetti, Christopher M [ORNL] [ORNL; Martin, William R [University of Michigan] [University of Michigan; Rearden, Bradley T [ORNL] [ORNL; Williams, Mark L [ORNL] [ORNL

2012-01-01

135

Multi-recursive Monte-Carlo method for nonlinear system state estimation

Nonlinear state estimation methods have essential advantages over linear methods. For the state-space model of a general SISO nonlinear system, a single-recursive Monte-Carlo (MC) state estimation method was presented, and then a multi-recursive MC estimation method was proposed. With a typical and designed mathematical model, simulation shows that the multi-recursive MC method has higher estimation precision at the cost of longer operation

Shenghong Xu; Jinhua Wu

2004-01-01

136

Multiscale Finite-Difference-Diffusion-Monte-Carlo Method for Simulating Dendritic Solidification

We present a novel hybrid computational method to simulate accurately dendritic solidification in the low undercooling limit where the dendrite tip radius is one or more orders of magnitude smaller than the characteristic spatial scale of variation of the surrounding thermal or solutal diffusion field. The first key feature of this method is an efficient multiscale diffusion Monte Carlo (DMC)

Mathis Plapp; Alain Karma

2000-01-01

137

A Monte Carlo method for the PDF (Probability Density Functions) equations of turbulent flow

NASA Astrophysics Data System (ADS)

The transport equations of joint probability density functions (pdfs) in turbulent flows are simulated using a Monte Carlo method because finite-difference solutions of the equations are impracticable, mainly due to the large dimensionality of the pdfs. Attention is focused on the equation for the joint pdf of chemical and thermodynamic properties in turbulent reactive flows. It is shown that the Monte Carlo method provides a true simulation of this equation and that the amount of computation required increases only linearly with the number of properties considered. Consequently, the method can be used to solve the pdf equation for turbulent flows involving many chemical species and complex reaction kinetics. Monte Carlo calculations of the pdf of temperature in a turbulent mixing layer are reported. These calculations are in good agreement with the measurements of Batt (1977).
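
The linear growth of cost with the number of properties can be seen directly in a particle representation of the joint pdf. The sketch below uses a hypothetical standard-normal joint pdf; the names and sizes are illustrative, not from the paper:

```python
import random

def particle_pdf_moments(d=5, n=10000, seed=3):
    """Represent a joint pdf of d scalar properties by n notional particles,
    each carrying d property values; estimating moments then costs O(n * d),
    i.e. linear in the number of properties."""
    rng = random.Random(seed)
    particles = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]
    return [sum(p[j] for p in particles) / n for j in range(d)]

means = particle_pdf_moments()

# a finite-difference grid with m points per property needs m**d cells instead
m, d = 20, 5
grid_cells = m ** d            # 3,200,000 cells for just 5 properties
particle_storage = 10000 * d   # 50,000 values for the same 5 properties
```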

Pope, S. B.

1980-02-01

138

Monte Carlo method for the evaluation of symptom association.

Gastroesophageal monitoring is limited to 96 hours by the current technology. This work presents a computational model to investigate symptom association in gastroesophageal reflux disease with larger data samples, revealing important deficiencies of the current methodology that must be taken into account in clinical evaluation. A computational model based on Monte Carlo analysis was implemented to simulate patients with known statistical characteristics. Thus, sets of 2000 10-day-long recordings were simulated and analyzed using the symptom index (SI), the symptom sensitivity index (SSI), and the symptom association probability (SAP). Afterwards, linear regression was applied to define the dependency of these indexes on the number of reflux episodes, the number of symptoms, the duration of the monitoring, and the probability of association. All the indexes were biased estimators of symptom association and therefore they do not consider the effect of chance: when symptom and reflux were completely uncorrelated, the values of the indexes under study were greater than zero. On the other hand, longer recordings reduced variability in the estimation of the SI and the SSI while increasing the value of the SAP. Furthermore, if the number of symptoms remains below one-tenth of the number of reflux episodes, it is not possible to achieve a positive value of the SSI. A limitation of this computational model is that it does not consider feeding and sleeping periods, differences between reflux episodes, or causation. However, the conclusions are not affected by these limitations. These facts represent important limitations in symptom association analysis, and therefore invasive treatments must not be considered based on the value of these indexes alone until a new methodology provides a more reliable assessment. PMID:23082973
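
The chance-association bias described here is easy to reproduce: simulate reflux and symptom times independently and the SI/SSI still come out positive. The sketch below assumes uniformly distributed event times and a 2-minute association window; the window and event counts are illustrative assumptions, not the paper's protocol:

```python
import random

def simulate_si_ssi(n_reflux=50, n_symptoms=10, days=10, window_s=120, seed=1):
    """Simulate statistically independent reflux and symptom event times and
    compute the symptom index (SI) and symptom sensitivity index (SSI)."""
    rng = random.Random(seed)
    total = days * 24 * 3600
    reflux = [rng.uniform(0, total) for _ in range(n_reflux)]
    symptoms = [rng.uniform(0, total) for _ in range(n_symptoms)]
    # a symptom is "associated" if some reflux episode precedes it in the window
    assoc_sym = sum(any(0 <= s - r <= window_s for r in reflux) for s in symptoms)
    assoc_ref = sum(any(0 <= s - r <= window_s for s in symptoms) for r in reflux)
    si = assoc_sym / n_symptoms   # fraction of symptoms preceded by reflux
    ssi = assoc_ref / n_reflux    # fraction of reflux followed by a symptom
    return si, ssi

# average over many simulated patients: both indexes exceed zero by chance alone
runs = [simulate_si_ssi(seed=k) for k in range(2000)]
mean_si = sum(r[0] for r in runs) / len(runs)
mean_ssi = sum(r[1] for r in runs) / len(runs)
```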

Barriga-Rivera, A; Elena, M; Moya, M J; Lopez-Alonso, M

2014-08-01

139

On sequential Monte Carlo sampling methods for Bayesian filtering

In this article, we present an overview of methods for sequential simulation from posterior distributions. These methods are of particular interest in Bayesian filtering for discrete time dynamic models that are typically nonlinear and non-Gaussian. A general importance sampling framework is developed that unifies many of the methods which have been proposed over the last few decades in several different
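
A minimal bootstrap particle filter (sequential importance resampling, the simplest member of the framework surveyed here) can be sketched for a toy state-space model; the model, particle count, and observation sequence are all illustrative assumptions:

```python
import math
import random

def bootstrap_filter(ys, n=500, seed=0):
    """Bootstrap particle filter for the toy model
    x_t = 0.5 * x_{t-1} + v_t,  y_t = x_t + w_t,  with v, w ~ N(0, 1).
    The proposal is the state-transition prior, so the importance weight
    is just the observation likelihood."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
    means = []
    for y in ys:
        # propagate particles through the state transition
        particles = [0.5 * x + rng.gauss(0.0, 1.0) for x in particles]
        # importance weights from the Gaussian observation likelihood
        w = [math.exp(-0.5 * (y - x) ** 2) for x in particles]
        s = sum(w)
        w = [wi / s for wi in w]
        means.append(sum(wi * x for wi, x in zip(w, particles)))
        # multinomial resampling to combat weight degeneracy
        particles = rng.choices(particles, weights=w, k=n)
    return means

# filter a short synthetic observation sequence
est = bootstrap_filter([0.2, -0.1, 0.4, 0.0])
```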

Arnaud Doucet; Simon Godsill; Christophe Andrieu

2000-01-01

140

On sequential Monte Carlo sampling methods for Bayesian filtering

In this article, we present an overview of methods for sequential simulation from posterior distributions. These methods are of particular interest in Bayesian filtering for discrete time dynamic models that are typically nonlinear and non-Gaussian. A general importance sampling framework is developed that unifies many of the methods which have been proposed over the last few decades in

Arnaud Doucet; Simon Godsill; Christophe Andrieu

2000-01-01

141

We present a new algorithm for calculating the Renyi entanglement entropy of interacting fermions using the continuous-time quantum Monte Carlo method. The algorithm only samples the interaction correction of the entanglement entropy, which by design ensures the efficient calculation of weakly interacting systems. Combined with Monte Carlo reweighting, the algorithm also performs well for systems with strong interactions. We demonstrate the potential of this method by studying the quantum entanglement signatures of the charge-density-wave transition of interacting fermions on a square lattice. PMID:25259962

Wang, Lei; Troyer, Matthias

2014-09-12

142

High-order Path Integral Monte Carlo methods for solving quantum dot problems

The conventional second-order Path Integral Monte Carlo method is plagued with the sign problem in solving many-fermion systems. This is due to the large number of anti-symmetric free fermion propagators that are needed to extract the ground state wave function at large imaginary time. In this work, we show that optimized fourth-order Path Integral Monte Carlo methods, which use no more than 5 free-fermion propagators, can yield accurate quantum dot energies for up to 20 polarized electrons with the use of the Hamiltonian energy estimator.

Chin, Siu A

2014-01-01

143

Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport

Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.

McKinley, M S; Brooks III, E D; Daffin, F

2004-12-13

144

A Hamiltonian Monte Carlo method for Bayesian Inference of Supermassive Black Hole Binaries

We investigate the use of a Hamiltonian Monte Carlo method to map out the posterior density function for supermassive black hole binaries. While previous Markov Chain Monte Carlo (MCMC) methods, such as Metropolis-Hastings MCMC, have been successfully employed for a number of different gravitational wave sources, these methods are essentially random walk algorithms. The Hamiltonian Monte Carlo treats the inverse likelihood surface as a "gravitational potential" and, by introducing canonical positions and momenta, dynamically evolves the Markov chain by solving Hamilton's equations of motion. We present an implementation of the Hamiltonian Markov Chain that is faster, and more efficient by a factor of approximately the dimension of the parameter space, than the standard MCMC.
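
To make the contrast with random-walk MCMC concrete, here is a minimal HMC sketch for a 1-D standard normal target (not the gravitational-wave likelihood of the paper); the step size and trajectory length are illustrative choices:

```python
import math
import random

def hmc_standard_normal(n_samples=2000, eps=0.2, n_leap=10, seed=42):
    """Hamiltonian Monte Carlo targeting a 1-D standard normal:
    U(q) = q^2 / 2, so grad U = q. Momenta are resampled each iteration and
    the trajectory is evolved with the leapfrog integrator."""
    rng = random.Random(seed)
    q = 0.0
    samples = []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)
        q_new, p_new = q, p
        # leapfrog integration of Hamilton's equations
        p_new -= 0.5 * eps * q_new
        for _ in range(n_leap - 1):
            q_new += eps * p_new
            p_new -= eps * q_new
        q_new += eps * p_new
        p_new -= 0.5 * eps * q_new
        # Metropolis accept/reject on the change in total energy
        h_old = 0.5 * q * q + 0.5 * p * p
        h_new = 0.5 * q_new * q_new + 0.5 * p_new * p_new
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            q = q_new
        samples.append(q)
    return samples

samples = hmc_standard_normal()
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```

Because the leapfrog steps follow the dynamics rather than diffusing, successive samples decorrelate far faster than a random-walk proposal of comparable step size.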

Edward K. Porter; Jérôme Carré

2013-11-29

145

The Coupled Electron-Ion Monte Carlo Method

Twenty years ago Car and Parrinello introduced an efficient method to perform Molecular Dynamics simulation for classical nuclei with forces computed on the fly by a Density Functional Theory (DFT) based electronic calculation [1]. Because the method allowed study of the statistical mechanics of classical nuclei with many-body electronic interactions, it opened the way for the use of simulation methods

Carlo Pierleoni; David M. Ceperley

2006-01-01

146

Review of quantum Monte Carlo methods and their applications

Correlation energy makes a small but very important contribution to the total energy of an electronic system. Among the traditional methods used to study electronic correlation are coupled clusters (CC), configuration interaction (CI) and manybody perturbation theory (MBPT) in quantum chemistry, and density functional theory (DFT) in solid state physics. An alternative method, which has been applied successfully to systems

Paulo H. Acioli

1997-01-01

147

Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.

Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.

2008-10-31

148

A Comparison of Four Clustering Methods Using MMPI Monte Carlo Data

Monte Carlo procedures were used to generate data sets that resembled MMPI psychotic (8-6), neurotic (1-3-2), and personality disorder (4-9) patterns. Lorr's clumping method, inverse factor analysis, average linkage, and Ward's method were the clustering methods compared. The solutions were found to vary in terms of misclassifications and coverage. The clustering solutions also varied as a function of

Roger K. Blashfield; Leslie C. Morey

1980-01-01

149

Dose distributions in prostate brachytherapy: comparison between Sievert and Monte Carlo methods

NASA Astrophysics Data System (ADS)

In the present work we discuss two numerical methods for the calculation of 3D spatial dose distributions, and their application in HDR prostate brachytherapy. We include a comparison between the results obtained with the Sievert integral method and the use of tables generated by Monte Carlo simulations of radiation transport in a homogeneous medium for two types of 192Ir encapsulated sources commonly used in Mexican hospitals. The application of both methods to clinical data is also presented.
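
The Sievert-integral side of such a comparison reduces to evaluating S(theta, mu*t) = integral from 0 to theta of exp(-mu*t / cos(phi)) d(phi) numerically for an encapsulated line source. A quadrature sketch follows; the filtration value mu*t = 0.5 is hypothetical:

```python
import math

def sievert_integral(theta, mu_t, n=1000):
    """Sievert integral S(theta, mu*t) = int_0^theta exp(-mu*t / cos(phi)) dphi,
    evaluated with the composite trapezoidal rule. It gives the attenuated
    fluence contribution of a line-source segment seen through a filter of
    optical thickness mu*t (requires theta < pi/2)."""
    h = theta / n
    total = 0.5 * (math.exp(-mu_t) + math.exp(-mu_t / math.cos(theta)))
    for i in range(1, n):
        total += math.exp(-mu_t / math.cos(i * h))
    return h * total

# with no filtration (mu*t = 0) the integral reduces to theta itself
val0 = sievert_integral(math.pi / 4, 0.0)
val = sievert_integral(math.pi / 4, 0.5)
```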

Almaraz, S.; Martínez-Dávalos, A.

2003-09-01

150

Quantum Monte Carlo method for the ground state of many-boson systems

We formulate a quantum Monte Carlo (QMC) method for calculating the ground state of many-boson systems. The method is based on a field-theoretical approach, and is closely related to existing fermion auxiliary-field QMC methods which are applied in several fields of physics. The ground-state projection is implemented as a branching random walk in the space of permanents consisting of identical

Wirawan Purwanto; Shiwei Zhang

2004-01-01

151

Monte Carlo simulation methods were used to investigate the absorbed dose distribution around several preliminary source configurations and the 3M Company models 6701, 6702, and 6711 I-125 seeds in water. Simulations for the preliminary sources, all of which were structurally simpler than the seeds, were conducted to demonstrate correct behavior of the computer software. The relative dose distributions of the

Gregory Scott Burns

1989-01-01

152

An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model.

ERIC Educational Resources Information Center

Examined the accuracy of the Gibbs sampling Markov chain Monte Carlo procedure for estimating item and person (theta) parameters in the one-parameter logistic model. Analyzed four empirical datasets using the Gibbs sampling, conditional maximum likelihood, marginal maximum likelihood, and joint maximum likelihood methods. Discusses the conditions…

Kim, Seock-Ho

2001-01-01

153

Calculation of the Entropy of random coil polymers with the hypothetical scanning Monte Carlo Method

In this paper HSMC is extended to random coil polymers by applying it to self-avoiding walks on a square lattice.

Meirovitch, Hagai

154

Calculation of channel flows of gases by the Monte-Carlo method with correction for collisions

Steady flow of a moderately rarefied gas in a circular cylindrical channel was considered for the case of the flow being produced by the interaction of two flows colliding head-on. An iterative procedure based on the Monte Carlo method was used to solve the five-dimensional equations describing molecules with an arbitrary collisional cross section dependent on the relative velocity of

N. M. Bazarnova

1981-01-01

155

Application of Monte Carlo Method to Solve the Neutron Kinetics Equation for a Subcritical Assembly

There is a need to understand the space-dependent kinetics of fast or thermal reactor physics and the Monte Carlo method should be implemented in kinetics codes as well. In a transient accident (for example, control rod ejection accident or loss of coolant accident), changes in the system are much slower than the prompt neutron lifetime. In the present paper, a

Kohei IWANAGA; Hiroshi SEKIMOTO; Takamasa MORI

2008-01-01

156

American Option Pricing on Reconfigurable Hardware Using Least-Squares Monte Carlo Method

The early exercise of American-style options is one of the most important problems in option pricing theory. Unlike European options, American options have the feature of early exercise, which makes them hard to simulate
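
The standard least-squares Monte Carlo approach here is the Longstaff-Schwartz algorithm. A plain CPU sketch (not the paper's FPGA implementation) with illustrative contract parameters:

```python
import numpy as np

def lsmc_american_put(s0=36.0, k=40.0, r=0.06, sigma=0.2, t=1.0,
                      n_steps=50, n_paths=20000, seed=7):
    """Least-squares Monte Carlo (Longstaff-Schwartz) price of an American put.
    Continuation values are regressed on the basis [1, S, S^2] over
    in-the-money paths only, working backwards from expiry."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    # simulate geometric Brownian motion paths at times dt, 2*dt, ..., T
    z = rng.standard_normal((n_paths, n_steps))
    log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                   + sigma * np.sqrt(dt) * z, axis=1)
    s = np.exp(log_s)
    payoff = np.maximum(k - s[:, -1], 0.0)      # cash flow if held to expiry
    for i in range(n_steps - 2, -1, -1):
        payoff *= np.exp(-r * dt)               # discount one step back
        itm = k - s[:, i] > 0.0                 # regress in-the-money paths only
        if itm.sum() > 3:
            x = s[itm, i]
            basis = np.column_stack([np.ones_like(x), x, x * x])
            coef, *_ = np.linalg.lstsq(basis, payoff[itm], rcond=None)
            cont = basis @ coef                 # estimated continuation value
            exercise = k - x
            do_ex = exercise > cont
            payoff[np.where(itm)[0][do_ex]] = exercise[do_ex]
    return float(np.exp(-r * dt) * payoff.mean())

price = lsmc_american_put()
```

With these parameters the published benchmark value is roughly 4.48, so the estimate should land near there up to Monte Carlo noise.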

Arslan, Tughrul

157

Hybrid LSDA/Diffusion Quantum Monte-Carlo Method for Spin Sequences in Vertical Quantum Dots

We present a new hybrid Diffusion Quantum Monte-Carlo (DQMC)/Local Spin Density Approximation (LSDA) method to compute the electronic structure of vertical quantum dots (VQDs). The exact many-body electronic configuration is computed with a realistic confining potential. Our model confirms the atomic-like model of 2D shell structures obeying Hund's rule already predicted by LSDA.

P. Matagne; T. Wilkens; J. P. Leburton; R. Martin

2002-01-01

158

Quantum Monte Carlo methods for first principles simulation of liquid water

Obtaining an accurate microscopic description of water structure and dynamics is of great interest to molecular biology researchers and in the physics and quantum chemistry simulation communities. This dissertation describes efforts to apply quantum Monte Carlo methods to this problem with the goal of making progress toward a fully ab initio quantum mechanical description of water that is accurate, efficient,

John Robert Gergely

2009-01-01

159

A Investigation of Exact Quantum Monte Carlo Methods for Small Molecules

The nonrelativistic ground-state energy of the hydrogen molecule is calculated using an improved Green's function quantum Monte Carlo method without the use of the Born-Oppenheimer or any other adiabatic approximation. A more accurate trial function for importance sampling and the use of exact cancellation combine to yield an energy which is a factor of ten more accurate than that of

Bin Chen

1994-01-01

160

Study of CANDU Thorium-based Fuel Cycles by Deterministic and Monte Carlo Methods

There is a renewal of interest in self-sustainable thorium fuel cycles applied to various concepts such as Molten Salt Reactors [1,2] or High Temperature Reactors [3,4]. Precise evaluations of the U-233 production

Paris-Sud XI, Université de

161

Mixed particle Monte Carlo method for deep submicron semiconductor device simulator

A particle simulation method is introduced in which two kinds of particle models are used in one device. A conventional Monte Carlo particle model is used in the region where nonstatic effects are evident, and a particle model based on Langevin's equation is used in the region where the drift-diffusion approximation is valid. In this way it is possible to

Gyo-young Jin; Young-june Park; Hong-shick Min

1991-01-01

162

Electrical conductivity of high-pressure liquid hydrogen by quantum Monte Carlo methods.

We compute the electrical conductivity for liquid hydrogen at high pressure using Monte Carlo techniques. The method uses coupled electron-ion Monte Carlo simulations to generate configurations of liquid hydrogen. For each configuration, correlated sampling of electrons is performed in order to calculate a set of lowest many-body eigenstates and current-current correlation functions of the system, which are summed over in the many-body Kubo formula to give ac electrical conductivity. The extrapolated dc conductivity at 3000 K for several densities shows a liquid semiconductor to liquid-metal transition at high pressure. Our results are in good agreement with shock-wave data. PMID:20366267

Lin, Fei; Morales, Miguel A; Delaney, Kris T; Pierleoni, Carlo; Martin, Richard M; Ceperley, D M

2009-12-18

163

Spin-orbit induced backflow in neutron matter with auxiliary field diffusion Monte Carlo method

NASA Astrophysics Data System (ADS)

The energy per particle of zero-temperature neutron matter is investigated, with particular emphasis on the role of the L·S interaction. An analysis of the importance of explicit spin-orbit correlations in the description of the system is carried out by the auxiliary field diffusion Monte Carlo method. The improved nodal structure of the guiding function, constructed by explicitly considering these correlations, lowers the energy. The proposed spin-backflow orbitals can also be conveniently used in the Green's function Monte Carlo calculations of light nuclei.

Brualla, L.; Fantoni, S.; Sarsa, A.; Schmidt, K. E.; Vitiello, S. A.

2003-06-01

164

On Performance Measures for Infinite Swapping Monte Carlo Methods

We introduce and illustrate a number of performance measures for rare-event sampling methods. These measures are designed to be of use in a variety of expanded ensemble techniques including parallel tempering as well as infinite and partial infinite swapping approaches. Using a variety of selected applications we address questions concerning the variation of sampling performance with respect to key computational ensemble parameters.
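
A reduced sketch of the expanded-ensemble idea measured here: parallel tempering on a 1-D double well, where replica swaps let the cold chain inherit barrier crossings from the hot chain. The temperatures, potential, and sweep count are illustrative assumptions:

```python
import math
import random

def parallel_tempering(n_sweeps=5000, betas=(0.2, 0.5, 1.0), seed=11):
    """Parallel tempering on the double-well potential U(x) = (x^2 - 1)^2:
    one Metropolis chain per inverse temperature beta, with a replica-swap
    move accepted with probability min(1, exp((b_i - b_j) * (U_i - U_j)))."""
    rng = random.Random(seed)
    u = lambda x: (x * x - 1.0) ** 2
    xs = [1.0 for _ in betas]          # all replicas start in the right well
    cold = []
    for _ in range(n_sweeps):
        for i, b in enumerate(betas):  # local Metropolis update per replica
            prop = xs[i] + rng.gauss(0.0, 0.5)
            if rng.random() < math.exp(min(0.0, -b * (u(prop) - u(xs[i])))):
                xs[i] = prop
        # attempt a swap between a random adjacent temperature pair
        i = rng.randrange(len(betas) - 1)
        d = (betas[i] - betas[i + 1]) * (u(xs[i]) - u(xs[i + 1]))
        if rng.random() < math.exp(min(0.0, d)):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
        cold.append(xs[-1])            # record the lowest-temperature chain
    return cold

cold = parallel_tempering()
frac_left = sum(1 for x in cold if x < 0) / len(cold)
```

A natural performance measure in this setting is how evenly the cold chain splits its time between the two wells, which the fraction above probes directly.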

J. D. Doll; Paul Dupuis

2014-10-14

165

Density-of-states Monte Carlo method for simulation of fluids

A Monte Carlo method based on a density of states in the two-dimensional space of particle number and energy is used to estimate the density of states

Faller, Roland

166

While the use of the Monte Carlo method has been prevalent in nuclear engineering, it has yet to fully blossom in the study of solute transport in porous media. By using an etched-glass micromodel, an attempt is made to apply the Monte Carlo method...

Chung, Kiwhan

2012-06-07

167

Parametric weighted minimax estimates in Monte Carlo methods

NASA Astrophysics Data System (ADS)

In the similar trajectory method (STM), the numerical-statistical modeling of trajectories of particles (radiation quanta) is implemented by applying an auxiliary radiation model. Weighted estimates of the functionals are calculated simultaneously for a set of physical parameter values. Theoretical and numerical aspects of choosing an auxiliary model with the aim of minimizing the parametric maximum of the mean-square error in weighted estimates are discussed. Previously known results concerning minimax STM algorithms are refined, and new assertions are obtained. The STM is used to numerically study the parametric dependence of the "transport approximation" error for the particle transmission, absorption, and reflection probabilities.

Mikhailov, G. A.; Rozhenko, S. A.

2013-09-01

168

A Monte Carlo investigation of the dual photopeak window scatter correction method [SPECT

Results from a Monte Carlo investigation of the dual photopeak window (DPW) scatter correction method are presented for point and extended sources of Tc-99m in both homogeneous and nonhomogeneous attenuating media. The DPW method uses the ratio of counts in two nonoverlapping energy windows within the photopeak region as input to a regression relation. A pixel-by-pixel estimate of the scatter

George J. Hademenos; Michael Ljungberg; Michael A. King; Stephen J. Glick

1993-01-01

169

Multivariate Monte Carlo Methods for the Reflection Grating Spectrometers on XMM-Newton

We propose a novel multivariate Monte Carlo method as an efficient and flexible approach to analyzing extended X-ray sources with the Reflection Grating Spectrometer (RGS) on XMM Newton. A multi-dimensional interpolation method is used to efficiently calculate the response function for the RGS in conjunction with an arbitrary spatially-varying spectral model. Several methods of event comparison that effectively compare the

J. R. Peterson; J. G. Jernigan; S. M. Kahn

2004-01-01

170

Linear multistep methods, particle filtering and sequential Monte Carlo

NASA Astrophysics Data System (ADS)

Numerical integration is the main bottleneck in particle filter methodologies for dynamic inverse problems to estimate model parameters, initial values, and non-observable components of an ordinary differential equation (ODE) system from partial, noisy observations, because proposals may result in stiff systems which first slow down or paralyze the time integration process, then end up being discarded. The immediate advantage of formulating the problem in a sequential manner is that the integration is carried out on shorter intervals, thus reducing the risk of long integration processes followed by rejections. We propose to solve the ODE systems within a particle filter framework with higher order numerical integrators which can handle stiffness and to base the choice of the variance of the innovation on estimates of the discretization errors. The application of linear multistep methods to particle filters gives a handle on the stability and accuracy of the propagation, and linking the innovation variance to the accuracy estimate helps keep the variance of the estimate as low as possible. The effectiveness of the methodology is demonstrated with a simple ODE system similar to those arising in biochemical applications.

Arnold, Andrea; Calvetti, Daniela; Somersalo, Erkki

2013-08-01

171

Time-step limits for a Monte Carlo Compton-scattering method

Compton scattering is an important aspect of radiative transfer in high energy density applications. In this process, the frequency and direction of a photon are altered by colliding with a free electron. The change in frequency of a scattered photon results in an energy exchange between the photon and target electron and energy coupling between radiation and matter. Canfield, Howard, and Liang have presented a Monte Carlo method for simulating Compton scattering that models the photon-electron collision kinematics exactly. However, implementing their technique in multiphysics problems that include the effects of radiation-matter energy coupling typically requires evaluating the material temperature at its beginning-of-time-step value. This explicit evaluation can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and present time-step limits that avoid instabilities and nonphysical oscillations by considering a spatially independent, purely scattering radiative-transfer problem. Examining a simplified problem is justified because it isolates the effects of Compton scattering, and existing Monte Carlo techniques can robustly model other physics (such as absorption, emission, sources, and photon streaming). Our analysis begins by simplifying the equations that are solved via Monte Carlo within each time step using the Fokker-Planck approximation. Next, we linearize these approximate equations about an equilibrium solution such that the resulting linearized equations describe perturbations about this equilibrium. We then solve these linearized equations over a time step and determine the corresponding eigenvalues, quantities that can predict the behavior of solutions generated by a Monte Carlo simulation as a function of time-step size and other physical parameters. With these results, we develop our time-step limits. 
This approach is similar to our recent investigation of time discretizations for the Compton-scattering Fokker-Planck equation.
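The flavor of such a stability analysis can be shown on a toy linearized model (mine, not the authors' Fokker-Planck system): a radiation-matter temperature difference δ obeying dδ/dt = -2σδ, advanced with the material term frozen at its beginning-of-time-step value, has a per-step growth factor g = 1 - 2σΔt, which is monotone for σΔt ≤ 1/2, damped but oscillatory for 1/2 < σΔt ≤ 1, and unstable beyond.

```python
# Toy explicit-coupling stability sketch (illustrative model, not the
# paper's equations): delta_{n+1} = (1 - 2*sigma*dt) * delta_n.

def amplification_factor(sigma, dt):
    """Per-step growth factor of the explicit update."""
    return 1.0 - 2.0 * sigma * dt

def classify(sigma, dt):
    g = amplification_factor(sigma, dt)
    if abs(g) > 1.0:
        return "unstable"       # perturbations grow without bound
    if g < 0.0:
        return "oscillatory"    # damped but sign-flipping each step
    return "stable"

# With sigma = 1, the three regimes appear as the time step grows:
regimes = [classify(1.0, dt) for dt in (0.25, 0.75, 1.5)]
```

The time-step limits derived in the paper play the role of the σΔt ≤ 1/2 and σΔt ≤ 1 thresholds in this toy model.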

Densmore, Jeffery D [Los Alamos National Laboratory; Warsa, James S [Los Alamos National Laboratory; Lowrie, Robert B [Los Alamos National Laboratory

2008-01-01

172

Application de la methode des sous-groupes au calcul Monte-Carlo multigroupe (Application of the subgroup method to multigroup Monte Carlo calculations)

NASA Astrophysics Data System (ADS)

This thesis is dedicated to the development of a Monte Carlo neutron transport solver based on the subgroup (or multiband) method. In this formalism, cross sections for resonant isotopes are represented in the form of probability tables on the whole energy spectrum. This study is intended to test and validate this approach in lattice physics and criticality-safety applications. The probability table method seems promising since it introduces an alternative computational path between the legacy continuous-energy representation and the multigroup method. In the first case, the amount of data invoked in continuous-energy Monte Carlo calculations can be very large and tends to slow down the overall computation. In addition, this model preserves the quality of the physical laws present in the ENDF format. Due to its cheap computational cost, the multigroup Monte Carlo approach is usually at the basis of production codes in criticality-safety studies. However, the use of a multigroup representation of the cross sections implies a preliminary calculation to take into account self-shielding effects for resonant isotopes. This is generally performed by deterministic lattice codes relying on the collision probability method. Using cross-section probability tables on the whole energy range makes it possible to take self-shielding effects into account directly, and can be employed in both lattice physics and criticality-safety calculations. Several aspects have been thoroughly studied: (1) The consistent computation of probability tables with an energy grid comprising only 295 or 361 groups. The CALENDF moment approach led to probability tables suitable for a Monte Carlo code. (2) The combination of the probability table sampling for the energy variable with the delta-tracking rejection technique for the space variable, and its impact on the overall efficiency of the proposed Monte Carlo algorithm.
(3) The derivation of a model for taking into account anisotropic effects of the scattering reaction consistent with the subgroup method. In this study, we generalize the Discrete Angle Technique, already proposed for homogeneous, multigroup cross sections, to isotopic cross sections in the form of probability tables. In this technique, the angular density is discretized into probability tables. Similarly to the cross-section case, a moment approach is used to compute the probability tables for the scattering cosine. (4) The introduction of a leakage model based on the B1 fundamental mode approximation. Unlike deterministic lattice packages, most Monte Carlo-based lattice physics codes do not include leakage models. However, the generation of homogenized and condensed group constants (cross sections, diffusion coefficients) requires the critical flux. This project has involved the development of a program into the DRAGON framework, written in Fortran 2003 and wrapped with a driver in C, the GANLIB 5. Choosing Fortran 2003 has permitted the use of some modern features, such as the definition of objects and methods, data encapsulation and polymorphism. The validation of the proposed code has been performed by comparison with other numerical methods: (1) The continuous-energy Monte Carlo method of the SERPENT code. (2) The Collision Probability (CP) method and the discrete ordinates (SN) method of the DRAGON lattice code. (3) The multigroup Monte Carlo code MORET, coupled with the DRAGON code. Benchmarks used in this work are representative of some industrial configurations encountered in reactor and criticality-safety calculations: (1) Pressurized Water Reactors (PWR) cells and assemblies. (2) Canada-Deuterium Uranium Reactors (CANDU-6) clusters. (3) Critical experiments from the ICSBEP handbook (International Criticality Safety Benchmark Evaluation Program).
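The sampling scheme at the heart of points (1) and (2) — probability tables for the energy variable combined with delta-tracking rejection for the space variable — can be sketched as follows. This is an illustrative Python sketch under assumed data layouts: the `table` of `(probability, cross_section)` bands and the helper names are hypothetical, not from the thesis.

```python
import math
import random

def sample_band_xs(table, rng):
    """Sample a total cross section from a probability table:
    a list of (band probability, band cross section) pairs."""
    xi, cum = rng.random(), 0.0
    for prob, xs in table:
        cum += prob
        if xi <= cum:
            return xs
    return table[-1][1]  # guard against rounding of the cumulative sum

def delta_tracking_flight(x0, sigma_of_x, sigma_maj, rng, x_max):
    """Woodcock/delta tracking: fly with the majorant cross section,
    then accept a real collision with probability sigma(x)/sigma_maj."""
    x = x0
    while True:
        x += -math.log(rng.random()) / sigma_maj
        if x >= x_max:
            return None            # particle escaped the slab
        if rng.random() < sigma_of_x(x) / sigma_maj:
            return x               # real collision site
```

In the thesis's scheme, `sample_band_xs` would be called per neutron flight so that self-shielding is captured by the band sampling, while `delta_tracking_flight` removes the need to compute surface crossings.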

Martin, Nicolas

173

We propose a new variational Monte Carlo (VMC) method with an energy variance extrapolation for large-scale shell-model calculations. This variational Monte Carlo is a stochastic optimization method with a projected correlated condensed pair state as a trial wave function, and is formulated with the M-scheme representation of projection operators, the Pfaffian and the Markov-chain Monte Carlo (MCMC). Using this method, we can stochastically calculate approximated yrast energies and electro-magnetic transition strengths. Furthermore, by combining this VMC method with energy variance extrapolation, we can estimate exact shell-model energies.
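The energy-variance extrapolation step can be sketched in a few lines (synthetic data, not the authors' implementation): since E ≈ E_exact + a·ΔE² for a sequence of improving trial states, a linear fit in the variance extrapolated to ΔE² = 0 estimates the exact shell-model energy.

```python
# Energy-variance extrapolation sketch: fit E = E_exact + a * dE2
# by least squares and read off the intercept at zero variance.

def extrapolate_energy(variances, energies):
    n = len(variances)
    sx = sum(variances)
    sy = sum(energies)
    sxx = sum(v * v for v in variances)
    sxy = sum(v * e for v, e in zip(variances, energies))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return intercept  # estimate of the zero-variance (exact) energy

# Synthetic data generated from E = -100.0 + 2.5 * dE2 (illustrative units):
pts = [(0.4, -99.0), (0.2, -99.5), (0.1, -99.75)]
E0 = extrapolate_energy([v for v, _ in pts], [e for _, e in pts])
```

With exact linear data the fit recovers the intercept -100.0; real MCSM/VMC data carry statistical noise, so the extrapolation inherits an error bar from the fit.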

Mizusaki, Takahiro

2012-01-01

174

We propose a new variational Monte Carlo (VMC) method with an energy variance extrapolation for large-scale shell-model calculations. This variational Monte Carlo is a stochastic optimization method with a projected correlated condensed pair state as a trial wave function, and is formulated with the M-scheme representation of projection operators, the Pfaffian and the Markov-chain Monte Carlo (MCMC). Using this method, we can stochastically calculate approximated yrast energies and electro-magnetic transition strengths. Furthermore, by combining this VMC method with energy variance extrapolation, we can estimate exact shell-model energies.

Takahiro Mizusaki; Noritaka Shimizu

2012-01-27

175

Mesh-based Monte Carlo method in time-domain widefield fluorescence molecular tomography.

We evaluated the potential of the mesh-based Monte Carlo (MC) method for widefield time-gated fluorescence molecular tomography, aiming to improve accuracy in both shape discretization and photon transport modeling in preclinical settings. An optimized software platform was developed utilizing multithreading and distributed parallel computing to achieve efficient calculation. We validated the proposed algorithm and software by both simulations and in vivo studies. The results establish that the optimized mesh-based Monte Carlo (mMC) method is a computationally efficient solution for optical tomography studies in terms of both calculation time and memory utilization. The open source code, as part of a new release of mMC, is publicly available at http://mcx.sourceforge.net/mmc/. PMID:23224008

Chen, Jin; Fang, Qianqian; Intes, Xavier

2012-10-01

176

Mesh-based Monte Carlo method in time-domain widefield fluorescence molecular tomography

NASA Astrophysics Data System (ADS)

We evaluated the potential of the mesh-based Monte Carlo (MC) method for widefield time-gated fluorescence molecular tomography, aiming to improve accuracy in both shape discretization and photon transport modeling in preclinical settings. An optimized software platform was developed utilizing multithreading and distributed parallel computing to achieve efficient calculation. We validated the proposed algorithm and software by both simulations and in vivo studies. The results establish that the optimized mesh-based Monte Carlo (mMC) method is a computationally efficient solution for optical tomography studies in terms of both calculation time and memory utilization. The open source code, as part of a new release of mMC, is publicly available at

Chen, Jin; Fang, Qianqian; Intes, Xavier

2012-10-01

177

Polynomial Expansion Method for the Monte Carlo Calculation of Strongly Correlated Electron Systems

We present a new Monte Carlo algorithm for a class of strongly correlated electron systems where electrons are strongly coupled to thermodynamically fluctuating classical fields. As an example, the method can be applied to electron-spin coupled systems such as the double-exchange model and dilute magnetic semiconductors, as well as electron-phonon systems. In these systems, calculation of the Boltzmann weights

N. Furukawa; Y. Motome

178

A Monte Carlo method for quantum spins using boson world lines

A new Monte Carlo method is described for quantum (s = 1/2) spins which maps the spin model onto a model of hard-core bosons. The Hamiltonian is then broken up into kinetic and potential parts and the Trotter formula used to simulate the Bose system. The power of this mapping comes from the fact that, by letting the system evolve through

E. Loh

1986-01-01

179

Application of the vector Monte-Carlo method in polarisation optical coherence tomography

The vector Monte-Carlo method is developed and applied to polarisation optical coherence tomography. The basic principles of simulation of the propagation of polarised electromagnetic radiation with a small coherence length are considered under conditions of multiple scattering. The results of numerical simulations for Rayleigh scattering agree well with the Milne solution generalised to the case of an electromagnetic field and with theoretical calculations in the diffusion approximation. (special issue devoted to multiple radiation scattering in random media)

Churmakov, D Yu [Cranfield Health, Cranfield University, Silsoe (United Kingdom); Kuz'min, V L [Saint Petersburg Institute of Commerce and Economics (Russian Federation); Meglinskii, I V [Department of Physics, N G Chernyshevskii Saratov State University, Saratov (Russian Federation)

2006-11-30

180

Spin-orbit induced backflow in neutron matter with auxiliary field diffusion Monte Carlo method

The energy per particle of zero-temperature neutron matter is investigated, with particular emphasis on the role of the spin-orbit (L·S) interaction. An analysis of the importance of explicit spin-orbit correlations in the description of the system is carried out by the auxiliary field diffusion Monte Carlo method. The improved nodal structure of the guiding function, constructed by explicitly considering these correlations,

L. Brualla; S. Fantoni; A. Sarsa; K. E. Schmidt; S. A. Vitiello

2003-01-01

181

Quantum Monte-Carlo method applied to Non-Markovian barrier transmission

In nuclear fusion and fission, fluctuation and dissipation arise due to the coupling of collective degrees of freedom with internal excitations. Close to the barrier, quantum, statistical and non-Markovian effects are all expected to be important. In this work, a new approach to this problem based on quantum Monte-Carlo is presented. The exact dynamics of a system coupled to an environment is replaced by a set of stochastic evolutions of the system density. The quantum Monte-Carlo method is applied to systems with quadratic potentials. Over the whole range of temperature and coupling, the stochastic method matches the exact evolution, showing that non-Markovian effects can be simulated accurately. A comparison with other theories, such as the Nakajima-Zwanzig or Time-ConvolutionLess approaches, shows that only the latter can be competitive if the expansion in terms of the coupling constant is carried out at least to fourth order. A systematic study of the inverted parabola case is made at different temperatures and coupling constants. The asymptotic passing probability is estimated in different approaches, including the Markovian limit. Large differences with the exact result are seen in the latter case, or when only second order in the coupling strength is considered, as is generally assumed in nuclear transport models. By contrast, if fourth order in the coupling or the quantum Monte-Carlo method is used, perfect agreement is obtained.

G. Hupin; D. Lacroix

2010-01-05

182

Monte Carlo Methods in Materials Science Based on FLUKA and ROOT

NASA Technical Reports Server (NTRS)

A comprehensive understanding of mitigation measures for space radiation protection necessarily involves the relevant fields of nuclear physics and particle transport modeling. One method of modeling the interaction of radiation traversing matter is Monte Carlo analysis, a subject that has been evolving since the very advent of nuclear reactors and particle accelerators in experimental physics. Countermeasures for radiation protection from neutrons near nuclear reactors, for example, were an early application and Monte Carlo methods were quickly adapted to this general field of investigation. The project discussed here is concerned with taking the latest tools and technology in Monte Carlo analysis and adapting them to space applications such as radiation shielding design for spacecraft, as well as investigating how next-generation Monte Carlo codes can complement the existing analytical methods currently used by NASA. We have chosen to employ the Monte Carlo program known as FLUKA (a legacy acronym based on the German for FLUctuating KAscade) to simulate all of the particle transport, and the CERN-developed graphical-interface object-oriented analysis software called ROOT. One aspect of space radiation analysis for which Monte Carlo methods are particularly suited is the study of secondary radiation produced as albedos in the vicinity of the structural geometry involved. This broad goal of simulating space radiation transport through the relevant materials employing the FLUKA code necessarily requires the addition of the capability to simulate all heavy-ion interactions from 10 MeV/A up to the highest conceivable energies. For all energies above 3 GeV/A the Dual Parton Model (DPM) is currently used, although the possible improvement of the DPMJET event generator for energies 3-30 GeV/A is being considered. One of the major tasks still facing us is the provision for heavy-ion interactions below 3 GeV/A.
The ROOT interface is being developed in conjunction with the CERN ALICE (A Large Ion Collisions Experiment) software team through an adaptation of their existing AliROOT (ALICE Using ROOT) architecture. In order to check our progress against actual data, we have chosen to simulate the ATIC14 (Advanced Thin Ionization Calorimeter) cosmic-ray astrophysics balloon payload as well as neutron fluences in the Mir spacecraft. This paper contains a summary of status of this project, and a roadmap to its successful completion.

Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor

2003-01-01

183

Modeling of near-continuum flows using the direct simulation Monte Carlo method

NASA Astrophysics Data System (ADS)

The direct simulation Monte Carlo (DSMC) method is used to model the flow of a hypersonic stream about a wedge. The Knudsen number of 0.00075 puts the flow into the continuum category and hence is a challenge for the DSMC method. The modeled flowfield is shown to agree extremely well with the experimental measurements in the wedge wake taken by Batt (1967). This experimental confirmation serves as a rigorous validation of the DSMC method and provides guidelines for computations of near-continuum flows.

Lohn, P. D.; Haflinger, D. E.; McGregor, R. D.; Behrens, H. W.

1990-06-01

184

NASA Astrophysics Data System (ADS)

The effective delayed neutron fraction β_eff plays an important role in kinetics and static analysis of the reactor physics experiments. It is used as reactivity unit referred to as "dollar". Usually, it is obtained by computer simulation due to the difficulty in measuring it experimentally. In 1965, Keepin proposed a method, widely used in the literature, for the calculation of the effective delayed neutron fraction β_eff. This method requires calculation of the adjoint neutron flux as a weighting function of the phase space inner products and is easy to implement by deterministic codes. With Monte Carlo codes, the solution of the adjoint neutron transport equation is much more difficult because of the continuous-energy treatment of nuclear data. Consequently, alternative methods, which do not require the explicit calculation of the adjoint neutron flux, have been proposed. In 1997, Bretscher introduced the k-ratio method for calculating the effective delayed neutron fraction; this method is based on calculating the multiplication factor of a nuclear reactor core with and without the contribution of delayed neutrons. The multiplication factor set by the delayed neutrons (the delayed multiplication factor) is obtained as the difference between the total and the prompt multiplication factors. Using Monte Carlo calculation Bretscher evaluated β_eff as the ratio between the delayed and total multiplication factors (therefore the method is often referred to as the k-ratio method). In the present work, the k-ratio method is applied by Monte Carlo (MCNPX) and deterministic (PARTISN) codes. In the latter case, the ENDF/B nuclear data library of the fuel isotopes (235U and 238U) has been processed by the NJOY code with and without the delayed neutron data to prepare multi-group WIMSD neutron libraries for the lattice physics code DRAGON, which was used to generate the PARTISN macroscopic cross sections.
In recent years, Meulekamp and van der Marck in 2006 and Nauchi and Kameyama in 2005 proposed new methods for the effective delayed neutron fraction calculation requiring only one Monte Carlo computer simulation, compared with the k-ratio method, which requires two criticality calculations. In this paper, the Meulekamp/Marck and Nauchi/Kameyama methods are applied for the first time by the MCNPX computer code and the results obtained by all different methods are compared.
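The k-ratio step itself reduces to one line of arithmetic once the two criticality calculations are done; the numbers below are illustrative, not results from the paper:

```python
# k-ratio method: run one criticality calculation with delayed neutron
# data (k_total) and one with prompt neutrons only (k_prompt), then
#     beta_eff = (k_total - k_prompt) / k_total.

def beta_eff_k_ratio(k_total, k_prompt):
    """Effective delayed neutron fraction from the two eigenvalues."""
    return (k_total - k_prompt) / k_total

# Illustrative values typical of a U-235 fueled system:
beta = beta_eff_k_ratio(1.00000, 0.99300)   # about 0.007, i.e. ~700 pcm
```

The one-run alternatives cited above avoid the subtraction of two statistically noisy eigenvalues, which is the main drawback of this two-run recipe.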

Zhong, Zhaopeng; Talamo, Alberto; Gohar, Yousry

2013-07-01

185

A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems

Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. 
Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model) for deep penetration problems such as examined in this paper. In this research, we investigate the application of a variant of the hybrid Monte Carlo-deterministic method proposed by Cooper and Larsen to global deep penetration problems involving binary stochastic media. To our knowledge, hybrid Monte Carlo-deterministic methods have not previously been applied to problems involving a stochastic medium. We investigate two approaches for computing the approximate deterministic estimate of the forward scalar flux distribution used to automatically generate the weight windows. The first approach uses the atomic mix approximation to the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. The second approach uses the Levermore-Pomraning model for the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. In both cases, we use Monte Carlo Algorithm B with weight windows automatically generated from the approximate forward scalar flux distribution to obtain the solution of the transport problem.
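The weight-window construction described above can be illustrated with a toy sketch (the normalization choice and function names are mine, not Cooper and Larsen's actual prescription): window centers are set inversely proportional to the deterministic forward-flux estimate, so that particle populations stay roughly uniform across the deep-penetration gradient.

```python
# Sketch: automatic weight-window centers from a deterministic forward
# scalar-flux estimate, w_c(i) proportional to 1/phi(i), normalized so
# the window center is 1 in a chosen reference cell (illustrative only).

def weight_window_centers(phi, ref_cell=0):
    """Window centers inversely proportional to the flux estimate."""
    ref = phi[ref_cell]
    return [ref / p for p in phi]

# A flux dropping several orders of magnitude across the problem:
flux = [1.0, 1e-2, 1e-4, 1e-6]
centers = weight_window_centers(flux)
# Particles entering low-flux cells are split to low weights, keeping
# the number of histories per cell roughly even.
```

In the paper's setting, `flux` would come from an atomic-mix or Levermore-Pomraning discrete-ordinates solve rather than being supplied by hand.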

Keady, K P; Brantley, P

2010-03-04

186

Quasi-Monte Carlo methods for lattice systems: A first look

NASA Astrophysics Data System (ADS)

We investigate the applicability of quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this behavior for certain problems to N^(-1), or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling. Catalogue identifier: AERJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERJ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence version 3 No. of lines in distributed program, including test data, etc.: 67759 No. of bytes in distributed program, including test data, etc.: 2165365 Distribution format: tar.gz Programming language: C and C++. Computer: PC. Operating system: Tested on GNU/Linux, should be portable to other operating systems with minimal efforts. Has the code been vectorized or parallelized?: No RAM: The memory usage directly scales with the number of samples and dimensions: Bytes used = “number of samples” × “number of dimensions” × 8 Bytes (double precision). Classification: 4.13, 11.5, 23. External routines: FFTW 3 library (http://www.fftw.org) Nature of problem: Certain physical models formulated as a quantum field theory through the Feynman path integral, such as quantum chromodynamics, require a non-perturbative treatment of the path integral. The only known approach that achieves this is the lattice regularization. In this formulation the path integral is discretized to a finite, but very high dimensional integral.
So far only Monte Carlo, and especially Markov chain Monte Carlo methods like the Metropolis or the hybrid Monte Carlo algorithm, have been used to calculate approximate solutions of the path integral. These algorithms often lead to the undesired effect of autocorrelation in the samples of observables and suffer in any case from the slow asymptotic error behavior proportional to N^(-1/2), where N is the number of samples. Solution method: This program applies the quasi-Monte Carlo approach and the reweighting technique (respectively the weighted uniform sampling method) to generate uncorrelated samples of observables of the anharmonic oscillator with an improved asymptotic error behavior. Unusual features: The application of the quasi-Monte Carlo approach is quite revolutionary in the field of lattice field theories. Running time: The running time depends directly on the number of samples N and dimensions d. On modern computers a run with up to N=2^16=65536 (including 9 replica runs) and d=100 should not take much longer than one minute.

Jansen, K.; Leovey, H.; Ammon, A.; Griewank, A.; Müller-Preussker, M.

2014-03-01

187

Analysis of single Monte Carlo methods for prediction of reflectance from turbid media

Starting from the radiative transport equation we derive the scaling relationships that enable a single Monte Carlo (MC) simulation to predict the spatially- and temporally-resolved reflectance from homogeneous semi-infinite media with arbitrary scattering and absorption coefficients. This derivation shows that a rigorous application of this single Monte Carlo (sMC) approach requires the rescaling to be done individually for each photon biography. We examine the accuracy of the sMC method when processing simulations on an individual photon basis and also demonstrate the use of adaptive binning and interpolation using non-uniform rational B-splines (NURBS) to achieve order of magnitude reductions in the relative error as compared to the use of uniform binning and linear interpolation. This improved implementation for sMC simulation serves as a fast and accurate solver to address both forward and inverse problems and is available for use at http://www.virtualphotonics.org/. PMID:21996904
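One of the scaling relationships that makes a single baseline simulation reusable is Beer-Lambert reweighting of stored per-photon path lengths, applied individually to each photon biography as the abstract emphasizes. The sketch below uses synthetic path lengths and hypothetical names; it is not the authors' solver:

```python
import math
import random

# Single-MC sketch: run one baseline simulation with zero absorption,
# record the total path length L_i of each detected photon, then predict
# reflectance at any absorption coefficient mua by per-photon reweighting:
#     R(mua) = (1/N) * sum_i exp(-mua * L_i).

def reflectance_from_paths(path_lengths, mua):
    n = len(path_lengths)
    return sum(math.exp(-mua * L) for L in path_lengths) / n

# Synthetic "baseline" path lengths (cm), standing in for real output:
rng = random.Random(42)
paths = [rng.expovariate(1.0) for _ in range(10000)]

r0 = reflectance_from_paths(paths, 0.0)   # no absorption: exactly 1.0
r1 = reflectance_from_paths(paths, 0.5)   # absorption lowers the signal
```

Rescaling for a different scattering coefficient additionally stretches each stored path, which is why the abstract insists the rescaling be done per photon rather than on binned averages.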

Martinelli, Michele; Gardner, Adam; Cuccia, David; Hayakawa, Carole; Spanier, Jerome; Venugopalan, Vasan

2011-01-01

188

Beyond the Born-Oppenheimer approximation with quantum Monte Carlo methods

NASA Astrophysics Data System (ADS)

In this work we develop tools that enable the study of nonadiabatic effects with variational and diffusion Monte Carlo methods. We introduce a highly accurate wave-function ansatz for electron-ion systems that can involve a combination of both clamped ions and quantum nuclei. We explicitly calculate the ground-state energies of H2, LiH, H2O, and FHF- using fixed-node quantum Monte Carlo with wave-function nodes that explicitly depend on the ion positions. The obtained energies implicitly include the effects arising from quantum nuclei and electron-nucleus coupling. We compare our results to the best theoretical and experimental results available and find excellent agreement.

Tubman, Norm M.; Kylänpää, Ilkka; Hammes-Schiffer, Sharon; Ceperley, David M.

2014-10-01

189

Extrapolation method in the Monte Carlo Shell Model and its applications

We demonstrate how the energy-variance extrapolation method works using the sequence of the approximated wave functions obtained by the Monte Carlo Shell Model (MCSM), taking 56Ni with pf-shell as an example. The extrapolation method is shown to work well even in the case that the MCSM shows slow convergence, such as 72Ge with f5pg9-shell. The structure of 72Se is also studied including the discussion of the shape-coexistence phenomenon.

Shimizu, Noritaka; Abe, Takashi [Department of Physics, University of Tokyo, Hongo, Tokyo 113-0033 (Japan); Utsuno, Yutaka [Advanced Science Research Center, Japan Atomic Energy Agency, Tokai, Ibaraki 319-1195 (Japan); Mizusaki, Takahiro [Institute of Natural Sciences, Senshu University, Tokyo, 101-8425 (Japan); Otsuka, Takaharu [Department of Physics, University of Tokyo, Hongo, Tokyo 113-0033 (Japan); Center for Nuclear Study, University of Tokyo, Hongo, Tokyo 113-0033 (Japan); National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, Michigan (United States); Honma, Michio [Center for Mathematical Sciences, Aizu University, Aizu-Wakamatsu, Fukushima 965-8580 (Japan)

2011-05-06

190

NASA Astrophysics Data System (ADS)

A particle approach using the Direct Simulation Monte Carlo (DSMC) method is used to solve the problem of blast impact with structures. A novel approach to model the solid boundary condition for particle methods is presented. The solver is validated against an analytical solution of the Riemann shock-tube problem and against experiments on interaction of a planar shock with a square cavity. Blast impact simulations are performed for two model shapes, a box and an I-shaped beam, assuming that the solid body does not deform. The solver uses a domain decomposition technique to run in parallel. The parallel performance of the solver on two Beowulf clusters is also presented.

Sharma, Anupam; Long, Lyle N.

2004-10-01

191

NASA Astrophysics Data System (ADS)

Plasma flows with high Knudsen numbers cannot be treated with classic continuum methods, as represented for example by the Navier-Stokes or the magnetohydrodynamic equations. Instead, the more fundamental Boltzmann equation has to be solved, which is done here approximately by particle based methods that also allow for thermal and chemical non-equilibrium. The Particle-In-Cell method is used to treat the collisionless Vlasov-Maxwell system, while neutral reactive flows are treated by the Direct Simulation Monte Carlo method. In this article, a combined approach is presented that allows the simulation of reactive, partially or fully ionized plasma flows. Both particle methods are briefly outlined and the coupling and parallelization strategies are described. As an example, the results of a streamer discharge simulation are presented and discussed in order to demonstrate the capabilities of the coupled method.

Munz, Claus-Dieter; Auweter-Kurtz, Monika; Fasoulas, Stefanos; Mirza, Asim; Ortwein, Philip; Pfeiffer, Marcel; Stindl, Torsten

2014-10-01

192

Monte Carlo methods and their analysis for Coulomb collisions in multicomponent plasmas

Highlights: •A general approach to Monte Carlo methods for multicomponent plasmas is proposed. •We show numerical tests for the two-component (electrons and ions) case. •An optimal choice of parameters for speeding up the computations is discussed. •A rigorous estimate of the error of approximation is proved. -- Abstract: A general approach to Monte Carlo methods for Coulomb collisions is proposed. Its key idea is an approximation of Landau–Fokker–Planck equations by Boltzmann equations of quasi-Maxwellian kind. It means that the total collision frequency for the corresponding Boltzmann equation does not depend on the velocities. This makes the simulation process very simple, since the collision pairs can be chosen arbitrarily, without restriction. It is shown that this approach includes the well-known methods of Takizuka and Abe (1977) [12] and Nanbu (1997) as particular cases, and generalizes the approach of Bobylev and Nanbu (2000). The numerical scheme of this paper is simpler than the schemes by Takizuka and Abe [12] and by Nanbu. We derive it for the general case of multicomponent plasmas and show some numerical tests for the two-component (electrons and ions) case. An optimal choice of parameters for speeding up the computations is also discussed. It is also proved that the order of approximation is not worse than O(√ε), where ε is a parameter of approximation equivalent to the time step Δt in earlier methods. A similar estimate is obtained for the methods of Takizuka and Abe and Nanbu.
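A minimal sketch of the binary-collision kinematics shared by these schemes follows; the isotropic post-collision direction below is a simplifying assumption of mine, not the Coulomb angular distribution of Takizuka-Abe or Nanbu. The pair's momentum and kinetic energy are conserved exactly by rotating the relative velocity at fixed magnitude.

```python
import math
import random

def collide(v1, v2, m1, m2, rng):
    """One binary collision: keep the center-of-mass velocity, redirect
    the relative velocity isotropically at fixed magnitude (assumption)."""
    g = [a - b for a, b in zip(v1, v2)]
    gmag = math.sqrt(sum(c * c for c in g))
    # random unit vector via normalized Gaussian components
    n = [rng.gauss(0.0, 1.0) for _ in range(3)]
    nn = math.sqrt(sum(c * c for c in n))
    gp = [gmag * c / nn for c in n]
    vcm = [(m1 * a + m2 * b) / (m1 + m2) for a, b in zip(v1, v2)]
    v1p = [c + m2 / (m1 + m2) * d for c, d in zip(vcm, gp)]
    v2p = [c - m1 / (m1 + m2) * d for c, d in zip(vcm, gp)]
    return v1p, v2p
```

The quasi-Maxwellian simplification praised in the abstract is that, with a velocity-independent collision frequency, the pairs fed into such a kernel can be chosen arbitrarily rather than sorted by relative speed.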

Bobylev, A.V., E-mail: alexander.bobylev@kau.se [Department of Mathematics, Karlstad University, SE-65188 Karlstad (Sweden); Potapenko, I.F., E-mail: firena@yandex.ru [Keldysh Institute for Applied Mathematics, RAS, 125047 Moscow (Russian Federation)

2013-08-01

193

A Survey of Monte Carlo Tree Search Methods

Monte Carlo Tree Search (MCTS) is a method for finding optimal decisions, with applications including Computer Go. The survey concludes that the field is ripe for future work. Index Terms: Monte Carlo Tree Search (MCTS), Upper Confidence Bounds (UCB). Authors include Cameron Browne, Member, IEEE, and Edward Powley, Member, IEEE.

Lucas, Simon M.

194

In this note we develop a robust implicit Monte Carlo (IMC) algorithm based on more accurately updating the linearized equilibrium radiation energy density. The method does not introduce oscillations in the solution and has the same limit as Δt → ∞ as the standard Fleck and Cummings IMC method. Moreover, the approach we introduce can be trivially added to current implementations of IMC by changing the definition of the Fleck factor. Using this new method we develop an adaptive scheme that uses either standard IMC or the modified method, basing the adaptation on a zero-dimensional problem solved in each cell. Numerical results demonstrate that the new method alleviates both the nonphysical overheating that occurs in standard IMC when the time step is large and significantly diminishes the statistical noise in the solution.

Mcclarren, Ryan G [Los Alamos National Laboratory; Urbatsch, Todd J [Los Alamos National Laboratory

2008-01-01

195

The narrow resonance (NR) approximation has, in the past, been applied to regular lattices with fairly simple unit cells. Attempts to use the NR approximation to deal with fine details of the lattice structure, or with complicated lattice cells, have generally been based on assumptions and approximations that are rather difficult to evaluate. A benchmark method is developed in which slowing down is still treated in the NR approximation, but spatial neutron transport is handled by Monte Carlo. This benchmark method is used to evaluate older methods for analyzing the double-heterogeneity effect in fast reactors, and for computing resonance integrals in the PROTEUS lattices. New methods for treating the PROTEUS lattices are proposed.

Chen, I.J.; Gelbard, E.M.

1988-07-01

196

Kinetic Monte Carlo Method for Rule-based Modeling of Biochemical Networks

We present a kinetic Monte Carlo method for simulating chemical transformations specified by reaction rules, which can be viewed as generators of chemical reactions, or equivalently, definitions of reaction classes. A rule identifies the molecular components involved in a transformation, how these components change, conditions that affect whether a transformation occurs, and a rate law. The computational cost of the method, unlike conventional simulation approaches, is independent of the number of possible reactions, which need not be specified in advance or explicitly generated in a simulation. To demonstrate the method, we apply it to study the kinetics of multivalent ligand-receptor interactions. We expect the method will be useful for studying cellular signaling systems and other physical systems involving aggregation phenomena. PMID:18851068
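The rule-based idea, evaluating propensities per reaction class rather than per enumerated reaction, can be sketched with a Gillespie-style kinetic Monte Carlo step. A toy dimerization example; the rule encoding, rate constant, and dict-based state are illustrative assumptions, not the authors' implementation:

```python
import math
import random

def kmc_step(state, rules, t, rng):
    """One kinetic Monte Carlo step: propensities are computed per rule
    (reaction class), not per enumerated reaction."""
    props = [rule["rate"](state) for rule in rules]
    total = sum(props)
    if total == 0.0:
        return state, math.inf          # no rule can fire
    t += rng.expovariate(total)         # waiting time to the next event
    r = rng.uniform(0.0, total)         # choose a rule proportionally to its rate
    for rule, p in zip(rules, props):
        r -= p
        if r <= 0.0:
            rule["apply"](state)
            break
    return state, t

# toy dimerization rule 2A -> B with mass-action propensity k*A*(A-1)/2
state = {"A": 100, "B": 0}
rules = [{
    "rate": lambda s: 0.01 * s["A"] * (s["A"] - 1) / 2,
    "apply": lambda s: s.update(A=s["A"] - 2, B=s["B"] + 1),
}]
t = 0.0
rng = random.Random(3)
while state["A"] >= 2:
    state, t = kmc_step(state, rules, t, rng)
```

Each event consumes two A molecules, so starting from 100 A the run ends with 50 B; the cost per step scales with the number of rules, not the number of possible reactions.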

Yang, Jin; Monine, Michael I.; Faeder, James R.; Hlavacek, William S.

2009-01-01

197

CCMR: Method Development of Dynamic Mass Diffusion Monte Carlo using Lennard-Jones Clusters

NSDL National Science Digital Library

The Lennard-Jones clusters, clusters of inert particles, have a long history of being studied. Many algorithms have been proposed and used with varying levels of success, from "basin hopping" [1] to "greedy search" [2]. Despite these achievements, the Lennard-Jones potential continues to be a widely studied model. Not only is it a good test case for new particle-structure algorithms, but it is still an interesting model that we can yet learn from. In this project we proposed to study these clusters using a method never before attempted: dynamic mass diffusion Monte Carlo.

Craig, Helen A.

2007-08-29

198

Efficient implementation of a Monte Carlo method for uncertainty evaluation in dynamic measurements

NASA Astrophysics Data System (ADS)

Measurement of quantities having time-dependent values such as force, acceleration or pressure is a topic of growing importance in metrology. The application of the Guide to the Expression of Uncertainty in Measurement (GUM) and its Supplements to the evaluation of uncertainty for such quantities is challenging. We address the efficient implementation of the Monte Carlo method described in GUM Supplements 1 and 2 for this task. The starting point is a time-domain observation equation. The steps of deriving a corresponding measurement model, the assignment of probability distributions to the input quantities in the model, and the propagation of the distributions through the model are all considered. A direct implementation of a Monte Carlo method can be intractable on many computers since the storage requirement of the method can be large compared with the available computer memory. Two memory-efficient alternatives to the direct implementation are proposed. One approach is based on applying updating formulae for calculating means, variances and point-wise histograms. The second approach is based on evaluating the measurement model sequentially in time. A simulated example is used to compare the performance of the direct and alternative procedures.
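The memory-efficient "updating formulae" approach for means and variances can be illustrated with Welford's classic one-pass recurrence, which never stores the full Monte Carlo sample. This is a sketch of the general idea, not the paper's exact implementation:

```python
def update(count, mean, m2, new_value):
    """One Welford update: running count, mean, and sum of squared
    deviations (m2), so no Monte Carlo draws need to be stored."""
    count += 1
    delta = new_value - mean
    mean += delta / count
    m2 += delta * (new_value - mean)
    return count, mean, m2

count, mean, m2 = 0, 0.0, 0.0
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    count, mean, m2 = update(count, mean, m2, x)
variance = m2 / (count - 1)  # sample variance
```

The same recurrence can be applied point-wise across a time series, which is what makes the storage requirement independent of the number of Monte Carlo trials.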

Eichstädt, S.; Link, A.; Harris, P.; Elster, C.

2012-06-01

199

NASA Astrophysics Data System (ADS)

The transcorrelated (TC) method is a useful approach to optimize the Jastrow–Slater-type many-body wave function FD. The basic idea of the TC method [1] is based on the similarity transformation of a many-body Hamiltonian H with respect to the Jastrow factor F, H_TC = (1/F) H F, in order to incorporate the correlation effect into H_TC. Both F and D are optimized by minimizing the variance σ² = ∫ |H_TC D − E D|² d^{3N}x. The optimization for F is implemented by the variational Monte Carlo calculation, and D is determined by the TC self-consistent-field equation for the one-body wave functions φ_μ(x), which is derived from the functional derivative of σ² with respect to φ_μ(x). In this talk, we will present the results given by the transcorrelated variational Monte Carlo (TC-VMC) method for the ground state [2] and the excited states of atoms [3]. [1] S. F. Boys and N. C. Handy, Proc. Roy. Soc. A 309, 209; 310, 43; 310, 63; 311, 309 (1969). [2] N. Umezawa and S. Tsuneyuki, J. Chem. Phys. 119, 10015 (2003). [3] N. Umezawa and S. Tsuneyuki, J. Chem. Phys. 121, 7070 (2004).

Umezawa, Naoto; Tsuneyuki, Shinji; Ohno, Takahisa; Shiraishi, Kenji; Chikyow, Toyohiro

2005-03-01

200

A new Monte Carlo method for dynamical evolution of non-spherical stellar systems

We have developed a novel Monte Carlo method for simulating the dynamical evolution of stellar systems in arbitrary geometry. The orbits of stars are followed in a smooth potential represented by a basis-set expansion and perturbed after each timestep using local velocity diffusion coefficients from the standard two-body relaxation theory. The potential and diffusion coefficients are updated after an interval of time that is a small fraction of the relaxation time, but may be longer than the dynamical time. Thus our approach is a bridge between the Spitzer's formulation of the Monte Carlo method and the temporally smoothed self-consistent field method. The primary advantages are the ability to follow the secular evolution of shape of the stellar system, and the possibility of scaling the amount of two-body relaxation to the necessary value, unrelated to the actual number of particles in the simulation. Possible future applications of this approach in galaxy dynamics include the problem of consumption of stars...

Vasiliev, Eugene

2014-01-01

201

NASA Astrophysics Data System (ADS)

Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning approach capable of handling non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on the Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals, depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.
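The SIR cycle used as the baseline above, propagate, reweight by the observation likelihood, then resample, can be sketched as follows. The Gaussian transition and likelihood and all parameter values are illustrative assumptions for a toy 1D state, not the WEP hydrologic model:

```python
import math
import random

def sir_step(particles, weights, transition, likelihood, obs, rng):
    """One SIR particle-filter cycle: propagate, reweight, resample."""
    # propagate each particle through the stochastic state model
    particles = [transition(p, rng) for p in particles]
    # reweight by the likelihood of the new observation
    weights = [w * likelihood(obs, p) for w, p in zip(weights, particles)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # multinomial resampling duplicates high-weight particles
    particles = rng.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)

# toy 1D state tracked through three noisy observations
rng = random.Random(0)
transition = lambda x, r: x + r.gauss(0.0, 0.1)            # random-walk dynamics
likelihood = lambda y, x: math.exp(-0.5 * ((y - x) / 0.2) ** 2)
particles = [rng.gauss(0.0, 1.0) for _ in range(500)]
weights = [1.0 / 500] * 500
for y in [0.1, 0.2, 0.3]:
    particles, weights = sir_step(particles, weights, transition, likelihood, y, rng)
estimate = sum(particles) / len(particles)
```

The repeated resampling in this plain SIR scheme is exactly what can cause the particle impoverishment that the paper's lagged regularized filter is designed to mitigate.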

Noh, S. J.; Tachikawa, Y.; Shiiba, M.; Kim, S.

2011-10-01

202

Enhanced parameter estimation with GLLS and the Bootstrap Monte Carlo method for dynamic SPECT.

The generalized linear least squares (GLLS) method has been shown to successfully construct unbiased parametric images from dynamic positron emission tomography (PET). However, the high level of noise intrinsic in single photon emission computed tomography (SPECT) can give rise to unsuccessful voxel-wise fitting using GLLS, resulting in physiologically meaningless estimates, such as negative kinetic parameters for compartment models. In this study, three approaches were investigated to improve the reliability of GLLS applied to dynamic SPECT data. The simulation and experimental results showed that GLLS with the aid of the Bootstrap Monte Carlo method proved successful in generating parametric images and preserving all of the major advantages of the original GLLS method, although at the expense of increased computation time. PMID:17945588

Wen, Lingfeng; Eberl, Stefan; Feng, Dagan

2006-01-01

203

Method to account for arbitrary strains in kinetic Monte Carlo simulations

NASA Astrophysics Data System (ADS)

We present a method for efficiently recomputing rates in a kinetic Monte Carlo simulation when the existing rate catalog is modified by the presence of a strain field. We use the concept of the dipole tensor to estimate the changes in the kinetic barriers that comprise the catalog, thereby obviating the need for recomputing them from scratch. The underlying assumptions in the method are that linear elasticity is valid, and that the topology of the underlying potential energy surface (and consequently, the fundamental structure of the rate catalog) is not changed by the strain field. As a simple test case, we apply the method to a single vacancy in zirconium diffusing in the strain field of a dislocation, and discuss the consequences of the assumptions on simulating more complex materials.

Subramanian, Gopinath; Perez, Danny; Uberuaga, Blas P.; Tomé, Carlos N.; Voter, Arthur F.

2013-04-01

204

A Deterministic-Monte Carlo Hybrid Method for Time-Dependent Neutron Transport Problems

A new deterministic-Monte Carlo hybrid solution technique is derived for the time-dependent transport equation. This new approach is based on dividing the time domain into a number of coarse intervals and expanding the transport solution in a series of polynomials within each interval. The solutions within each interval can be represented in terms of arbitrary source terms by using precomputed response functions. In the current work, the time-dependent response function computations are performed using the Monte Carlo method, while the global time-step march is performed deterministically. This work extends previous work by coupling the time-dependent expansions to space- and angle-dependent expansions to fully characterize the 1D transport response/solution. More generally, this approach represents an incremental extension of the steady-state coarse-mesh transport method that is based on global-local decompositions of large neutron transport problems. A homogeneous slab is discussed as an example of the new developments.

Justin Pounders; Farzad Rahnema

2001-10-01

205

Many important physiological processes operate at time and space scales far beyond those accessible to atom-realistic simulations, and yet discrete stochastic rather than continuum methods may best represent finite numbers of molecules interacting in complex cellular spaces. We describe and validate new tools and algorithms developed for a new version of the MCell simulation program (MCell3), which supports generalized Monte Carlo modeling of diffusion and chemical reaction in solution, on surfaces representing membranes, and combinations thereof. A new syntax for describing the spatial directionality of surface reactions is introduced, along with optimizations and algorithms that can substantially reduce computational costs (e.g., event scheduling, variable time and space steps). Examples for simple reactions in simple spaces are validated by comparison to analytic solutions. Thus we show how spatially realistic Monte Carlo simulations of biological systems can be far more cost-effective than often is assumed, and provide a level of accuracy and insight beyond that of continuum methods. PMID:20151023

KERR, REX A.; BARTOL, THOMAS M.; KAMINSKY, BORIS; DITTRICH, MARKUS; CHANG, JEN-CHIEN JACK; BADEN, SCOTT B.; SEJNOWSKI, TERRENCE J.; STILES, JOEL R.

2010-01-01

206

NASA Astrophysics Data System (ADS)

Computer simulations of light transport in multi-layered turbid media are an effective way to theoretically investigate light transport in tissue, which can be applied to the analysis, design and optimization of optical coherence tomography (OCT) systems. We present a computationally efficient method to calculate the diffuse reflectance due to ballistic and quasi-ballistic components of photons scattered in turbid media, which represents the signal in optical coherence tomography systems. Our importance sampling based Monte Carlo method enables the calculation of the OCT signal with less than one hundredth of the computational time required by the conventional Monte Carlo method. It also does not produce a systematic bias in the statistical result that is typically observed in existing methods to speed up Monte Carlo simulations of light transport in tissue. This method can be used to assess and optimize the performance of existing OCT systems, and it can also be used to design novel OCT systems.

Lima, Ivan T., Jr.; Kalra, Anshul; Hernández-Figueroa, Hugo E.; Sherif, Sherif S.

2012-03-01

207

Charged-Particle Thermonuclear Reaction Rates: I. Monte Carlo Method and Statistical Distributions

A method based on Monte Carlo techniques is presented for evaluating thermonuclear reaction rates. We begin by reviewing commonly applied procedures and point out that reaction rates that have been reported up to now in the literature have no rigorous statistical meaning. Subsequently, we associate each nuclear physics quantity entering in the calculation of reaction rates with a specific probability density function, including Gaussian, lognormal and chi-squared distributions. Based on these probability density functions the total reaction rate is randomly sampled many times until the required statistical precision is achieved. This procedure results in a median (Monte Carlo) rate which agrees under certain conditions with the commonly reported recommended "classical" rate. In addition, we present at each temperature a low rate and a high rate, corresponding to the 0.16 and 0.84 quantiles of the cumulative reaction rate distribution. These quantities are in general different from the statistically meaningless "minimum" (or "lower limit") and "maximum" (or "upper limit") reaction rates which are commonly reported. Furthermore, we approximate the output reaction rate probability density function by a lognormal distribution and present, at each temperature, the lognormal parameters μ and σ. The values of these quantities will be crucial for future Monte Carlo nucleosynthesis studies. Our new reaction rates, appropriate for bare nuclei in the laboratory, are tabulated in the second paper of this series (Paper II). The nuclear physics input used to derive our reaction rates is presented in the third paper of this series (Paper III). In the fourth paper of this series (Paper IV) we compare our new reaction rates to previous results.
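The sampling scheme described, drawing the nuclear inputs from assigned probability density functions and reporting the 0.16, 0.50, and 0.84 quantiles of the resulting rate distribution, can be sketched as follows. The two lognormal input factors and their parameters are purely illustrative stand-ins, not the paper's nuclear physics input:

```python
import random

def monte_carlo_rate(sample_inputs, rate_from_inputs, n=10000, seed=1):
    """Sample the input quantities n times and summarize the resulting
    rate distribution by its 0.16, 0.50 and 0.84 quantiles."""
    rng = random.Random(seed)
    rates = sorted(rate_from_inputs(sample_inputs(rng)) for _ in range(n))
    quantile = lambda f: rates[int(f * (n - 1))]
    return quantile(0.16), quantile(0.50), quantile(0.84)

# toy model: the rate is a product of two lognormally distributed inputs
# (hypothetical stand-ins for, e.g., a resonance strength and an S-factor)
sample = lambda rng: (rng.lognormvariate(0.0, 0.3), rng.lognormvariate(1.0, 0.2))
rate = lambda inputs: inputs[0] * inputs[1]
low, median, high = monte_carlo_rate(sample, rate)
```

For this toy case the product of lognormals is itself lognormal, so the (low, median, high) triple recovers the μ and σ parameterization the paper tabulates.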

Richard Longland; Christian Iliadis; Art Champagne; Joe Newton; Claudio Ugalde; Alain Coc; Ryan Fitzgerald

2010-04-23

208

Graduiertenschule Hybrid Monte Carlo

Graduiertenschule Hybrid Monte Carlo, SS 2005, Heermann, Universität Heidelberg, Seite 1. In conventional Monte-Carlo (MC) calculations of condensed matter systems, such as an N ... probability distribution, unlike Monte-Carlo calculations. The Hybrid Monte-Carlo (HMC) method combines ...

Heermann, Dieter W.

209

Odd-Particle Systems in the Shell Model Monte Carlo Method: Circumventing a Sign Problem

NASA Astrophysics Data System (ADS)

The shell model Monte Carlo method is a powerful technique to calculate thermal and ground-state properties of strongly correlated finite-size systems. However, its application to odd-particle-number systems has been hampered by the sign problem that originates from the projection on an odd number of particles. We circumvent this sign problem for the ground-state energy by extracting the ground-state energy of the odd-particle-number system from the asymptotic behavior of the imaginary-time single-particle Green’s function of the even-particle-number system. We apply this method to calculate pairing gaps of nuclei in the iron region. Our results are in good agreement with experimental pairing gaps.

Mukherjee, Abhishek; Alhassid, Y.

2012-07-01

210

A Monte-Carlo Method for Particle Acceleration at Multiple Shocks in Blazar Jets

NASA Astrophysics Data System (ADS)

We present a new Monte-Carlo method for particle acceleration and apply it to multiple shocks. These calculations are relevant to blazars, since they extend the single region "homogeneous models" to include multiple emission regions. Previous analytic and numerical work on multiple shocks has assumed them to be well separated in time or space. We lift this restriction by using a system of stochastic differential equations equivalent to the diffusion-convection equation for energetic particles (Kruells & Achterberg 1994) and implementing a new semi-implicit integration method. The results exhibit the flat spectrum implied by blazar radio emission together with a piling-up effect due to synchrotron losses. At even higher momenta, single shock acceleration takes over and the spectrum shows a power-law tail, which may be relevant to the hard X-ray emission from Blazars.
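The stochastic-differential-equation formulation referenced above can be illustrated with an explicit Euler–Maruyama integrator on a toy Ornstein–Uhlenbeck process. The paper implements a semi-implicit scheme and the full diffusion-convection dynamics, so this sketch only shows the general SDE machinery, with all parameters assumed:

```python
import math
import random

def euler_maruyama(x0, drift, diffusion, dt, steps, rng):
    """Explicit Euler-Maruyama integration of dx = a(x) dt + b(x) dW."""
    x = x0
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Wiener increment
        x += drift(x) * dt + diffusion(x) * dw
    return x

# toy Ornstein-Uhlenbeck problem: dx = -x dt + 0.1 dW (illustrative values)
rng = random.Random(42)
paths = [euler_maruyama(1.0, lambda x: -x, lambda x: 0.1, 0.05, 100, rng)
         for _ in range(2000)]
mean = sum(paths) / len(paths)
```

Averaging many such paths recovers the deterministic decay of the mean toward zero, which is the sense in which an SDE ensemble reproduces a Fokker-Planck-type transport equation.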

Marcowith, A.; Kirk, J.

211

Transient condensation of vapor using a direct simulation Monte Carlo method

Vapor is produced from the ICF event as the x-ray energy is deposited at the first wall of the reactor. This vapor must condense back onto the first wall in a timely fashion (<< 1 s) to establish the necessary conditions for beam propagation and the next ICF event. Transient condensation of vapor is studied on the basis of the Boltzmann equation using a direct simulation Monte Carlo method. The method describes the molecular behavior of flows in the continuum-to-free-molecular transition regime in a way consistent with the Boltzmann equation. The thermal resistance of the condensed film is included in the flow representation using a laminar Nusselt analysis to determine the interface temperature of the condensed film. The condensate mass flux in a quasi-steady state is computed and compared with a number of analytical models and experimental data. The results are consistent qualitatively with the experimental data of mercury condensation on a vertical plate.

El-Afify, M.M.; Corradini, M.L.

1989-03-01

212

Electronic structure of transition metal and f-electron oxides by quantum Monte Carlo methods

NASA Astrophysics Data System (ADS)

We report on many-body quantum Monte Carlo (QMC) calculations of electronic structure of systems with strong correlation effects. These methods have been applied to ambient and high pressure transition metal oxides and, very recently, to selected f-electron oxides such as mineral thorianite (ThO2). QMC methods enabled us to calculate equilibrium characteristics such as cohesion, equilibrium lattice constants, bulk moduli, and electronic gaps with an excellent agreement with experiment without any non-variational parameters. In addition, for selected cases, the equations of state were calculated as well. The calculations were carried out using the state-of-the-art twist-averaged sampling of the Brillouin zone, small-core Dirac-Fock pseudopotentials and one-particle orbitals from hybrid DFT functionals with varying weight of the exact exchange. This enabled us to build high-accuracy Slater-Jastrow explicitly correlated wavefunctions. In particular, we have employed optimization of the weight of the exact exchange in B3LYP and PBE0 functionals to minimize the fixed-node error in the diffusion Monte Carlo calculations. Instead of empirical fitting, we therefore use the variational and explicitly many-body QMC method to find the value of the optimal weight, which falls between 15 and 30%. This finding is further supported also by recent calculations of transition metal-organic systems such as transition metal-porphyrins and others, showing thus a very wide range of its applicability. The calculations for ThO2 appear to follow the same pattern and reproduce very well the experimental cohesion and the very large electronic gap. In addition, we have made important progress also in the explicit treatment of the spin-orbit interaction, which has so far been neglected in QMC calculations. Our studies illustrate the remarkable capabilities of QMC methods for strongly correlated solid systems.

Mitas, L.; Hu, S.; Kolorenc, J.

2012-12-01

213

Monte Carlo Method for a Quantum Measurement Process by a Single-Electron Transistor

We derive the quantum trajectory or stochastic (conditional) master equation for a single superconducting Cooper-pair box (SCB) charge qubit measured by a single-electron transistor (SET) detector. This stochastic master equation describes the random evolution of the measured SCB qubit density matrix which both conditions and is conditioned on a particular realization of the measured electron tunneling events through the SET junctions. Hence it can be regarded as a Monte Carlo method that allows us to simulate the continuous quantum measurement process. We show that the master equation for the "partially" reduced density matrix [Y. Makhlin et al., Phys. Rev. Lett. 85, 4578 (2000)] can be obtained when a "partial" average is taken on the stochastic master equation over the fine-grained measurement records of the tunneling events in the SET. Finally, we present some Monte Carlo simulation results for the SCB/SET measurement process. We also analyze the probability distribution P(m,t) of finding m electrons that have tunneled into the drain of the SET in time t to demonstrate the connection between the quantum trajectory approach and the "partially" reduced density matrix approach.

Hsi-Sheng Goan

2004-06-15

214

Monte Carlo method to study the proton fluence for treatment planning.

The proton beam at the Hahn Meitner Institute (HMI) in Berlin will be used for proton therapy of eye melanoma in the near future. As part of the pre-therapeutic studies, Monte Carlo calculations have been performed to investigate the primary fluence distribution of the proton beam including the influence of scattering foils, range shifters, modulator wheels, and collimators. Any material in the beam path will modify the therapeutic beam because of energy loss, multiple scattering, range straggling, and nuclear reactions. The primary fluence information is a pre-requisite for most pencil-beam treatment planning algorithms. The measured beam penumbra has been used as one of the parameters to characterize a proton beam for further calculations in a treatment planning algorithm. However, this phenomenological quantity represents only indirect information about the properties of the proton beam. In this work, an alternative parameterization of the beam exiting the vacuum window of the accelerator, as well as the beam right in front of the patient collimator, is introduced. A beam is fully characterized if one knows (for instance from Monte Carlo simulations) the particle distribution in energy, position, and angle, i.e., the phase space distribution. Therefore, parameters derived from this distribution can provide an alternative input in treatment planning algorithms. In addition, the method of calculation is introduced as a tool to investigate the influence of modifications in the beam delivery system on the behavior of the therapeutic proton beam. PMID:9874829

Paganetti, H

1998-12-01

215

The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of the ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic Uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation by deterministic (ERANOS French code) and Monte Carlo (MCNPX US code) calculations of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the two deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)

Bianchini, G.; Burgio, N.; Carta, M. [ENEA C.R. CASACCIA, via Anguillarese, 301, 00123 S. Maria di Galeria Roma (Italy); Peluso, V. [ENEA C.R. BOLOGNA, Via Martiri di Monte Sole, 4, 40129 Bologna (Italy); Fabrizio, V.; Ricci, L. [Univ. of Rome La Sapienza, C/o ENEA C.R. CASACCIA, via Anguillarese, 301, 00123 S. Maria di Galeria Roma (Italy)

2012-07-01

216

Coupled proton/neutron transport calculations using the S sub N and Monte Carlo methods

Coupled charged/neutral particle transport calculations are most often carried out using the Monte Carlo technique. For example, the ITS, EGS, and MCNP (Version 4) codes are used extensively for electron/photon transport calculations, while HETC models the transport of protons, neutrons and heavy ions. In recent years there has been considerable progress in deterministic models of electron transport, and many of these models are applicable to protons. However, even with these new models (and the well-established models for neutron transport), deterministic coupled neutron/proton transport calculations have not been feasible for most problems of interest, due to a lack of coupled multigroup neutron/proton cross-section sets. Such cross-section sets are now being developed at Los Alamos. Using these cross sections we have carried out coupled proton/neutron transport calculations using both the S{sub N} and Monte Carlo methods. The S{sub N} calculations used a code called SMARTEPANTS (Simulating Many Accumulative Rutherford Trajectories: Electron, Proton And Neutral Transport Solver), while the Monte Carlo calculations are done with the multigroup option of the MCNP code. Both SMARTEPANTS and MCNP require standard multigroup cross-section libraries. HETC, on the other hand, avoids the need for precalculated nuclear cross sections by modeling individual nucleon collisions as the neutrons and protons being transported interact with nuclei. 21 refs., 1 fig.

Filippone, W.L. (Arizona Univ., Tucson, AZ (USA). Dept. of Nuclear and Energy Engineering); Little, R.C.; Morel, J.E.; MacFarlane, R.E.; Young, P.G. (Los Alamos National Lab., NM (USA))

1991-01-01

217

Quantum Monte Carlo method for the ground state of many-boson systems

We formulate a quantum Monte Carlo (QMC) method for calculating the ground state of many-boson systems. The method is based on a field-theoretical approach, and is closely related to existing fermion auxiliary-field QMC methods which are applied in several fields of physics. The ground-state projection is implemented as a branching random walk in the space of permanents consisting of identical single-particle orbitals. Any single-particle basis can be used, and the method is in principle exact. We illustrate this method with a trapped atomic boson gas, where the atoms interact via an attractive or repulsive contact two-body potential. We choose as the single-particle basis a real-space grid. We compare with exact results in small systems and arbitrarily sized systems of untrapped bosons with attractive interactions in one dimension, where analytical solutions exist. We also compare with the corresponding Gross-Pitaevskii (GP) mean-field calculations for trapped atoms, and discuss the close formal relation between our method and the GP approach. Our method provides a way to systematically improve upon GP while using the same framework, capturing interaction and correlation effects with a stochastic, coherent ensemble of noninteracting solutions. We discuss various algorithmic issues, including importance sampling and the back-propagation technique for computing observables, and illustrate them with numerical studies. We show results for systems with up to N ≈ 400 bosons.

Purwanto, Wirawan; Zhang Shiwei [Department of Physics, The College of William and Mary, Williamsburg, Virginia 23187 (United States)

2004-11-01

218

Determination of phase equilibria in confined systems by open pore cell Monte Carlo method.

We present a modification of the molecular dynamics simulation method with a unit pore cell with imaginary gas phase [M. Miyahara, T. Yoshioka, and M. Okazaki, J. Chem. Phys. 106, 8124 (1997)] designed for determination of phase equilibria in nanopores. This new method is based on a Monte Carlo technique and it combines the pore cell, opened to the imaginary gas phase (open pore cell), with a gas cell to measure the equilibrium chemical potential of the confined system. The most striking feature of our new method is that the confined system is steadily led to a thermodynamically stable state by forming concave menisci in the open pore cell. This feature of the open pore cell makes it possible to obtain the equilibrium chemical potential with only a single simulation run, unlike existing simulation methods, which need a number of additional runs. We apply the method to evaluate the equilibrium chemical potentials of confined nitrogen in carbon slit pores and silica cylindrical pores at 77 K, and show that the results are in good agreement with those obtained by two conventional thermodynamic integration methods. Moreover, we also show that the proposed method can be particularly useful for determining vapor-liquid and vapor-solid coexistence curves and the triple point of the confined system. PMID:23464174

Miyahara, Minoru T; Tanaka, Hideki

2013-02-28

219

A modular method to handle multiple time-dependent quantities in Monte Carlo simulations

NASA Astrophysics Data System (ADS)

A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call ‘Time Features’. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc, takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), and then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel accompanied by beam current modulation produces a spread-out Bragg peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method.

Shin, J.; Perl, J.; Schümann, J.; Paganetti, H.; Faddegon, B. A.

2012-06-01

220

Hybrid Monte Carlo method for simulation of two-component aerosol coagulation and phase segregation.

The paper presents the development of a hybrid Monte Carlo (MC) method for the simulation of the simultaneous coagulation and phase segregation of an immiscible two-component binary aerosol. The model is intended to qualitatively describe our prior studies of the synthesis of mixed metal oxides, for which phase-segregated domains have been observed in molten nanodroplets. In our previous works (J. Aerosol Sci. 32, 1479 (2001); Chem. Eng. Sci. 56, 5763 (2001); submitted for publication) we developed sectional and monodisperse models in which the internal state of the aerosol particles was described. These methods have certain limitations, and it is difficult to include additional physical effects in their framework. Our new approach combines the constant-volume and constant-number Monte Carlo methods. As in our previous models, we assume that the phase segregation is kinetically controlled. The MC approach allows us to compute the mean number of enclosures (minor phase) per droplet, the average enclosure volume, and the width of the enclosure size distribution. The results show that an asymptotic behavior of the enclosure distribution exists that is independent of initial conditions and very close to the continuum self-preserving distribution. Temperature is a key parameter because it allows for a significant change in the internal transport rate within each droplet. In particular, increasing the temperature significantly enhances the Brownian coagulation rate and lowers the number of enclosures per droplet. As a result, the MC results indicate that the growth of the minor phase can be moderated quite dramatically by small changes in system temperature. These results serve to illustrate the utility of this synthesis approach to the controlled growth of nanoparticles through the use of a majority matrix to slow down the encounter frequency of the minor phase and therefore limit its particle size. PMID:16290566
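The constant-number ingredient of such a hybrid scheme can be sketched as follows. This is a generic constant-number coagulation MC with a constant kernel, our own simplification for illustration, not the paper's two-component model:

```python
import random

# Constant-number MC coagulation: merge a random pair, then restore the
# particle count by duplicating a randomly chosen survivor, which rescales
# the represented physical volume instead of shrinking the ensemble.
rng = random.Random(0)
N = 1000
volumes = [1.0] * N            # monodisperse initial droplets (arbitrary units)

def coagulation_step(volumes):
    i, j = rng.sample(range(len(volumes)), 2)   # random coalescing pair
    volumes[i] = volumes[i] + volumes[j]        # merge j into i
    # restore N: overwrite slot j with a copy of a random particle
    volumes[j] = volumes[rng.randrange(len(volumes))]

for _ in range(2000):
    coagulation_step(volumes)

mean_volume = sum(volumes) / len(volumes)
```

The ensemble size stays fixed at N throughout, which keeps the statistical resolution constant as the droplets grow.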

Efendiev, Y; Zachariah, M R

2002-05-01

221

A modular method to handle multiple time-dependent quantities in Monte Carlo simulations.

A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call 'Time Features'. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc, takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), and then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel accompanied by beam current modulation produces a spread-out Bragg peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method. PMID:22572201

Shin, J; Perl, J; Schümann, J; Paganetti, H; Faddegon, B A

2012-06-01

222

This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the gold standard for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. 
The hybrid method development is based on an extension of the FW-CADIS method, which attempts to achieve uniform statistical uncertainty throughout a designated problem space. The MC DD development is being implemented in conjunction with the Denovo deterministic radiation transport package to have direct access to the 3-D, massively parallel discrete-ordinates solver (to support the hybrid method) and the associated parallel routines and structure. This paper describes the hybrid method, its implementation, and initial testing results for a realistic 2-D quarter core pressurized-water reactor model and also describes the MC DD algorithm and its implementation.

Wagner, John C [ORNL]; Mosher, Scott W [ORNL]; Evans, Thomas M [ORNL]; Peplow, Douglas E. [ORNL]; Turner, John A [ORNL]

2011-01-01

223

This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform "real" commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the "gold standard" for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. 
The hybrid method development is based on an extension of the FW-CADIS method, which attempts to achieve uniform statistical uncertainty throughout a designated problem space. The MC DD development is being implemented in conjunction with the Denovo deterministic radiation transport package to have direct access to the 3-D, massively parallel discrete-ordinates solver (to support the hybrid method) and the associated parallel routines and structure. This paper describes the hybrid method, its implementation, and initial testing results for a realistic 2-D quarter core pressurized-water reactor model and also describes the MC DD algorithm and its implementation.

Wagner, John C [ORNL]; Mosher, Scott W [ORNL]; Evans, Thomas M [ORNL]; Peplow, Douglas E. [ORNL]; Turner, John A [ORNL]

2010-01-01

224

Monte Carlo (MC) simulations are frequently used to simulate the radial distribution of remitted fluorescence light from tissue surfaces upon pencil-beam excitation, to gather information about the influence of different tissue parameters. Here, the "weighted direct emission method" (WDEM) is proposed, which uses a weighted MC simulation approach for both excitation and fluorescence photons; it is compared with four other methods in terms of accuracy and speed over a broad range of tissue-relevant optical parameters. The WDEM is on average 5.2× faster than a fixed-weight MC approach while still preserving its accuracy. Additional speed gains can be achieved by implementing it on graphics processing units. PMID:23400069
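The photon-weighting idea underlying such methods can be illustrated with a minimal 1-D slab sketch. This is our own generic "implicit capture" example with Russian roulette, assuming a 1-D geometry and made-up optical coefficients; it is not the paper's WDEM implementation:

```python
import math
import random

rng = random.Random(42)
MU_A, MU_S = 0.1, 10.0       # absorption / scattering coefficients (1/mm), assumed
MU_T = MU_A + MU_S
SLAB = 1.0                   # slab thickness (mm)

def run_photon():
    z, direction, weight = 0.0, 1.0, 1.0
    while True:
        z += direction * (-math.log(rng.random()) / MU_T)  # sample free path
        if z < 0.0 or z > SLAB:
            return weight if z > SLAB else 0.0             # score transmitted weight
        weight *= MU_S / MU_T        # implicit capture: absorb only a weight fraction
        if weight < 1e-3:            # Russian roulette keeps the estimator unbiased
            if rng.random() < 0.1:
                weight /= 0.1
            else:
                return 0.0
        direction = rng.choice((-1.0, 1.0))                # isotropic (1-D) scatter

transmission = sum(run_photon() for _ in range(5000)) / 5000
```

Compared with analog MC, no photon is ever killed by absorption, so every launched history can contribute to the detector estimate.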

Hennig, Georg; Stepp, Herbert; Sroka, Ronald; Beyer, Wolfgang

2013-02-10

225

We present the conceptual and formal simplifications of the recently developed corrected effective medium (CEM) theory that enable this theory to be used directly in molecular dynamics (MD) and Monte Carlo (MC) simulations of large systems, hence the acronym MD/MC-CEM. The essential idea involves adjustment of the CEM embedding functions to include approximately the original explicit correction for kinetic-exchange-correlation energy differences between the real system and the many atom-jellium systems used as the zeroth order model. Examples of this construction are provided for the Ni, Pd, Ar, and H/Pd(111) systems. Finally, a few brief applications of this method to large systems are provided. These include relaxation of metal surfaces, structure of pure Ni and mixed NiCu clusters, sticking of Cu on Cu(100), and the scattering of Ar from H covered Pd(111).

Stave, M.S.; Sanders, D.E.; Raeker, T.J.; DePristo, A.E. (Ames Laboratory, USDOE, Ames, IA (USA); Department of Chemistry, Iowa State University, Ames, IA (USA))

1990-09-15

226

A Monte Carlo Method for Modeling Thermal Damping: Beyond the Brownian-Motion Master Equation

The "standard" Brownian motion master equation, used to describe thermal damping, is not completely positive and does not admit a Monte Carlo method, which is important in numerical simulations. To eliminate both these problems one must add a term that generates additional position diffusion. Here we show that one can obtain a completely positive master equation for simple quantum Brownian motion, efficiently solvable, without any extra diffusion. This is achieved by using a stochastic Schrödinger equation (SSE), closely analogous to Langevin's equation, that has no equivalent Markovian master equation. Considering a specific example, we show that this SSE is sensitive to nonlinearities in situations in which the master equation is not, and may therefore be a better model of damping for nonlinear systems.

Kurt Jacobs

2008-07-26

227

Calculation of channel flows of gases by the Monte-Carlo method with correction for collisions

NASA Astrophysics Data System (ADS)

Steady flow of a moderately rarefied gas in a circularly cylindrical channel was considered for the case of the flow being produced by the interaction of two flows colliding head-on. An iterative procedure based on the Monte Carlo method was used to solve the five-dimensional equations describing molecules with an arbitrary collisional cross section dependent on the relative velocity of a colliding pair. A Maxwellian initial distribution of the molecules was assumed, and various collision velocities were examined. A total of 4000-5000 trial trajectories were run to attain a 95% level of accuracy. The countercurrent flows were found to diffuse independently of one another. Transmissivity (Clausing) coefficients were calculated in terms of the Knudsen and Mach numbers, the channel length, the diffusion coefficient, and the density and temperature distributions along the channel. The quantitative results are considered significant for calibrating pitot tube measurements in a transition zone.
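In the free-molecular limit the Clausing (transmission) coefficient of a circular tube can be estimated with a simple test-particle MC. The sketch below is our own single-flow, collisionless simplification of the channel problem (unit tube radius, diffuse wall re-emission), not the paper's iterative collision-corrected procedure:

```python
import math
import random

rng = random.Random(7)

def cosine_direction():
    # Lambert (cosine) distribution about a surface normal: returns two
    # tangential components and the normal component.
    phi = 2 * math.pi * rng.random()
    sin_t = math.sqrt(rng.random())
    cos_t = math.sqrt(1.0 - sin_t * sin_t)
    return sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t

def transmitted(L):
    # launch from a uniformly random point on the entrance disc (radius 1)
    r = math.sqrt(rng.random()); phi = 2 * math.pi * rng.random()
    x, y, z = r * math.cos(phi), r * math.sin(phi), 0.0
    dx, dy, dz = cosine_direction()              # normal component along +z
    while True:
        a = dx * dx + dy * dy
        if a < 1e-12:                            # flying parallel to the axis
            return dz > 0.0
        b = 2.0 * (x * dx + y * dy)
        c = x * x + y * y - 1.0
        t = (-b + math.sqrt(max(b * b - 4 * a * c, 0.0))) / (2 * a)
        z += t * dz
        if z <= 0.0: return False                # back out the entrance
        if z >= L:  return True                  # out the exit
        x, y = x + t * dx, y + t * dy            # diffuse re-emission at the wall
        nx, ny = -x, -y                          # inward normal (unit radius)
        u, v, w = cosine_direction()
        dx = u * -ny + w * nx                    # tangent (-ny, nx, 0) and normal
        dy = u * nx + w * ny
        dz = v                                   # axial tangent (0, 0, 1)

N = 20000
W = sum(transmitted(1.0) for _ in range(N)) / N  # L/R = 1: Clausing W ~ 0.672
```

Each particle either escapes through one end or undergoes a cosine-law diffuse bounce at the wall; the transmitted fraction estimates the Clausing coefficient.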

Bazarnova, N. M.

1981-04-01

228

Monte Carlo study of living polymers with the bond-fluctuation method

NASA Astrophysics Data System (ADS)

The highly efficient bond-fluctuation method for Monte Carlo simulations of both static and dynamic properties of polymers is applied to a system of living polymers. Parallel to stochastic movements of monomers, which result in Rouse dynamics of the macromolecules, the polymer chains break, or associate at chain ends with other chains and single monomers, in the process of equilibrium polymerization. We study the changes in equilibrium properties, such as the molecular-weight distribution, average chain length, radius of gyration, and specific heat, with varying density and temperature of the system. The results of our numerical experiments indicate very good agreement with the recently suggested description in terms of the mean-field approximation. The coincidence of the specific-heat maximum position at kBT=V/4 in both theory and simulation suggests the use of calorimetric measurements for the determination of the scission-recombination energy V in real experiments.

Rouault, Yannick; Milchev, Andrey

1995-06-01

229

Importance Sampling and Adjoint Hybrid Methods in Monte Carlo Transport with Reflecting Boundaries

Adjoint methods form a class of importance sampling methods that are used to accelerate Monte Carlo (MC) simulations of transport equations. Ideally, adjoint methods allow for zero-variance MC estimators provided that the solution to an adjoint transport equation is known. Hybrid methods aim at (i) approximately solving the adjoint transport equation with a deterministic method and (ii) using the solution to construct an unbiased MC sampling algorithm with low variance. The problem with this approach is that both steps can be prohibitively expensive. In this paper, we simplify steps (i) and (ii) by calculating only parts of the adjoint solution. More specifically, in a geometry with limited volume scattering and complicated reflection at the boundary, we consider the situation where the adjoint solution "neglects" volume scattering, thereby significantly reducing the degrees of freedom in steps (i) and (ii). A main application for such a geometry is in remote sensing of the environment using physics-based signal models. Volume scattering is then incorporated using an analog sampling algorithm (or, more precisely, a simple modification of analog sampling called a heuristic sampling algorithm) in order to obtain unbiased estimators. In geometries with weak volume scattering (with a domain of interest of size comparable to the transport mean free path), we numerically demonstrate significant variance reductions and speed-ups (figures of merit).

Guillaume Bal; Ian Langmore

2011-04-13

230

A First-Passage Kinetic Monte Carlo method for reaction-drift-diffusion processes

NASA Astrophysics Data System (ADS)

Stochastic reaction-diffusion models are now a popular tool for studying physical systems in which both the explicit diffusion of molecules and noise in the chemical reaction process play important roles. The Smoluchowski diffusion-limited reaction model (SDLR) is one of several that have been used to study biological systems. Exact realizations of the underlying stochastic processes described by the SDLR model can be generated by the recently proposed First-Passage Kinetic Monte Carlo (FPKMC) method. This exactness relies on sampling analytical solutions to one and two-body diffusion equations in simplified protective domains. In this work we extend the FPKMC to allow for drift arising from fixed, background potentials. As the corresponding Fokker-Planck equations that describe the motion of each molecule can no longer be solved analytically, we develop a hybrid method that discretizes the protective domains. The discretization is chosen so that the drift-diffusion of each molecule within its protective domain is approximated by a continuous-time random walk on a lattice. New lattices are defined dynamically as the protective domains are updated, hence we will refer to our method as Dynamic Lattice FPKMC or DL-FPKMC. We focus primarily on the one-dimensional case in this manuscript, and demonstrate the numerical convergence and accuracy of our method in this case for both smooth and discontinuous potentials. We also present applications of our method, which illustrate the impact of drift on reaction kinetics.
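The lattice discretization step of DL-FPKMC, where drift-diffusion inside a protective domain is approximated by a continuous-time random walk, can be illustrated with a minimal 1-D sketch. The parameters and the rate choice below are our own assumptions for illustration, not the authors' implementation:

```python
import random

rng = random.Random(3)
D, V, H = 1.0, 1.0, 0.05          # diffusion coeff., drift speed, lattice spacing (assumed)

# Hopping rates chosen so the walk's mean and variance match drift-diffusion:
# rate difference gives drift V, rate sum gives diffusion 2D/H^2.
RATE_RIGHT = D / H**2 + V / (2 * H)
RATE_LEFT  = D / H**2 - V / (2 * H)      # requires H < 2D/|V| to stay positive

def walk(t_end):
    x, t = 0.0, 0.0
    total = RATE_RIGHT + RATE_LEFT
    while True:
        t += rng.expovariate(total)       # exponential waiting time between hops
        if t > t_end:
            return x
        x += H if rng.random() < RATE_RIGHT / total else -H

# Over many trajectories the mean displacement approaches V * t.
mean_x = sum(walk(1.0) for _ in range(2000)) / 2000
```

In DL-FPKMC a fresh lattice of this kind is built inside each protective domain whenever the domain is updated.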

Mauro, Ava J.; Sigurdsson, Jon Karl; Shrake, Justin; Atzberger, Paul J.; Isaacson, Samuel A.

2014-02-01

231

NASA Astrophysics Data System (ADS)

Errors can cause serious loss of performance in a precision machine system. In this paper, we propose a method for allocating the alignment tolerances of components and apply it to confocal scanning microscopy (CSM) to obtain the optimal tolerances. CSM uses a confocal aperture, which blocks out-of-focus information. It therefore provides images with superior resolution and has the unique property of optical sectioning. Recently, owing to these properties, it has been widely used for measurement in the biological field, medical science, materials science, and the semiconductor industry. In general, tight tolerances are required to maintain the performance of a system, but a high cost of manufacturing and assembly is required to preserve them. The purpose of allocating optimal tolerances is to minimize cost while maintaining system performance. In the optimization problem, we set the performance requirements as constraints and maximized the tolerances. The Monte Carlo method, a statistical simulation method, is used in the tolerance analysis. Alignment tolerances of the optical components of the confocal scanning microscope are optimized to minimize cost and maintain the observation performance of the microscope. The method can also be applied to other precision machine systems.
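The Monte Carlo tolerance-analysis step can be sketched as a stack-up simulation: sample each alignment error within its tolerance band and count how often the performance constraint is met. The component names, sensitivities, and linear performance model below are hypothetical, not the paper's CSM model:

```python
import random

rng = random.Random(1)
# hypothetical tolerance bands (half-widths) and performance sensitivities
TOLERANCES  = {"lens_decenter_um": 5.0, "pinhole_offset_um": 2.0, "tilt_mrad": 0.5}
SENSITIVITY = {"lens_decenter_um": 0.02, "pinhole_offset_um": 0.05, "tilt_mrad": 0.10}
LIMIT = 0.15                                  # allowed performance degradation (assumed)

def yield_fraction(n=20000):
    ok = 0
    for _ in range(n):
        # sample each alignment error uniformly within its tolerance band
        degradation = sum(SENSITIVITY[k] * abs(rng.uniform(-tol, tol))
                          for k, tol in TOLERANCES.items())
        ok += degradation <= LIMIT
    return ok / n

yield_est = yield_fraction()
```

An outer optimization loop would then widen the tolerance bands (lowering cost) as far as the yield constraint permits.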

Yoo, Hongki; Kang, Dong-Kyun; Lee, SeungWoo; Lee, Junhee; Gweon, Dae-Gab

2004-07-01

232

Uniform-acceptance force-bias Monte Carlo method with time scale to study solid-state diffusion

NASA Astrophysics Data System (ADS)

Monte Carlo (MC) methods have a long-standing history as partners of molecular dynamics (MD) to simulate the evolution of materials at the atomic scale. Among these techniques, the uniform-acceptance force-bias Monte Carlo (UFMC) method [G. Dereli, Mol. Simul. 8, 351 (1992)] has recently attracted attention [M. Timonova et al., Phys. Rev. B 81, 144107 (2010)] thanks to its apparent capacity to simulate physical processes in a reduced number of iterations compared to classical MD methods. The origin of this efficiency remains, however, unclear. In this work we derive a UFMC method starting from basic thermodynamic principles, which leads to an intuitive and unambiguous formalism. The approach includes a statistically relevant time step per Monte Carlo iteration, showing a significant speed-up compared to MD simulations. This time-stamped force-bias Monte Carlo (tfMC) formalism is tested on both simple one-dimensional and three-dimensional systems. Both test cases give excellent results in agreement with analytical solutions and literature reports. The inclusion of a time scale, the simplicity of the method, and the enhancement of the time step compared to classical MD methods make this method very appealing for studying the dynamics of many-particle systems.

Mees, Maarten J.; Pourtois, Geoffrey; Neyts, Erik C.; Thijsse, Barend J.; Stesmans, André

2012-04-01

233

To evaluate the bootstrap current in nonaxisymmetric toroidal plasmas quantitatively, a δf Monte Carlo method is incorporated into the moment approach. From the drift-kinetic equation with the pitch-angle scattering collision operator, the bootstrap current and neoclassical conductivity coefficients are calculated. The neoclassical viscosity is evaluated from these two monoenergetic transport coefficients. Numerical results obtained by the δf Monte Carlo method for a model heliotron are in reasonable agreement with asymptotic formulae and with the results obtained by the variational principle.

Matsuyama, A. [Graduate School of Energy Science, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan)]; Isaev, M. Yu. [Nuclear Fusion Institute, RRC Kurchatov Institute, 123182 Moscow (Russian Federation)]; Watanabe, K. Y.; Suzuki, Y.; Nakajima, N. [National Institute for Fusion Science, Toki, Gifu 509-5292 (Japan)]; Hanatani, K. [Institute of Advanced Energy, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan)]; Cooper, W. A.; Tran, T. M. [Centre de Recherches en Physique des Plasmas, Association Euratom-Suisse, Ecole Polytechnique Federale de Lausanne, CH1015 Lausanne (Switzerland)]

2009-05-15

234

Bayesian analysis of LISA data sets based on Markov chain Monte Carlo methods has been shown to be a challenging problem, in part due to the complicated structure of the likelihood function, which consists of several isolated local maxima that dramatically reduce the efficiency of sampling techniques. Here we introduce a new fully Markovian algorithm, a Delayed Rejection Metropolis-Hastings Markov chain Monte Carlo method, to efficiently explore these kinds of structures, and we demonstrate its performance on selected LISA data sets containing a known number of stellar-mass binary signals embedded in Gaussian stationary noise.
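The delayed-rejection idea can be sketched on a toy target: after a first-stage rejection, a second, bolder proposal is tried with an acceptance probability corrected so that detailed balance still holds (Tierney-Mira form). The bimodal density below stands in for the multimodal LISA likelihood; everything here is our own illustration, not the authors' algorithm:

```python
import math
import random

rng = random.Random(0)

def log_target(x):
    # two well-separated modes, mimicking isolated likelihood maxima
    return math.log(math.exp(-0.5 * (x - 4.0) ** 2) + math.exp(-0.5 * (x + 4.0) ** 2))

def q_logpdf(center, x, sigma):
    # log-density of the stage-1 Gaussian proposal (constants cancel in ratios)
    return -0.5 * ((x - center) / sigma) ** 2 - math.log(sigma)

def dr_step(x, sigma1=1.0, sigma2=8.0):
    y1 = rng.gauss(x, sigma1)                       # stage 1: local move
    a1 = min(1.0, math.exp(log_target(y1) - log_target(x)))
    if rng.random() < a1:
        return y1
    y2 = rng.gauss(x, sigma2)                       # stage 2: bold jump
    a1_rev = min(1.0, math.exp(log_target(y1) - log_target(y2)))
    # delayed-rejection acceptance ratio (Tierney & Mira)
    log_a2 = (log_target(y2) - log_target(x)
              + q_logpdf(y2, y1, sigma1) - q_logpdf(x, y1, sigma1)
              + math.log(max(1.0 - a1_rev, 1e-300))
              - math.log(max(1.0 - a1, 1e-300)))
    if rng.random() < math.exp(min(0.0, log_a2)):
        return y2
    return x

x, samples = 0.0, []
for _ in range(30000):
    x = dr_step(x)
    samples.append(x)
```

The narrow first-stage proposal explores each mode efficiently, while the wide second stage, tried only after a rejection, lets the chain hop between the isolated maxima.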

Miquel Trias; Alberto Vecchio; John Veitch

2009-05-18

235

ATR WG-MOX Fuel Pellet Burnup Measurement by Monte Carlo - Mass Spectrometric Method

This paper presents a new method for calculating the burnup of nuclear reactor fuel, the MCWO-MS method, and describes its application to an experiment currently in progress to assess the suitability for use in light-water reactors of Mixed-OXide (MOX) fuel that contains plutonium derived from excess nuclear-weapons material. To demonstrate that the available experience base with Reactor-Grade Mixed uranium-plutonium OXide (RG-MOX) can be applied to Weapons-Grade (WG) MOX in light water reactors, and to support potential licensing of MOX fuel made from weapons-grade plutonium and depleted uranium for use in United States reactors, an experiment containing WG-MOX fuel is being irradiated in the Advanced Test Reactor (ATR) at the Idaho National Engineering and Environmental Laboratory. Fuel burnup is an important parameter needed for fuel performance evaluation. For the irradiated MOX fuel's Post-Irradiation Examination, the 148Nd method is used to measure the burnup. The fission product 148Nd is an ideal burnup indicator when appropriate correction factors are applied. In the ATR test environment, the spectrum-dependent and burnup-dependent correction factors (see Section 5 for detailed discussion) can be substantial at high fuel burnup. The validated Monte Carlo depletion tool (MCWO) used in this study can provide a burnup-dependent correction factor for reactor parameters such as capture-to-fission ratios, isotopic concentrations and compositions, fission power, and spectrum in a straightforward fashion. Furthermore, the correlation curve generated by MCWO can be coupled with the 239Pu/Pu ratio measured by a mass spectrometer (in the new MCWO-MS method) to obtain a best-estimate MOX fuel burnup. The Monte Carlo MCWO method can eliminate the generation of few-group cross sections. The MCWO depletion tool can analyze the detailed spatial and spectral self-shielding effects in UO2, WG-MOX, and reactor-grade mixed oxide (RG-MOX) fuel pins. 
The MCWO-MS tool needs only the MS-measured 239Pu/Pu ratio, without the measured 148Nd isotope concentration data, to determine the burnup accurately. MCWO-MS not only provided the linear heat generation rate, Pu isotopic composition versus burnup, and burnup distributions within the WG-MOX fuel capsules, but also correctly identified the inconsistency underlying the large difference in burnups obtained by the 148Nd method.

Chang, Gray Sen I

2002-10-01

236

Uncertainty Quantification of Prompt Fission Neutron Spectra Using the Unified Monte Carlo Method

NASA Astrophysics Data System (ADS)

In the ENDF/B-VII.1 nuclear data library, the existing covariance evaluations of the prompt fission neutron spectra (PFNS) were computed by combining the available experimental differential data with theoretical model calculations, relying on the use of a first-order linear Bayesian approach, the Kalman filter. This approach assumes that the theoretical model response to changes in input model parameters is linear about the a priori central values. While the Unified Monte Carlo (UMC) method remains a Bayesian approach, like the Kalman filter, this method does not make any assumption about the linearity of the model response or the shape of the a posteriori distribution of the parameters. By sampling from a distribution centered about the a priori model parameters, the UMC method computes the moments of the a posteriori parameter distribution. As the number of samples increases, the statistical noise in the computed a posteriori moments decreases, and an appropriately converged solution corresponding to the true mean of the a posteriori PDF results. The UMC method has been successfully implemented using both a uniform and a Gaussian sampling distribution and has been used for the evaluation of the PFNS and its associated uncertainties. While many of the UMC results are similar to the first-order Kalman filter results, significant differences are shown when experimental data are excluded from the evaluation process. When experimental data are included, a few small nonlinearities are present in the high outgoing energy tail of the PFNS.
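The sampling-based Bayesian update at the heart of UMC can be sketched on a toy problem: draw parameters from the prior, weight each draw by the likelihood, and form posterior moments with no linearity assumption. The 1-D conjugate-Gaussian example below (prior N(0, 1), one measurement y = 1 with unit sigma, exact posterior mean 0.5 and variance 0.5) is our own illustration, not a PFNS evaluation:

```python
import math
import random

rng = random.Random(11)
Y, SIGMA = 1.0, 1.0                              # toy measurement and its uncertainty

def posterior_moments(n=50000):
    wsum = wx = wxx = 0.0
    for _ in range(n):
        p = rng.gauss(0.0, 1.0)                  # sample from the prior
        w = math.exp(-0.5 * ((Y - p) / SIGMA) ** 2)   # likelihood weight
        wsum += w; wx += w * p; wxx += w * p * p
    mean = wx / wsum                             # self-normalized posterior mean
    return mean, wxx / wsum - mean * mean        # ...and posterior variance

mean, var = posterior_moments()
```

Because the moments are formed directly from weighted samples, the same code works unchanged for a nonlinear model response, which is exactly the regime where a first-order Kalman filter can fail.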

Rising, M. E.; Talou, P.; Prinja, A. K.

2014-04-01

237

Low-Density Nozzle Flow by the Direct Simulation Monte Carlo and Continuum Methods

NASA Technical Reports Server (NTRS)

Two different approaches, the direct simulation Monte Carlo (DSMC) method, based on molecular gasdynamics, and a finite-volume approximation of the Navier-Stokes equations, based on continuum gasdynamics, are employed in the analysis of a low-density gas flow in a small converging-diverging nozzle. The fluid experiences various kinds of flow regimes including continuum, slip, transition, and free-molecular. Results from the two numerical methods are compared with Rothe's experimental data, in which density and rotational temperature variations along the centerline and at various locations inside a low-density nozzle were measured by the electron-beam fluorescence technique. The continuum approach showed good agreement with the experimental data as far as density is concerned. The results from the DSMC method showed good agreement with the experimental data in both the density and the rotational temperature. It is also shown that the simulation parameters, such as the gas/surface interaction model, the energy exchange model between rotational and translational modes, and the viscosity-temperature exponent, have substantial effects on the results of the DSMC method.

Chung, Chang-Hong; Kim, Sku C.; Stubbs, Robert M.; Dewitt, Kenneth J.

1994-01-01

238

An energy transfer method for 4D Monte Carlo dose calculation

This article presents a new method for four-dimensional Monte Carlo dose calculations which properly addresses dose mapping for deforming anatomy. The method, called the energy transfer method (ETM), separates the particle transport and particle scoring geometries: particle transport takes place in the typical rectilinear coordinate system of the source image, while energy deposition scoring takes place in a desired reference image via deformable image registration. Dose is the energy deposited per unit mass in the reference image. ETM has been implemented into DOSXYZnrc and compared with a conventional dose interpolation method (DIM) on deformable phantoms. For voxels whose contents merge in the deforming phantom, the doses calculated by ETM are exactly the same as an analytical solution, in contrast to the DIM, which has an average 1.1% dose discrepancy in the beam direction with a maximum error of 24.9% found in the penumbra of a 6 MV beam. The observed DIM error persists even if voxel subdivision is used. The ETM is computationally efficient and will be useful for 4D dose addition and for benchmarking alternative 4D dose addition algorithms. PMID:18841862
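The core bookkeeping of the energy transfer idea can be shown in a toy 1-D sketch: each deposition event is mapped through a deformation to a reference voxel, energy (not dose) is accumulated there, and dose is obtained by dividing by the reference voxel mass. The deformation, grid, and event list below are invented for illustration, not the DOSXYZnrc implementation:

```python
# Toy 1-D reference grid: 10 voxels of width 1 cm, unit mass each (assumed).
N_VOX, WIDTH = 10, 1.0
REF_MASS = [1.0] * N_VOX                     # mass per reference voxel (g)

def deform_to_reference(x):
    # hypothetical smooth deformation: source coordinates stretched by 10%
    return x / 1.1

def etm_dose(events):
    energy = [0.0] * N_VOX
    for x_source, e_dep in events:
        v = int(deform_to_reference(x_source) / WIDTH)
        if 0 <= v < N_VOX:
            energy[v] += e_dep               # transfer energy, never interpolate dose
    return [e / m for e, m in zip(energy, REF_MASS)]

# (source position in cm, deposited energy in MeV) -- invented events
events = [(2.2, 1.0), (3.19, 2.0), (6.05, 4.0)]
dose = etm_dose(events)
```

Because energy is summed before dividing by mass, depositions from source voxels that merge under the deformation are combined exactly, which is where dose interpolation goes wrong.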

Siebers, Jeffrey V.; Zhong, Hualiang

2008-01-01

239

Fast perturbation Monte Carlo method for photon migration in heterogeneous turbid media

We present a two-step Monte Carlo (MC) method that is used to solve the radiative transfer equation in heterogeneous turbid media. The method exploits the one-to-one correspondence between the seed value of a random number generator and the sequence of random numbers. In the first step, a full MC simulation is run for the initial distribution of the optical properties and the “good” seeds (the ones leading to detected photons) are stored in an array. In the second step, we run a new MC simulation with only the good seeds stored in the first step, i.e., we propagate only detected photons. The effect of a change in the optical properties is calculated in a short time by using two scaling relationships. By this method we can increase the speed of a simulation up to a factor of 1300 in typical situations found in near-IR tissue spectroscopy and diffuse optical tomography, with a minimal requirement for hard disk space. Potential applications of this method for imaging of turbid media and the inverse problem are discussed. PMID:21633460
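The seed-reuse idea can be sketched with a toy 1-D slab "simulation": step 1 finds the seeds whose random-number sequences produce detected photons, and step 2 replays only those seeds, rescaling the weights for a perturbed absorption coefficient via Beer-Lambert scaling. The slab model and coefficients are our own stand-in for a full MC, and the path bookkeeping is kept deliberately crude:

```python
import math
import random

MU_S, SLAB = 5.0, 1.0                    # scattering coeff. (1/mm) and slab thickness

def trace(seed, mu_a):
    rng = random.Random(seed)            # the seed fixes the entire photon path
    z, direction, path = 0.0, 1.0, 0.0
    while 0.0 <= z <= SLAB:
        step = -math.log(rng.random()) / MU_S
        z += direction * step; path += step
        direction = rng.choice((-1.0, 1.0))
    detected = z > SLAB                  # photon crossed the far face
    return detected, math.exp(-mu_a * path)   # absorption handled by weight scaling

# Step 1: full run; store the "good" seeds (those leading to detected photons).
good = [s for s in range(20000) if trace(s, mu_a=0.1)[0]]

# Step 2: replay only the good seeds. The geometric path is identical, so a
# new mu_a merely rescales each detected photon's weight.
signal = sum(trace(s, mu_a=0.2)[1] for s in good) / 20000
```

Because the scattering path depends only on the seed, the perturbed result requires propagating only the small detected subset, which is the source of the reported speed-up.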

Sassaroli, Angelo

2012-01-01

240

Monte Carlo simulations of primitive models for ionic systems using the Wolf method

NASA Astrophysics Data System (ADS)

Thermodynamic and structural properties of primitive models for electrolyte solutions and molten salts were studied using NVT and NPT Monte Carlo simulations. The Coulombic interactions were simulated using the Wolf method [D. Wolf, Phys. Rev. Lett. 68, 3315 (1992); D. Wolf, P. Keblinski, S. R. Phillpot, and J. Eggebrecht, J. Chem. Phys. 110, 8254 (1999)]. Results for 1 : 1 and 2 : 1 charge ratio electroneutral systems are presented, using the restricted and non-restricted primitive models, as well as a soft PM pair potential for a monovalent salt [J.-P. Hansen and I. R. McDonald, Phys. Rev. A 11, 2111 (1975)] that has also been used to model 2 : 12 and 1 : 20 asymmetric colloidal systems, with size ratios 1 : 10 and 2 : 15, respectively [B. Hribar, Y. V. Kalyuzhnyi, and V. Vlachy, Mol. Phys. 87, 1317 (1996)]. We present the predictions obtained for these systems using the Wolf method. Our results are in very good agreement with simulation data obtained with the Ewald sum method as well as with integral-equation theory results. We discuss the relevance of the Wolf method in the context of variable-ranged potentials in molecular thermodynamic theories for complex fluids.
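The Wolf damped, shifted Coulomb sum is compact enough to sketch directly. This is our own minimal version (Gaussian units): the pair term is q_i q_j [erfc(alpha r)/r − erfc(alpha Rc)/Rc] for r < Rc, plus the self term −(erfc(alpha Rc)/(2 Rc) + alpha/sqrt(pi)) sum q_i^2; the alpha and Rc values are illustrative choices:

```python
import math

ALPHA, RC = 0.05, 20.0          # damping parameter and cutoff (illustrative)

def wolf_energy(charges, positions):
    shift = math.erfc(ALPHA * RC) / RC
    # self term, applied once per charge
    e = -(math.erfc(ALPHA * RC) / (2 * RC) + ALPHA / math.sqrt(math.pi)) \
        * sum(q * q for q in charges)
    for i in range(len(charges)):
        for j in range(i + 1, len(charges)):
            r = math.dist(positions[i], positions[j])
            if r < RC:
                # damped, shifted pair term (zero at the cutoff)
                e += charges[i] * charges[j] * (math.erfc(ALPHA * r) / r - shift)
    return e

# Two opposite unit charges 1.0 apart: the Wolf estimate is close to the
# exact Coulomb energy of -1 for this well-damped, large-cutoff choice.
e_pair = wolf_energy([1.0, -1.0], [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])
```

Unlike the Ewald sum, the method needs no reciprocal-space part, which is what makes it attractive for the variable-ranged potentials discussed above.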

Avendaño, Carlos; Gil-Villegas, Alejandro

241

At the present time a Monte Carlo transport computer code is being designed and implemented at Lawrence Livermore National Laboratory to include the transport of neutrons, photons, electrons and light charged particles, as well as the coupling between all species of particles, e.g., photon-induced electron emission. Since this code is being designed to handle all particles, this approach is called the ''All Particle Method''. The code is designed as a test bed code to include as many different methods as possible (e.g., electron single or multiple scattering) and will be data driven to minimize the number of methods and models ''hard wired'' into the code. This approach will allow changes in the Livermore nuclear and atomic data bases, used to describe the interaction and production of particles, to directly control the execution of the program. In addition, this approach will allow the code to be used at various levels of complexity to balance computer running time against the accuracy requirements of specific applications. This paper describes the current design philosophy and status of the code. Since the treatment of neutrons and photons used by the All Particle Method code is more or less conventional, emphasis in this paper is placed on the treatment of electron, and to a lesser degree charged particle, transport. An example is presented in order to illustrate an application in which the ability to accurately transport electrons is important. 21 refs., 1 fig.

Cullen, D.E.; Perkins, S.T.; Plechaty, E.F.; Rathkopf, J.A.

1988-06-01

242

Monte Carlo Extension of Quasi-Monte Carlo

Monte Carlo Extension of Quasi-Monte Carlo. Art B. Owen, Department of Statistics, Stanford University, Stanford CA 94305, U.S.A. Abstract: This paper surveys recent research on using Monte Carlo techniques to improve quasi-Monte Carlo techniques. Randomized quasi-Monte Carlo methods provide a basis for error

Owen, Art

243

Statistical Properties of Nuclei by the Shell Model Monte Carlo Method

We use quantum Monte Carlo methods in the framework of the interacting nuclear shell model to calculate the statistical properties of nuclei at finite temperature and/or excitation energies. With this approach we can carry out realistic calculations in much larger configuration spaces than are possible by conventional methods. A major application of the methods has been the microscopic calculation of nuclear partition functions and level densities, taking into account both correlations and shell effects. Our results for nuclei in the mass region A ~ 50 - 70 are in remarkably good agreement with experimental level densities without any adjustable parameters and are an improvement over empirical formulas. We have recently extended the shell model theory of level statistics to higher temperatures, including continuum effects. We have also constructed simple statistical models to explain the dependence of the microscopically calculated level densities on good quantum numbers such as parity. Thermal signatures of pairing correlations are identified through odd-even effects in the heat capacity.

Y. Alhassid

2006-04-26

244

Hypothetical scanning Monte Carlo (HSMC) is a method for calculating the absolute entropy S and free energy F from a given MC trajectory developed recently and applied to liquid argon, TIP3P water, and peptides. In this paper HSMC is extended to random coil polymers by applying it to self-avoiding walks on a square lattice--a simple but difficult model due to strong excluded volume interactions. With HSMC the probability of a given chain is obtained as a product of transition probabilities calculated for each bond by MC simulations and a counting formula. This probability is exact in the sense that it is based on all the interactions of the system and the only approximation is due to finite sampling. The method provides rigorous upper and lower bounds for F, which can be obtained from a very small sample and even from a single chain conformation. HSMC is independent of existing techniques and thus constitutes an independent research tool. The HSMC results are compared to those obtained by other methods, and its application to complex lattice chain models is discussed; we emphasize its ability to treat any type of boundary conditions for which a reference state (with known free energy) might be difficult to define for a thermodynamic integration process. Finally, we stress that the capability of HSMC to extract the absolute entropy from a given sample is important for studying relaxation processes, such as protein folding. PMID:16356071
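The idea of expressing a chain's probability as a product of per-bond transition probabilities can be illustrated with a much cruder Rosenbluth-style growth of a self-avoiding walk on the square lattice (a sketch in the spirit of HSMC, not the method itself, which computes these probabilities by MC simulation over all interactions):

```python
import math, random

def grow_saw(n_steps, rng):
    """Grow a self-avoiding walk on the square lattice step by step; the
    chain's construction probability is the product of per-bond transition
    probabilities (uniform over the free neighbours at each step)."""
    walk = [(0, 0)]
    occupied = {(0, 0)}
    prob = 1.0
    for _ in range(n_steps):
        x, y = walk[-1]
        free = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (x + dx, y + dy) not in occupied]
        if not free:        # walk trapped itself: attrition
            return None, 0.0
        prob /= len(free)   # transition probability of the chosen bond
        nxt = rng.choice(free)
        walk.append(nxt)
        occupied.add(nxt)
    return walk, prob

# -ln(prob) of a grown chain plays the role of a construction "entropy";
# averaging suitable functions of prob over chains yields free-energy bounds.
rng = random.Random(1)
chains = [grow_saw(15, rng) for _ in range(200)]
survivors = [(w, p) for w, p in chains if w is not None]
```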

White, Ronald P; Meirovitch, Hagai

2005-12-01

245

Density-of-states based Monte Carlo methods for simulation of biological systems

NASA Astrophysics Data System (ADS)

We have developed density-of-states [1] based Monte Carlo techniques for simulation of biological molecules. Two such methods are discussed. The first, Configurational Temperature Density of States (CTDOS) [2], relies on computing the density of states of a peptide system from knowledge of its configurational temperature. The reciprocal of this intrinsic temperature, computed from instantaneous configurational information of the system, is integrated to arrive at the density of states. The method shows improved efficiency and accuracy over techniques that are based on histograms of random visits to distinct energy states. The second approach, Expanded Ensemble Density of States (EXEDOS), incorporates elements from both the random walk method and the expanded ensemble formalism. It is used in this work to study mechanical deformation of model peptides. Results are presented in the form of force-extension curves and the corresponding potentials of mean force. The application of this proposed technique is further generalized to other biological systems; results will be presented for ion transport through protein channels, base stacking in nucleic acids and hybridization of DNA strands. [1]. F. Wang and D. P. Landau, Phys. Rev. Lett., 86, 2050 (2001). [2]. N. Rathore, T. A. Knotts IV and J. J. de Pablo, Biophys. J., Dec. (2003).
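The Wang-Landau flat-histogram idea cited here [1] can be sketched on a toy system whose true density of states is uniform; all parameters below (number of levels, flatness criterion, final modification factor) are illustrative choices, not those used for the peptide systems.

```python
import math, random

def wang_landau(n_states=8, flatness=0.8, f_final=1e-4, seed=2):
    """Wang-Landau random walk over discrete 'energy levels' 0..n-1 whose
    true density of states is uniform; accumulates ln g(E) until the visit
    histogram is flat and the modification factor ln f is small."""
    rng = random.Random(seed)
    ln_g = [0.0] * n_states
    ln_f = 1.0
    state = 0
    while ln_f > f_final:
        hist = [0] * n_states
        while True:
            # propose a neighbouring level (clamped at the boundaries)
            new = min(n_states - 1, max(0, state + rng.choice((-1, 1))))
            # accept with probability min(1, g(old)/g(new))
            if math.log(rng.random() + 1e-300) < ln_g[state] - ln_g[new]:
                state = new
            ln_g[state] += ln_f     # update the running density of states
            hist[state] += 1
            mean = sum(hist) / n_states
            if min(hist) > flatness * mean and min(hist) > 10:
                break               # histogram flat: refine ln f
        ln_f /= 2.0
    return ln_g

ln_g = wang_landau()
# For a uniform density of states the estimated ln g should be nearly flat:
spread = max(ln_g) - min(ln_g)
```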

Rathore, Nitin; Knotts, Thomas A.; de Pablo, Juan J.

2004-03-01

246

Exact ground state Monte Carlo method for Bosons without importance sampling

Generally "exact" quantum Monte Carlo computations for the ground state of many bosons make use of importance sampling. The importance sampling is based either on a guiding function or on an initial variational wave function. Here we investigate the need for importance sampling in the case of path integral ground state (PIGS) Monte Carlo. PIGS is based on a discrete

M. Rossi; M. Nava; L. Reatto; D. E. Galli

2009-01-01

247

We examine the Gaussian-basis Monte Carlo (GBMC) method introduced by Corney and Drummond. This method is based on an expansion of the density-matrix operator ρ̂ in terms of the coherent Gaussian-type operator basis Λ̂ and does not suffer from the minus-sign problem. The original method, however, often fails in reproducing the true ground state and causes systematic errors of calculated

Takeshi Aimi; Masatoshi Imada

2007-01-01

248

On Monte Carlo and molecular dynamics methods inspired by Tsallis statistics: Methodology a generalized statistical distribution derived from a modification of the Gibbs-Shannon entropy proposed of the phase space may result in distinct time averages. Statistical theories of chemical systems are often

Straub, John E.

249

A honeycomb probe was designed to measure the optical properties of biological tissues using single Monte Carlo method. The ongoing project is intended to be a multi-wavelength, real time, and in-vivo technique to detect breast cancer. Preliminary...

Bendele, Travis Henry

2013-02-22

250

This paper presents the development of a tracking algorithm for multi-sensor single target tracking in the presence of asynchronous or missing measurements and high clutter levels. The algorithm is based upon the random sample representation of state PDFs and uses sequential Monte Carlo or “particle” filtering methods to perform prediction and update. The performance of the algorithm is illustrated on

Alan D. Marrs

2001-01-01

251

This paper presents systematic developments in the previously initiated line of research concerning a quantum Monte Carlo (QMC) method based on the use of a pure diffusion process corresponding to some reference function and a generalized Feynman–Kac path integral formalism. Not only mean values of quantum observables, but also response properties are expressed using suitable path integrals involving the diffusion

Michel Caffarel; Pierre Claverie

1988-01-01

252

Monte Carlo calculations are increasingly used to assess stray radiation dose to healthy organs of proton therapy patients and estimate the risk of secondary cancer. Among the secondary particles, neutrons are of primary concern due to their high relative biological effectiveness. The validation of Monte Carlo simulations for out-of-field neutron doses remains however a major challenge to the community. Therefore this work focused on developing a global experimental approach to test the reliability of the MCNPX models of two proton therapy installations operating at 75 and 178 MeV for ocular and intracranial tumor treatments, respectively. The method consists of comparing Monte Carlo calculations against experimental measurements of: (a) neutron spectrometry inside the treatment room, (b) neutron ambient dose equivalent at several points within the treatment room, (c) secondary organ-specific neutron doses inside the Rando-Alderson anthropomorphic phantom. Results have proven that Monte Carlo models correctly reproduce secondary neutrons within the two proton therapy treatment rooms. Sensitive differences between experimental measurements and simulations were nonetheless observed especially with the highest beam energy. The study demonstrated the need for improved measurement tools, especially at the high neutron energy range, and more accurate physical models and cross sections within the Monte Carlo code to correctly assess secondary neutron doses in proton therapy applications. PMID:24800943

Farah, J; Martinetti, F; Sayah, R; Lacoste, V; Donadille, L; Trompier, F; Nauraye, C; De Marzi, L; Vabre, I; Delacroix, S; Hérault, J; Clairand, I

2014-06-01

253

NASA Astrophysics Data System (ADS)

Monte Carlo calculations are increasingly used to assess stray radiation dose to healthy organs of proton therapy patients and estimate the risk of secondary cancer. Among the secondary particles, neutrons are of primary concern due to their high relative biological effectiveness. The validation of Monte Carlo simulations for out-of-field neutron doses remains however a major challenge to the community. Therefore this work focused on developing a global experimental approach to test the reliability of the MCNPX models of two proton therapy installations operating at 75 and 178 MeV for ocular and intracranial tumor treatments, respectively. The method consists of comparing Monte Carlo calculations against experimental measurements of: (a) neutron spectrometry inside the treatment room, (b) neutron ambient dose equivalent at several points within the treatment room, (c) secondary organ-specific neutron doses inside the Rando-Alderson anthropomorphic phantom. Results have proven that Monte Carlo models correctly reproduce secondary neutrons within the two proton therapy treatment rooms. Sensitive differences between experimental measurements and simulations were nonetheless observed especially with the highest beam energy. The study demonstrated the need for improved measurement tools, especially at the high neutron energy range, and more accurate physical models and cross sections within the Monte Carlo code to correctly assess secondary neutron doses in proton therapy applications.

Farah, J.; Martinetti, F.; Sayah, R.; Lacoste, V.; Donadille, L.; Trompier, F.; Nauraye, C.; De Marzi, L.; Vabre, I.; Delacroix, S.; Hérault, J.; Clairand, I.

2014-06-01

254

NASA Astrophysics Data System (ADS)

A reverse Monte Carlo (RMC) method is developed to obtain the energy loss function (ELF) and optical constants from a measured reflection electron energy-loss spectroscopy (REELS) spectrum by an iterative Monte Carlo (MC) simulation procedure. The method combines the simulated annealing method, i.e., a Markov chain Monte Carlo (MCMC) sampling of oscillator parameters, surface and bulk excitation weighting factors, and band gap energy, with a conventional MC simulation of electron interaction with solids, which acts as a single step of MCMC sampling in this RMC method. To examine the reliability of this method, we have verified that the output data of the dielectric function are essentially independent of the initial values of the trial parameters, which is a basic property of a MCMC method. The optical constants derived for SiO2 in the energy loss range of 8-90 eV are in good agreement with other available data, and relevant bulk ELFs are checked by oscillator strength-sum and perfect-screening-sum rules. Our results show that the dielectric function can be obtained by the RMC method even with a wide range of initial trial parameters. The RMC method is thus a general and effective method for determining the optical properties of solids from REELS measurements.

Da, B.; Sun, Y.; Mao, S. F.; Zhang, Z. M.; Jin, H.; Yoshikawa, H.; Tanuma, S.; Ding, Z. J.

2013-06-01

255

A Sequential Monte Carlo Method for Bayesian Analysis of Massive Datasets.

Markov chain Monte Carlo (MCMC) techniques revolutionized statistical practice in the 1990s by providing an essential toolkit for making the rigor and flexibility of Bayesian analysis computationally practical. At the same time the increasing prevalence of massive datasets and the expansion of the field of data mining has created the need for statistically sound methods that scale to these large problems. Except for the most trivial examples, current MCMC methods require a complete scan of the dataset for each iteration, eliminating their candidacy as feasible data mining techniques. In this article we present a method for making Bayesian analysis of massive datasets computationally feasible. The algorithm simulates from a posterior distribution that conditions on a smaller, more manageable portion of the dataset. The remainder of the dataset may be incorporated by reweighting the initial draws using importance sampling. Computation of the importance weights requires a single scan of the remaining observations. While importance sampling increases efficiency in data access, it comes at the expense of estimation efficiency. A simple modification, based on the "rejuvenation" step used in particle filters for dynamic systems models, sidesteps the loss of efficiency with only a slight increase in the number of data accesses. To show proof-of-concept, we demonstrate the method on two examples. The first is a mixture of transition models that has been used to model web traffic and robotics. For this example we show that estimation efficiency is not affected while offering a 99% reduction in data accesses. The second example applies the method to Bayesian logistic regression and yields a 98% reduction in data accesses. PMID:19789656
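The subset-posterior-plus-importance-reweighting idea can be sketched in a conjugate normal-mean model (synthetic data, not the authors' mixture or logistic examples): draw from the posterior given a small chunk, then reweight each draw by the likelihood of the remainder, whose sufficient statistics need only a single scan.

```python
import math, random

random.seed(3)
# Synthetic data: y_i ~ N(theta, 1) with theta = 2, and a N(0, 10^2) prior.
data = [2.0 + random.gauss(0.0, 1.0) for _ in range(100)]
chunk, rest = data[:20], data[20:]

def post_params(obs, prior_mu=0.0, prior_var=100.0, lik_var=1.0):
    """Conjugate posterior (mean, variance) for a normal mean, known variance."""
    var = 1.0 / (1.0 / prior_var + len(obs) / lik_var)
    return var * (prior_mu / prior_var + sum(obs) / lik_var), var

# Step 1: sample the posterior conditioned on the manageable chunk only.
mu1, var1 = post_params(chunk)
draws = [random.gauss(mu1, math.sqrt(var1)) for _ in range(20000)]

# Step 2: importance-reweight by the remainder's likelihood; the sufficient
# statistics below come from one scan of the remaining observations.
n2, s1, s2 = len(rest), sum(rest), sum(y * y for y in rest)
lw = [-0.5 * (s2 - 2.0 * t * s1 + n2 * t * t) for t in draws]
m = max(lw)                              # stabilize before exponentiating
w = [math.exp(v - m) for v in lw]
est = sum(wi * ti for wi, ti in zip(w, draws)) / sum(w)

mu_full, _ = post_params(data)  # exact full-data posterior mean for comparison
```

The reweighted estimate recovers the full-data posterior mean even though sampling conditioned only on the first 20 observations.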

Ridgeway, Greg; Madigan, David

2003-07-01

256

Building proteins from C alpha coordinates using the dihedral probability grid Monte Carlo method.

Dihedral probability grid Monte Carlo (DPG-MC) is a general-purpose method of conformational sampling that can be applied to many problems in peptide and protein modeling. Here we present the DPG-MC method and apply it to predicting complete protein structures from C alpha coordinates. This is useful in such endeavors as homology modeling, protein structure prediction from lattice simulations, or fitting protein structures to X-ray crystallographic data. It also serves as an example of how DPG-MC can be applied to systems with geometric constraints. The conformational propensities for individual residues are used to guide conformational searches as the protein is built from the amino-terminus to the carboxyl-terminus. Results for a number of proteins show that both the backbone and side chain can be accurately modeled using DPG-MC. Backbone atoms are generally predicted with RMS errors of about 0.5 Å (compared to X-ray crystal structure coordinates), and all atoms are predicted to an RMS error of 1.7 Å or better. PMID:7549885

Mathiowetz, A. M.; Goddard, W. A.

1995-01-01

257

Cu-Au Alloys Using Monte Carlo Simulations and the BFS Method for Alloys

NASA Technical Reports Server (NTRS)

Semi-empirical methods have shown considerable promise in aiding the calculation of many properties of materials. Materials used in engineering applications have defects that occur for various reasons, including processing. In this work we present the first application of the BFS method for alloys to describe some aspects of microstructure due to processing for the Cu-Au system (Cu-Au, CuAu3, and Cu3Au). We use finite-temperature Monte Carlo calculations in order to show the influence of 'heat treatment' on the low-temperature phase of the alloy. Although relatively simple, this system has enough features to serve as a first test of the reliability of the technique. The main questions addressed in this work relate to the existence of low-temperature ordered structures at specific concentrations, for example, the ability to distinguish between rather similar phases for equiatomic alloys (CuAu I and CuAu II, the latter characterized by an antiphase boundary separating two identical phases).

Bozzolo, Guillermo; Good, Brian; Ferrante, John

1996-01-01

258

Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods

The Consortium for Advanced Simulation of Light Water Reactors is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominately as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.

Godfrey, Andrew T [ORNL; Gehin, Jess C [ORNL; Bekar, Kursat B [ORNL; Celik, Cihangir [ORNL

2014-01-01

259

Bead-Fourier path-integral Monte Carlo method applied to systems of identical particles

To make the path-integral Monte Carlo (PIMC) method more effective and practical in application to systems of identical particles with strong interactions, we introduce a combined bead-Fourier (BF) PIMC approach, with the ordinary bead method and the Fourier PIMC method of Doll and Freeman [J. Chem. Phys. 80, 2239 (1984); 80, 5709 (1984)] being its extreme cases. Optimal choice of the number of beads and of Fourier components enables us to reproduce reliably the ground-state energy and electron density distribution in the H atom as well as the exact data for the harmonic oscillator. Applying the BF method to systems of identical particles, we use the procedure of simultaneous accounting for all classes of permutations suggested in the previous work [Phys. Rev. A 48, 4075 (1993)] with subsequent symmetrization of the exchange factor in the weight function to make the sign problem milder. A procedure of random walk in the spin space enables us to obtain spin-dependent averages. We derived exact partition functions and canonical averages for a model system of N noninteracting identical particles (N = 2, 3, 4, …) with spin (fermions or bosons) in a d-dimensional harmonic field (d = 1, 2, 3) that provided a reliable test of the developed MC procedures. Simulations for N = 2, 3 reproduce well the exact dependencies. Further simulations showed how gradual switching on of the electrostatic repulsion between particles in this system results in significant weakening of the exchange effects. © 1997 The American Physical Society

Vorontsov-Velyaminov, P.N.; Nesvit, M.O.; Gorbunov, R.I. [Faculty of Physics, St. Petersburg State University, 198904, St. Petersburg (Russia)]

1997-02-01

260

Reduced Monte Carlo methods for the solution of stochastic groundwater flow problems

NASA Astrophysics Data System (ADS)

Reduced order modeling is often employed to decrease the computational cost of numerical solutions of parametric Partial Differential Equations. Reduced basis, balanced truncation, projections methods are among the most studied techniques to achieve model reduction. We study the applicability of snapshot-based Proper Orthogonal Decomposition (POD) to Monte Carlo (MC) simulations applied to the solution of the stochastic groundwater flow problem. POD model reduction is obtained by projecting the model equations onto a space generated by a small number of basis functions (principal components). These are obtained upon exploring the solution (probability) space with snapshots, i.e., system states obtained by solving the original process-based equations. The reduced model is then employed to complete the ensemble by adding multiple realizations. We apply this technique to a two dimensional simulation of steady state saturated groundwater flow, and explore the sensitivity of the method to the number of snapshots and associated principal components in terms of accuracy and efficiency of the overall MC procedure. In our preliminary results, we distinguish the problem of heterogeneous recharge, in which the stochastic term is confined to the forcing function (additive stochasticity), from the case of heterogeneous hydraulic conductivity, in which the stochastic term is multiplicative. In the first scenario, the linearity of the problem is fully exploited and the POD approach yields accurate and efficient realizations, leading to substantial speed up of the MC method. The second scenario poses a significant challenge, as the adoption of a few snapshots based on the full model does not provide enough variability in the reduced order replicates, thus leading to poor convergence of the MC method. We find that increasing the number of snapshots improves the convergence of MC but only for large integral scales of the log-conductivity field. 
The technique is then extended to take full advantage of the solution of moment differential equations of groundwater flow.
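The snapshot-based POD step can be sketched with the classical method of snapshots: build the small Gram matrix of the snapshots, extract its leading eigenvector by power iteration, and lift it back to a physical-space mode. This is a pure-Python toy under the stated assumptions; a real solver would use an SVD of the snapshot matrix.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pod_leading_mode(snapshots, iters=200):
    """Method of snapshots: leading POD mode from the m x m Gram matrix
    of the snapshots, via power iteration (small problems only)."""
    m = len(snapshots)
    C = [[dot(si, sj) for sj in snapshots] for si in snapshots]
    v = [1.0] * m
    for _ in range(iters):                     # power iteration on C
        w = [sum(C[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = math.sqrt(dot(w, w))
        v = [x / norm for x in w]
    # lift the Gram-matrix eigenvector back to a physical-space mode
    d = len(snapshots[0])
    mode = [sum(v[k] * snapshots[k][i] for k in range(m)) for i in range(d)]
    nrm = math.sqrt(dot(mode, mode))
    return [x / nrm for x in mode]

# Rank-1 check: if every snapshot is a multiple of one field 'phi',
# the leading POD mode must recover phi (up to sign).
phi = [math.sin(0.1 * i) for i in range(50)]
nphi = math.sqrt(dot(phi, phi))
phi_hat = [x / nphi for x in phi]
snaps = [[c * x for x in phi] for c in (1.0, -2.0, 0.5, 3.0)]
mode = pod_leading_mode(snaps)
err = min(sum((a - b) ** 2 for a, b in zip(mode, phi_hat)),
          sum((a + b) ** 2 for a, b in zip(mode, phi_hat)))
```

In the MC setting described above, the retained modes span the reduced space onto which the flow equations are projected for each new realization.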

Pasetto, D.; Guadagnini, A.; Putti, M.

2012-04-01

261

Quantifying uncertainties in pollutant mapping studies using the Monte Carlo method

NASA Astrophysics Data System (ADS)

Routine air monitoring provides accurate measurements of annual average concentrations of air pollutants, but the low density of monitoring sites limits its capability in capturing intra-urban variation. Pollutant mapping studies measure air pollutants at a large number of sites during short periods. However, their short duration can cause substantial uncertainty in reproducing annual mean concentrations. In order to quantify this uncertainty for existing sampling strategies and investigate methods to improve future studies, we conducted Monte Carlo experiments with nationwide monitoring data from the EPA Air Quality System. Typical fixed sampling designs have much larger uncertainties than previously assumed, and produce accurate estimates of annual average pollution concentrations approximately 80% of the time. Mobile sampling has difficulties in estimating long-term exposures for individual sites, but performs better for site groups. The accuracy and the precision of a given design decrease when data variation increases, indicating challenges at sites intermittently impacted by local sources such as traffic. Correcting measurements with reference sites does not completely remove the uncertainty associated with short duration sampling. Using reference sites with the addition method can better account for temporal variations than the multiplication method. We propose feasible methods for future mapping studies to reduce uncertainties in estimating annual mean concentrations. Future fixed sampling studies should conduct two separate 1-week long sampling periods in all 4 seasons. Mobile sampling studies should estimate annual mean concentrations for exposure groups with five or more sites. Fixed and mobile sampling designs have comparable probabilities in ordering two sites, so they may have similar capabilities in predicting pollutant spatial variations.
Simulated sampling designs have large uncertainties in reproducing seasonal and diurnal variations at individual sites, but are capable of predicting these variations for exposure groups.
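The style of Monte Carlo experiment described here can be mimicked on synthetic data: draw many short fixed-site campaigns from a year-long record and measure how often the campaign mean falls within a tolerance of the annual mean. All numbers below are arbitrary illustrative choices, not the EPA Air Quality System data.

```python
import math, random

random.seed(6)
days = 365
# Synthetic daily record: seasonal cycle plus day-to-day noise
# (units and magnitudes are purely illustrative).
conc = [10.0 + 3.0 * math.sin(2.0 * math.pi * d / days) + random.gauss(0.0, 2.0)
        for d in range(days)]
annual_mean = sum(conc) / days

def campaign_mean(start, length=14):
    """Mean over a short fixed-site campaign beginning on day `start`."""
    return sum(conc[(start + i) % days] for i in range(length)) / length

# Monte Carlo over random start dates: how often does a single 2-week
# campaign land within 25% of the true annual mean?
trials = [campaign_mean(random.randrange(days)) for _ in range(2000)]
hit = sum(abs(e - annual_mean) <= 0.25 * annual_mean for e in trials) / len(trials)
```

Repeating the experiment with different campaign lengths, season counts, or noise levels reproduces the kind of design comparison the abstract describes.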

Tan, Yi; Robinson, Allen L.; Presto, Albert A.

2014-12-01

262

The application of the Monte-Carlo method to the calculation of radiation exchange processes in combustion systems is discussed. After a brief introduction, the modeling of radiation exchange and the optical properties of the combustion-chamber suspension are described. The application of the method to technical-scale systems is illustrated for large-scale coal- and lignite-fired combustion plants. Flow and heat release are approximated

K. Goerner; U. Dietz

1993-01-01

263

This paper considers simulation-based approaches for the gamma stochastic frontier model. Efficient Markov chain Monte Carlo methods are proposed for sampling the posterior distribution of the parameters. Maximum likelihood estimation is also discussed based on the stochastic approximation algorithm. The methods are applied to a data set of the U.S. electric utility industry.

Hideo Kozumi; Xingyuan Zhang

2005-01-01

264

The stochastic state point process filter (SSPPF) and steepest descent point process filter (SDPPF) are adaptive filter algorithms for state estimation from point process observations that have been used to track neural receptive field plasticity and to decode the representations of biological signals in ensemble neural spiking activity. The SSPPF and SDPPF are constructed using, respectively, Gaussian and steepest descent approximations to the standard Bayes and Chapman-Kolmogorov (BCK) system of filter equations. To extend these approaches for constructing point process adaptive filters, we develop sequential Monte Carlo (SMC) approximations to the BCK equations in which the SSPPF and SDPPF serve as the proposal densities. We term the two new SMC point process filters SMC-PPFs and SMC-PPFD, respectively. We illustrate the new filter algorithms by decoding the wind stimulus magnitude from simulated neural spiking activity in the cricket cercal system. The SMC-PPFs and SMC-PPFD provide more accurate state estimates at low number of particles than a conventional bootstrap SMC filter algorithm in which the state transition probability density is the proposal density. We also use the SMC-PPFs algorithm to track the temporal evolution of a spatial receptive field of a rat hippocampal neuron recorded while the animal foraged in an open environment. Our results suggest an approach for constructing point process adaptive filters using SMC methods. PMID:17355053
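For contrast with the proposed SMC-PPF filters, the conventional bootstrap SMC filter that the authors use as a baseline can be sketched on a linear-Gaussian toy model (the AR(1) parameters and particle count below are illustrative, not drawn from the neural decoding examples):

```python
import math, random

random.seed(4)
N = 500                      # particles
a, q, r = 0.95, 0.1, 0.5     # state transition, process/observation noise std

# Simulate a latent AR(1) state and noisy observations of it.
x, xs, ys = 0.0, [], []
for _ in range(100):
    x = a * x + random.gauss(0.0, q)
    xs.append(x)
    ys.append(x + random.gauss(0.0, r))

# Bootstrap SMC: the state-transition density is the proposal, weights come
# from the observation likelihood, followed by multinomial resampling.
particles = [0.0] * N
est = []
for y in ys:
    particles = [a * p + random.gauss(0.0, q) for p in particles]   # predict
    w = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in particles]    # update
    tot = sum(w)
    est.append(sum(wi * pi for wi, pi in zip(w, particles)) / tot)
    particles = random.choices(particles, weights=w, k=N)           # resample
```

The SMC-PPF variants replace this blind state-transition proposal with the Gaussian or steepest-descent point-process approximations, which is why they need fewer particles.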

Ergün, Ayla; Barbieri, Riccardo; Eden, Uri T; Wilson, Matthew A; Brown, Emery N

2007-03-01

265

Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis

A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross section data are generated using a collision estimator in MCNP5. A high-order Legendre scattering cross section data generation module was added to the MCNP5 code. Cross section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and were applied in the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic data were used. In this paper, a hybrid MCNP5/DIF3D calculation was used to analyze the core model. The cross section data were generated using MCNP5. The k_eff and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of k_eff, the 9-group MCNP5/DIF3D result has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)

Heo, W.; Kim, W.; Kim, Y. [Korea Advanced Institute of Science and Technology - KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon, 305-701 (Korea, Republic of)]; Yun, S. [Korea Atomic Energy Research Institute - KAERI, 989-111 Daedeok-daero, Yuseong-gu, Daejeon, 305-353 (Korea, Republic of)]

2013-07-01

266

Applications of Monte Carlo methods for the analysis of MHTGR case of the PROTEUS benchmark

Monte Carlo methods, as implemented in the MCNP code, have been used to analyze the neutronics characteristics of benchmarks related to Modular High Temperature Gas-Cooled Reactors. The benchmarks are idealized versions of the Japanese (VHTRC) and Swiss (PROTEUS) facilities and an actual configuration of the PROTEUS Configuration I experiment. The purpose of the unit cell benchmarks is to compare multiplication constants, critical bucklings, migration lengths, reaction rates and spectral indices. The purpose of the full reactor benchmarks is to compare multiplication constants, reaction rates, spectral indices, neutron balances, reaction rate profiles, temperature coefficients of reactivity and effective delayed neutron fractions. All of these parameters can be calculated by MCNP, which can provide a very detailed model of the geometry of the configurations, from fuel particles to entire fuel assemblies, while using a continuous energy model. These characteristics make MCNP a very useful tool to analyze these MHTGR benchmarks. We have used the latest MCNP version, 4.x, eld = 01/12/93, with an ENDF/B-V cross section library. This library does not yet contain temperature dependent resonance materials, so all calculations correspond to room temperature, T = 300 K. Two separate reports were made -- one for the VHTRC, the other for the PROTEUS benchmark.

Difilippo, F.C.

1994-04-01

267

Applications of Monte Carlo methods for the analysis of MHTGR case of the VHTRC benchmark

Monte Carlo methods, as implemented in the MCNP code, have been used to analyze the neutronics characteristics of benchmarks related to Modular High Temperature Gas-Cooled Reactors. The benchmarks are idealized versions of the Japanese (VHTRC) and Swiss (PROTEUS) facilities and an actual configuration of the PROTEUS Configuration 1 experiment. The purpose of the unit cell benchmarks is to compare multiplication constants, critical bucklings, migration lengths, reaction rates and spectral indices. The purpose of the full reactor benchmarks is to compare multiplication constants, reaction rates, spectral indices, neutron balances, reaction rate profiles, temperature coefficients of reactivity and effective delayed neutron fractions. All of these parameters can be calculated by MCNP, which can provide a very detailed model of the geometry of the configurations, from fuel particles to entire fuel assemblies, while using a continuous energy model. These characteristics make MCNP a very useful tool to analyze these MHTGR benchmarks. The author has used the latest MCNP version, 4.x, eld = 01/12/93, with an ENDF/B-V cross section library. This library does not yet contain temperature dependent resonance materials, so all calculations correspond to room temperature, T = 300 K. Two separate reports were made -- one for the VHTRC, the other for the PROTEUS benchmark.

Difilippo, F.C.

1994-03-01

268

Kinetic Monte Carlo method for dislocation migration in the presence of solute

We present a kinetic Monte Carlo method for simulating dislocation motion in alloys within the framework of the kink model. The model considers the glide of a dislocation in a static, three-dimensional solute atom atmosphere. It includes both a description of the short-range interaction between a dislocation core and the solute and long-range solute-dislocation interactions arising from the interplay of the solute misfit and the dislocation stress field. Double-kink nucleation rates are calculated using a first-passage-time analysis that accounts for the subcritical annihilation of embryonic double kinks as well as the presence of solutes. We explicitly consider the case of the motion of a <111>-oriented screw dislocation on a {011} slip plane in body-centered-cubic Mo-based alloys. Simulations yield dislocation velocity as a function of stress, temperature, and solute concentration. The dislocation velocity results are shown to be consistent with existing experimental data and, in some cases, analytical models. Application of this model depends upon the validity of the kink model and the availability of fundamental properties (i.e., single-kink energy, Peierls stress, secondary Peierls barrier to kink migration, single-kink mobility, solute-kink interaction energies, solute misfit), which can be obtained from first-principles calculations and/or molecular-dynamics simulations.
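The event-selection loop underlying a kink-model simulation like the one above can be sketched with the standard residence-time (BKL) kinetic Monte Carlo algorithm. This is a generic illustration, not the authors' code; the Arrhenius barriers and attempt frequency below are made-up placeholders, not the first-principles inputs the abstract lists.

```python
import math
import random

def kmc_step(rates, rng=random.Random(0)):
    """One residence-time (BKL) kinetic Monte Carlo step.

    rates: list of event rates (e.g. double-kink nucleation or
    kink-migration rates at each site). Returns the index of the
    chosen event and the elapsed time increment.
    """
    total = sum(rates)
    # Choose an event with probability proportional to its rate.
    r = rng.uniform(0.0, total)
    acc = 0.0
    event = len(rates) - 1
    for i, k in enumerate(rates):
        acc += k
        if r <= acc:
            event = i
            break
    # Advance the clock by an exponentially distributed waiting time.
    dt = -math.log(rng.random()) / total
    return event, dt

# Hypothetical barriers (eV) with a 1e13 /s attempt frequency at 300 K.
kT = 8.617e-5 * 300.0
rates = [1e13 * math.exp(-E / kT) for E in (0.60, 0.65, 0.70)]
event, dt = kmc_step(rates)
```

Dislocation velocity then follows from accumulating the glide distance per event over many such steps.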

Deo, Chaitanya S.; Srolovitz, David J.; Cai Wei; Bulatov, Vasily V. [Princeton Materials Institute, Princeton University, Princeton, New Jersey 08540 (United States); Department of Materials Science and Engineering, University of Michigan, Ann Arbor, Michigan 48105 (United States); Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, New Jersey 08544 (United States); Princeton Materials Institute, Princeton University, Princeton, New Jersey 08544 (United States); Chemistry and Materials Science Directorate, Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)

2005-01-01

269

Monte Carlo analysis of thermochromatography as a fast separation method for nuclear forensics

Nuclear forensic science has become increasingly important for global nuclear security, and enhancing the timeliness of forensic analysis has been established as an important objective in the field. New, faster techniques must be developed to meet this objective. Current approaches for the analysis of minor actinides, fission products, and fuel-specific materials require time-consuming chemical separation coupled with measurement through either nuclear counting or mass spectrometry. These very sensitive measurement techniques can be hindered by impurities or incomplete separation in even the most painstaking chemical separations. High-temperature gas-phase separation, or thermochromatography, has been used in the past for rapid separations in the study of newly created elements and as a basis for chemical classification of those elements. This work examines the potential for rapid separation of gaseous species to be applied in nuclear forensic investigations. Monte Carlo modeling has been used to evaluate the potential utility of the thermochromatographic separation method, albeit this assessment is necessarily limited due to the lack of available experimental data for validation.

Hall, Howard L [ORNL]

2012-01-01

270

Improving Bayesian analysis for LISA Pathfinder using an efficient Markov Chain Monte Carlo method

NASA Astrophysics Data System (ADS)

We present a parameter estimation procedure based on a Bayesian framework, applying a Markov Chain Monte Carlo algorithm to the calibration of the dynamical parameters of the LISA Pathfinder satellite. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment in order to ensure an effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm with an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy, but one moves in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher Information matrix. The algorithm proposing jumps in the eigen-space of the Fisher Information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to LISA Pathfinder data. For this experiment, we recover parameter values that are all within ~1 σ of the injected values. When we analyse the accuracy of our parameter estimation in terms of the effect it has on the force-per-unit-mass noise, we find that the induced errors are three orders of magnitude less than the expected experimental uncertainty in the power spectral density.
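A minimal sketch of the coordinate-direction variant: a plain Metropolis sampler with a heating stage that flattens the target early in the chain. The toy target, the ramp schedule, and all names here are illustrative assumptions, not the LISA Pathfinder pipeline.

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=0.5, beta_schedule=None,
               rng=random.Random(1)):
    """Metropolis sampler with an optional heating (annealing) stage.

    log_post: log-posterior function. beta_schedule maps the step index
    to an inverse 'temperature' in (0, 1] that tempers the acceptance
    ratio early on, roughly in the spirit of the two-stage annealing
    described above.
    """
    x = list(x0)
    lp = log_post(x)
    chain = []
    for n in range(n_steps):
        beta = beta_schedule(n) if beta_schedule else 1.0
        # Gaussian proposal in coordinate directions.
        prop = [xi + rng.gauss(0.0, step) for xi in x]
        lp_prop = log_post(prop)
        if math.log(rng.random()) < beta * (lp_prop - lp):
            x, lp = prop, lp_prop
        chain.append(list(x))
    return chain

# Toy target: standard 2-D Gaussian; beta ramps to 1 over the first half.
log_post = lambda x: -0.5 * sum(xi * xi for xi in x)
ramp = lambda n: min(1.0, 0.1 + 0.9 * n / 500.0)
chain = metropolis(log_post, [5.0, -5.0], 1000, beta_schedule=ramp)
```

The Fisher eigen-space variant would replace the coordinate-direction proposal with jumps along the eigenvectors of the Fisher Information matrix, scaled by the corresponding eigenvalues.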

Ferraioli, Luigi; Porter, Edward K.; Armano, Michele; Audley, Heather; Congedo, Giuseppe; Diepholz, Ingo; Gibert, Ferran; Hewitson, Martin; Hueller, Mauro; Karnesis, Nikolaos; Korsakova, Natalia; Nofrarias, Miquel; Plagnol, Eric; Vitale, Stefano

2014-02-01

271

Four different probabilistic risk assessment methods were compared using the data from the Sangamo Weston\\/Lake Hartwell Superfund site. These were one-dimensional Monte Carlo, two-dimensional Monte Carlo considering uncertainty in the concentration term, two-dimensional Monte Carlo considering uncertainty in ingestion rate, and microexposure event analysis. Estimated high-end risks ranged from 2.0×10 to 3.3×10. Microexposure event analysis produced a lower risk estimate
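The one-dimensional Monte Carlo approach amounts to repeated sampling of a standard intake-risk equation. Every distribution and parameter value below is a hypothetical placeholder for illustration, not the Sangamo Weston/Lake Hartwell inputs.

```python
import random

def mc_risk(n, rng=random.Random(2)):
    """One-dimensional Monte Carlo risk estimate (illustrative only).

    Risk = C * IR * EF * ED * CSF / (BW * AT); distributions and
    values are invented placeholders.
    """
    risks = []
    for _ in range(n):
        C = rng.lognormvariate(0.0, 0.5)    # fish tissue conc. (mg/kg)
        IR = rng.lognormvariate(-3.0, 0.8)  # ingestion rate (kg/day)
        BW = rng.gauss(70.0, 10.0)          # body weight (kg)
        EF, ED, AT = 350.0, 30.0, 25550.0   # days/yr, yr, days
        CSF = 2.0                           # slope factor (mg/kg-day)^-1
        risks.append(C * IR * EF * ED * CSF / (BW * AT))
    risks.sort()
    return risks[int(0.95 * n)]             # "high-end" 95th percentile

high_end = mc_risk(10000)
```

A two-dimensional Monte Carlo of the kind compared above would wrap a second, outer loop that also samples the uncertain parameters of these distributions (e.g. the concentration term or the ingestion rate).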

Ted W. Simon

1999-01-01

272

HRMC_1.1: Hybrid Reverse Monte Carlo method with silicon and carbon potentials

NASA Astrophysics Data System (ADS)

The Hybrid Reverse Monte Carlo (HRMC) code models the atomic structure of materials via a combination of constraints including experimental diffraction data and an empirical energy potential. This energy constraint takes the form of either the Environment Dependent Interatomic Potential (EDIP) for carbon and silicon, or the original and modified Stillinger-Weber potentials applicable to silicon. In this version, an update is made to correct an error in the EDIP carbon energy calculation routine.
New version program summary
Program title: HRMC version 1.1
Catalogue identifier: AEAO_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAO_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 36 991
No. of bytes in distributed program, including test data, etc.: 907 800
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: Any computer capable of running executables produced by the g77 Fortran compiler.
Operating system: Unix, Windows
RAM: Depends on the type of empirical potential used, the number of atoms and which constraints are employed.
Classification: 7.7
Catalogue identifier of previous version: AEAO_v1_0
Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 777
Does the new version supersede the previous version?: Yes
Nature of problem: Atomic modelling using empirical potentials and experimental data.
Solution method: Monte Carlo
Reasons for new version: An error in a term associated with the calculation of energies using the EDIP carbon potential, which resulted in incorrect energies.
Summary of revisions: Fix to correct brackets in the two-body part of the EDIP carbon potential routine.
Additional comments: The code is not standard FORTRAN 77 but includes some additional features, and therefore generates errors when compiled using the Nag95 compiler. It does compile successfully with the GNU g77 compiler (http://www.gnu.org/software/fortran/fortran.html).
Running time: Depends on the type of empirical potential used, the number of atoms and which constraints are employed. The test included in the distribution took 37 minutes on a DEC Alpha PC.

Opletal, G.; Petersen, T. C.; O'Malley, B.; Snook, I. K.; McCulloch, D. G.; Yarovsky, I.

2011-02-01

273

Dosimetric validation of Acuros XB with Monte Carlo methods for photon dose calculations

Purpose: The dosimetric accuracy of the recently released Acuros XB advanced dose calculation algorithm (Varian Medical Systems, Palo Alto, CA) is investigated for single radiation fields incident on homogeneous and heterogeneous geometries, and a comparison is made to the analytical anisotropic algorithm (AAA). Methods: Ion chamber measurements for the 6 and 18 MV beams within a range of field sizes (from 4.0×4.0 to 30.0×30.0 cm²) are used to validate Acuros XB dose calculations within a unit density phantom. The dosimetric accuracy of Acuros XB in the presence of lung, low-density lung, air, and bone is determined using BEAMnrc/DOSXYZnrc calculations as a benchmark. Calculations using the AAA are included for reference to a current superposition/convolution standard. Results: Basic open field tests in a homogeneous phantom reveal an Acuros XB agreement with measurement to within ±1.9% in the inner field region for all field sizes and energies. Calculations on a heterogeneous interface phantom were found to agree with Monte Carlo calculations to within ±2.0% (σ_MC = 0.8%) in lung (ρ = 0.24 g cm⁻³) and within ±2.9% (σ_MC = 0.8%) in low-density lung (ρ = 0.1 g cm⁻³). In comparison, differences of up to 10.2% and 17.5% in lung and low-density lung were observed in the equivalent AAA calculations. Acuros XB dose calculations performed on a phantom containing an air cavity (ρ = 0.001 g cm⁻³) were found to be within the range of ±1.5% to ±4.5% of the BEAMnrc/DOSXYZnrc calculated benchmark (σ_MC = 0.8%) in the tissue above and below the air cavity. A comparison of Acuros XB dose calculations performed on a lung CT dataset with a BEAMnrc/DOSXYZnrc benchmark shows agreement within ±2%/2 mm and indicates that the remaining differences are primarily a result of differences in physical material assignments within a CT dataset.
Conclusions: By considering the fundamental particle interactions in matter based on theoretical interaction cross sections, the Acuros XB algorithm is capable of modeling radiotherapy dose deposition with accuracy only previously achievable with Monte Carlo techniques.

Bush, K.; Gagne, I. M.; Zavgorodni, S.; Ansbacher, W.; Beckham, W. [Department of Medical Physics, British Columbia Cancer Agency-Vancouver Island Center, Victoria, British Columbia V8R 6V5 (Canada)]

2011-04-15

274

Accuracy of Monte Carlo Criticality Calculations During BR2 Operation

The Belgian Material Test Reactor BR2 is a strongly heterogeneous high-flux engineering test reactor at SCK-CEN (Centre d'Etude de l'Energie Nucleaire) in Mol with a thermal power of 60 to 100 MW. It deploys highly enriched uranium, water-cooled concentric plate fuel elements, positioned inside a beryllium reflector with a complex hyperboloid arrangement of test holes. The objective of this paper is to validate the MCNP and ORIGEN-S three-dimensional (3-D) model for reactivity predictions of the entire BR2 core during reactor operation. We employ the Monte Carlo code MCNP-4C to evaluate the effective multiplication factor k_eff and the 3-D space-dependent specific power distribution. The one-dimensional code ORIGEN-S is used to calculate the isotopic fuel depletion versus burnup and to prepare a database with depleted fuel compositions. The approach taken is to evaluate the 3-D power distribution at each time step and, along with the database, to evaluate the 3-D isotopic fuel depletion at the next step and to deduce the corresponding shim rod positions of the reactor operation. The capabilities of both codes are fully exploited without constraints on the number of involved isotope depletion chains or an increase of the computational time. The reactor has a complex operation, with important shutdowns between cycles, and its reactivity is strongly influenced by poisons, mainly ³He and ⁶Li from the beryllium reflector, and the burnable absorbers ¹⁴⁹Sm and ¹⁰B in the fresh UAl_x fuel. The computational predictions for the shim rod positions at various restarts are within 0.5 $ (β_eff = 0.0072).

Kalcheva, Silva; Koonen, Edgar; Ponsard, Bernard [SCK-CEN (Belgium)]

2005-08-15

275

NASA Astrophysics Data System (ADS)

A method of modelling the dynamic motion of multileaf collimators (MLCs) for intensity-modulated radiation therapy (IMRT) was developed and implemented into the Monte Carlo simulation. The simulation of the dynamic MLCs (DMLCs) was based on randomizing leaf positions during a simulation so that the number of particle histories being simulated for each possible leaf position was proportional to the monitor units delivered to that position. This approach was incorporated into an EGS4 Monte Carlo program, and was evaluated in simulating the DMLCs for Varian accelerators (Varian Medical Systems, Palo Alto, CA, USA). The MU index of each segment, which was specified in the DMLC-control data, was used to compute the cumulative probability distribution function (CPDF) for the leaf positions. This CPDF was then used to sample the leaf positions during a real-time simulation, which allowed for either step-and-shoot or sweeping-leaf motion in the beam delivery. Dose intensity maps for IMRT fields were computed using the above Monte Carlo method, with its accuracy verified by film measurements. The DMLC simulation improved the operational efficiency by eliminating the need to simulate multiple segments individually. More importantly, the dynamic motion of the leaves could be simulated more faithfully by using the above leaf-position sampling technique in the Monte Carlo simulation.
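The CPDF sampling step described above can be sketched as follows: histories are apportioned to control points in proportion to the monitor units delivered between them. The MU values and helper names are invented for illustration and are not from the paper's EGS4 code.

```python
import bisect
import random

def build_cpdf(mu_index):
    """Cumulative probability over DMLC control points from MU indices."""
    total = mu_index[-1]
    return [m / total for m in mu_index]

def sample_segment(cpdf, rng=random.Random(3)):
    """Sample a control-point index so that the number of histories per
    segment is proportional to the MU delivered in that segment."""
    return bisect.bisect_left(cpdf, rng.random())

# Illustrative DMLC control file fragment: cumulative MU per control point.
mu_index = [0.0, 0.2, 0.5, 0.9, 1.0]
cpdf = build_cpdf(mu_index)
counts = [0] * len(mu_index)
for _ in range(1000):
    counts[sample_segment(cpdf)] += 1
```

For each sampled control point, the simulation would then set every leaf to its position at that point (or interpolate between neighbouring points for sweeping-leaf delivery) before transporting the history.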

Liu, H. Helen; Verhaegen, Frank; Dong, Lei

2001-09-01

276

A ground state potential energy surface for H2 using Monte Carlo methods

Using variational Monte Carlo and a simple explicitly correlated wave function we have computed the Born–Oppenheimer energy of the H2 ground state (X ¹Σg⁺) at 24 internuclear distances. We have also calculated the diagonal correction to the Born–Oppenheimer approximation and the lowest-order relativistic corrections at each distance using variational Monte Carlo techniques. The nonadiabatic values are evaluated from numerical derivatives
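The variational Monte Carlo machinery is easy to illustrate on the one-electron hydrogen atom, a much simpler stand-in for the explicitly correlated H2 wave function used above. With the exact trial exponent the local energy is constant, which makes a convenient check.

```python
import math
import random

def vmc_hydrogen(alpha, n_steps=20000, step=0.5, rng=random.Random(5)):
    """Variational Monte Carlo for the hydrogen-atom ground state.

    Trial wave function psi = exp(-alpha * r); its local energy is
    E_L = -alpha**2 / 2 + (alpha - 1) / r  (atomic units).
    Walkers sample |psi|^2 via a Metropolis random walk.
    """
    x = [1.0, 0.0, 0.0]
    r = 1.0
    e_sum = 0.0
    for _ in range(n_steps):
        prop = [xi + rng.uniform(-step, step) for xi in x]
        rp = math.sqrt(sum(c * c for c in prop))
        # Metropolis test on |psi|^2 = exp(-2 * alpha * r).
        if rng.random() < math.exp(-2.0 * alpha * (rp - r)):
            x, r = prop, rp
        e_sum += -0.5 * alpha * alpha + (alpha - 1.0) / r
    return e_sum / n_steps

energy = vmc_hydrogen(1.0)
```

With alpha = 1 the local energy is -0.5 hartree at every sampled point, so the estimator has zero variance; for other alpha the spread of E_L is what drives the variational optimization.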

S. A. Alexander; R. L. Coldwell

2004-01-01

277

A ground state potential energy surface for H2 using Monte Carlo methods

Using variational Monte Carlo and a simple explicitly correlated wave function we have computed the Born-Oppenheimer energy of the H2 ground state (X 1Sigmag+) at 24 internuclear distances. We have also calculated the diagonal correction to the Born-Oppenheimer approximation and the lowest-order relativistic corrections at each distance using variational Monte Carlo techniques. The nonadiabatic values are evaluated from numerical derivatives

S. A. Alexander; R. L. Coldwell

2004-01-01

278

Structural properties of sodium microclusters (n=4-34) using a Monte Carlo growth method

The structural and electronic properties of small sodium clusters are investigated using a distance-dependent extension of the tight-binding (Hückel) model and a Monte Carlo growth algorithm for the search of the lowest energy isomers. The efficiency and advantages of the Monte Carlo growth algorithm are discussed and the building scheme of sodium microclusters around constituting seeds is explained in detail.

Romuald Poteau; Fernand Spiegelmann

1993-01-01

279

Inverse Monte-Carlo and Demon Methods for Effective Polyakov Loop Models of SU(N)-YM

We study effective Polyakov loop models for SU(N) Yang-Mills theories at finite temperature. In particular, effective models for SU(3) YM with an additional adjoint Polyakov loop potential are considered. The rich phase structure, including a center and an anti-center directed phase, is reproduced with an effective model utilizing the inverse Monte-Carlo method. The demon method, as a possibility to obtain the effective models' couplings, is compared to the method of Schwinger-Dyson equations. Thermalization effects of the microcanonical and canonical demon methods are analyzed. Finally, the elaborate canonical demon method is applied to the finite temperature SU(4) YM phase transition.

Christian Wozar; Tobias Kaestner; Bjoern H. Wellegehausen; Andreas Wipf; Thomas Heinzl

2008-08-29

280

Adaptive {delta}f Monte Carlo Method for Simulation of RF-heating and Transport in Fusion Plasmas

Essential for modeling heating and transport of fusion plasmas is determining the distribution function of the plasma species. Characteristic of RF-heating is the creation of particle distributions with a high-energy tail. In the high-energy region the deviation from a Maxwellian distribution is large, while in the low-energy region the distribution is close to a Maxwellian due to the velocity dependence of the collision frequency. Because of geometry and orbit topology, Monte Carlo methods are frequently used. To avoid simulating the thermal part, δf methods are beneficial. Here we present a new δf Monte Carlo method with an adaptive scheme for reducing the total variance and sources, suitable for calculating the distribution function for RF-heating.

Hoeoek, J.; Hellsten, T. [Fusion Plasma Physics, School of Electrical Engineering, Royal Institute of Technology (KTH), SE-100 44, Stockholm, Association VR-Euratom (Sweden)

2009-11-26

281

NASA Astrophysics Data System (ADS)

Our study aims to design a useful neutron signature characterization device based on ³He detectors, a standard neutron detection methodology used in homeland security applications. Research work involved simulation of the generation, transport, and detection of the leakage radiation from Special Nuclear Materials (SNM). To accomplish the research goals, we use a new methodology to fully characterize a standard "1-Ci" Plutonium-Beryllium (Pu-Be) neutron source based on 3-D computational radiation transport methods, employing both deterministic SN and Monte Carlo methodologies. Computational model findings were subsequently validated through experimental measurements. The achieved results allowed us to design, build, and laboratory-test a Nickel composite alloy shield that enables the neutron leakage spectrum from a standard Pu-Be source to be transformed, through neutron scattering interactions in the shield, into a very close approximation of the neutron spectrum leaking from a large, subcritical mass of Weapons Grade Plutonium (WGPu) metal. This source will make possible testing with a nearly exact reproduction of the neutron spectrum from a 6.67 kg WGPu mass equivalent, but without the expense or risk of testing detector components with real materials. Moreover, over thirty moderator materials were studied in order to characterize their neutron energy filtering potential. Specific focus was placed on establishing the limits of ³He spectroscopy using ideal filter materials. To demonstrate our methodology, we present the optimally detected spectral differences between SNM materials (Plutonium and Uranium), metal and oxide, using ideal filter materials. Finally, using knowledge gained from previous studies, the design of a ³He spectroscopy neutron detector system, simulated entirely via computational methods, is proposed to resolve the spectra from SNM neutron sources of high interest.
This was accomplished by replacing ideal filters with real materials, and comparing reaction rates with similar data from the ideal material suite.

Ghita, Gabriel M.

282

The Unified Monte Carlo method (UMC) has been suggested to avoid certain limitations and approximations inherent to the well-known Generalized Least Squares (GLS) method of nuclear data evaluation. This contribution reports on an investigation of the performance of the UMC method in comparison with the GLS method. This is accomplished by applying both methods to simple examples with few input values that were selected to explore various features of the evaluation process that impact upon the quality of an evaluation. Among the issues explored are: i) convergence of UMC results with the number of Monte Carlo histories and the ranges of sampled values; ii) a comparison of Monte Carlo sampling using the Metropolis scheme and a brute force approach; iii) the effects of large data discrepancies; iv) the effects of large data uncertainties; v) the effects of strong or weak model or experimental data correlations; and vi) the impact of ratio data and integral data. Comparisons are also made of the evaluated results for these examples when the input values are first transformed to comparable logarithmic values prior to performing the evaluation. Some general conclusions that are applicable to more realistic evaluation exercises are offered.

Capote, Roberto [Nuclear Data Section, International Atomic Energy Agency, P.O. Box 100, Wagramer Strasse 5, A-1400 Vienna (Austria)], E-mail: Roberto.CapoteNoy@iaea.org; Smith, Donald L. [Argonne National Laboratory, 1710 Avenida del Mundo, Coronado, California 92118-3073 (United States)

2008-12-15

283

The need to choose appropriate interaction models is among the major disadvantages of conventional methods such as molecular dynamics and Monte Carlo simulations. On the other hand, the so-called reverse Monte Carlo (RMC) method, being based on experimental data, can be applied without any interatomic and/or intermolecular interaction potentials. The RMC results, however, are accompanied by artificial satellite peaks. To remedy this problem, we use an extension of the RMC algorithm which introduces an energy penalty term into the acceptance criteria. This method is referred to as the hybrid reverse Monte Carlo (HRMC) method. The idea of this paper is to test the validity of a combined Coulomb and Lennard-Jones potential model in the fluoride glass system BaMnMF₇ (M = Fe, V) using the HRMC method. The results show good agreement between experimental and calculated characteristics, as well as a meaningful improvement in the partial pair distribution functions. We suggest that this model should be used in calculating the structural properties and in describing the average correlations between components of fluoride glass or similar systems. We also suggest that HRMC could be useful as a tool for testing interaction potential models, as well as for conventional applications.
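The energy-penalized acceptance criterion that distinguishes HRMC from plain RMC can be written schematically as below. The relative weighting of the two terms is an assumption for illustration, not the exact form used in the paper.

```python
import math
import random

def hrmc_accept(d_chi2, d_energy, kT, weight, rng=random.Random(4)):
    """Schematic hybrid reverse Monte Carlo acceptance test.

    A trial move changing the fit to the diffraction data by d_chi2 and
    the potential energy by d_energy is accepted with probability
    min(1, exp(-d_chi2 / 2 - weight * d_energy / kT)): the usual RMC
    chi-squared criterion plus an energy penalty term.
    """
    arg = -0.5 * d_chi2 - weight * d_energy / kT
    return arg >= 0.0 or rng.random() < math.exp(arg)

# A move that improves both the fit and the energy is always accepted.
accepted = hrmc_accept(d_chi2=-1.0, d_energy=-0.2, kT=0.05, weight=1.0)
```

Setting weight to zero recovers plain RMC, which is where the artificial satellite peaks mentioned above arise; the energy term suppresses such unphysical configurations.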

S. M. Mesli; M. Habchi; M. Kotbi; H. Xu

2013-03-25

284

NASA Astrophysics Data System (ADS)

The Unified Monte Carlo method (UMC) has been suggested to avoid certain limitations and approximations inherent to the well-known Generalized Least Squares (GLS) method of nuclear data evaluation. This contribution reports on an investigation of the performance of the UMC method in comparison with the GLS method. This is accomplished by applying both methods to simple examples with few input values that were selected to explore various features of the evaluation process that impact upon the quality of an evaluation. Among the issues explored are: i) convergence of UMC results with the number of Monte Carlo histories and the ranges of sampled values; ii) a comparison of Monte Carlo sampling using the Metropolis scheme and a brute force approach; iii) the effects of large data discrepancies; iv) the effects of large data uncertainties; v) the effects of strong or weak model or experimental data correlations; and vi) the impact of ratio data and integral data. Comparisons are also made of the evaluated results for these examples when the input values are first transformed to comparable logarithmic values prior to performing the evaluation. Some general conclusions that are applicable to more realistic evaluation exercises are offered.

Capote, Roberto; Smith, Donald L.

2008-12-01

285

Forward-Weighted CADIS Method for Variance Reduction of Monte Carlo Reactor Analyses

Current state-of-the-art tools and methods used to perform 'real' commercial reactor analyses use high-fidelity transport codes to produce few-group parameters at the assembly level for use in low-order methods applied at the core level. Monte Carlo (MC) methods, which allow detailed and accurate modeling of the full geometry and energy details and are considered the 'gold standard' for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the several-decade-old methodology used in current practice. However, the prohibitive computational requirements associated with obtaining fully converged system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. A goal of current research at Oak Ridge National Laboratory (ORNL) is to change this paradigm by enabling the direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome is the slow non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, research has focused on development in the following two areas: (1) a hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The focus of this paper is limited to the first area mentioned above. 
It describes the FW-CADIS method applied to variance reduction of MC reactor analyses and provides initial results for calculating group-wise fluxes throughout a generic 2-D pressurized water reactor (PWR) quarter core model.

Wagner, John C [ORNL]; Mosher, Scott W [ORNL]

2010-01-01

286

We report a new version of the diffusion Monte Carlo (DMC) method, based on coherent-state quantum mechanics. Randomly selected grids of coherent states in phase space are used to obtain numerical imaginary time solutions of the Schrödinger equation, with an iterative refinement technique to improve the quality of the Monte Carlo grid. Accurate results were obtained, for the appropriately symmetrized

Dmitrii V. Shalashilin; Mark S. Child

2005-01-01

287

Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods

NASA Astrophysics Data System (ADS)

During extreme-Mach number reentry into Earth's atmosphere, spacecraft experience hypersonic non-equilibrium flow conditions that dissociate molecules and ionize atoms. Such situations occur behind a shock wave, leading to high temperatures, which have an adverse effect on the thermal protection system and radar communications. Since the electronic energy levels of gaseous species are strongly excited for high Mach number conditions, the radiative contribution to the total heat load can be significant. In addition, the radiative heat source within the shock layer may affect the internal energy distribution of dissociated and weakly ionized gas species and the number density of ablative species released from the surface of vehicles. Due to radiation, the total heat load on the heat shield surface of the vehicle may be altered beyond mission tolerances. Therefore, in the design process of spacecraft the effect of radiation must be considered, and radiation analyses coupled with flow solvers have to be implemented to improve reliability during the vehicle design stage. To perform the first stage of radiation analyses coupled with gas dynamics, efficient databasing schemes for emission and absorption coefficients were developed to model radiation from hypersonic, non-equilibrium flows. For bound-bound transitions, spectral information including the line-center wavelength and assembled parameters for efficient calculations of emission and absorption coefficients are stored for typical air plasma species. Since the flow is non-equilibrium, a rate equation approach including both collisional and radiatively induced transitions was used to calculate the electronic state populations, assuming quasi-steady-state (QSS). The Voigt line shape function was assumed for modeling the line broadening effect. The accuracy and efficiency of the databasing scheme was examined by comparing results of the databasing scheme with those of NEQAIR for the Stardust flowfield.
An accuracy of approximately 1% was achieved, with an efficiency about three times faster than the NEQAIR code. To perform accurate and efficient analyses of chemically reacting flowfield-radiation interactions, the direct simulation Monte Carlo (DSMC) and the photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield-radiation coupling from transitional to peak heating freestream conditions. The non-catalytic and fully catalytic surface conditions were modeled, and good agreement of the stagnation-point convective heating between DSMC and continuum fluid dynamics (CFD) calculations under the assumption of a fully catalytic surface was achieved. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC-FV-PMC simulations with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium are presented. It is found that except for the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both heavy particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC-FV-PMC coupled simulations are compared with the previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport. The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rate in the excitation/de-excitation processes and the radiative transport equation (RTE) must be coupled simultaneously to account for non-local effects.
The QSS model is presented to predict the electronic state populations of radiating gas species taking into account non-local radiation. The definition of the escape factor which is depende

Sohn, Ilyoup

288

NASA Astrophysics Data System (ADS)

We study a simulation of spectral reflectance in human skin tissue using ray-tracing software and the Monte Carlo method on the basis of a graphics processing unit (GPU). An analysis of light propagation using ray-tracing software has several advantages in that it can readily reproduce the complex structure of skin tissue, such as grooves of the skin surface or the boundaries of skin tissue layers, and perform optical simulation with optical elements close to those in a real experiment using only the ray-tracing software. Meanwhile, it has a serious disadvantage in that the simulation time is extremely long because the algorithm is CPU-based. To overcome this disadvantage, we propose a simulation method using the ray-tracing software and a GPU-based Monte Carlo simulation (MCS). The results of the simulation are shown and discussed.

Funamizu, Hideki; Maeda, Takaaki; Sasaki, Shoko; Nishidate, Izumi; Aizu, Yoshihisa

2014-05-01

289

Nanothermodynamics of large iron clusters by means of a flat histogram Monte Carlo method

NASA Astrophysics Data System (ADS)

The thermodynamics of iron clusters of various sizes, from 76 to 2452 atoms, typical of the catalyst particles used for carbon nanotube growth, has been explored by a flat histogram Monte Carlo (MC) algorithm (called the ?-mapping), developed by Soudan et al. [J. Chem. Phys. 135, 144109 (2011), Paper I]. This method provides the classical density of states in configurational space, g_p(E_p), in terms of the potential energy of the system, with good and well-controlled convergence properties, particularly in the melting phase transition zone, which is of interest in this work. To describe the system, an iron potential has been implemented, called "corrected EAM" (cEAM), which approximates the MEAM potential of Lee et al. [Phys. Rev. B 64, 184102 (2001)] with an accuracy better than 3 meV/at and a five times larger computational speed. The main simplification concerns the angular dependence of the potential, with a small impact on accuracy, while the screening coefficients S_ij are computed exactly with a fast algorithm. With this potential, ergodic explorations of the clusters can be performed efficiently in a reasonable computing time, at least in the upper half of the solid zone and above. Problems of ergodicity exist in the lower half of the solid zone, but routes to overcome them are discussed. The solid-liquid (melting) phase transition temperature Tm is plotted as a function of the cluster atom number Nat. The standard linear dependence on Nat^(-1/3) (the Pawlow law) is observed for Nat > 300, allowing an extrapolation up to the bulk metal at 1940 ± 50 K. For Nat < 150, a strong divergence from the Pawlow law is observed. The melting transition, which begins at the surface, is characterized by a Lindemann-Berry index and an atomic density analysis. Several new features are obtained for the thermodynamics of the cEAM clusters, compared to the Rydberg pair potential clusters studied in Paper I.
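The Pawlow-law extrapolation described in this abstract can be sketched numerically. The snippet below fits T_m(N) = T_bulk - c * N^(-1/3) by least squares; the cluster sizes and melting temperatures are hypothetical illustrative values (generated from the model itself so the fit can be checked), not data from the paper.

```python
import numpy as np

def fit_pawlow(n_atoms, t_melt):
    """Least-squares fit of the Pawlow law T_m(N) = T_bulk - c * N**(-1/3).
    Returns (T_bulk, c); T_bulk is the intercept at N**(-1/3) -> 0."""
    x = np.asarray(n_atoms, dtype=float) ** (-1.0 / 3.0)
    slope, intercept = np.polyfit(x, np.asarray(t_melt, dtype=float), 1)
    return intercept, -slope

# Hypothetical melting temperatures (illustrative only), generated from
# T_bulk = 1940 K and c = 4000 K so the fit can be verified.
sizes = np.array([336.0, 531.0, 1088.0, 2452.0])
temps = 1940.0 - 4000.0 * sizes ** (-1.0 / 3.0)
t_bulk, c = fit_pawlow(sizes, temps)
print(round(t_bulk, 1), round(c, 1))
```

The bulk-limit melting temperature is simply the intercept of a straight-line fit in the variable N^(-1/3).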

Basire, M.; Soudan, J.-M.; Angelié, C.

2014-09-01

290

Geometrically-compatible 3-D Monte Carlo and discrete-ordinates methods

This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The purpose of this project was two-fold. The first purpose was to develop a deterministic discrete-ordinates neutral-particle transport scheme for unstructured tetrahedral spatial meshes, and implement it in a computer code. The second purpose was to modify the MCNP Monte Carlo radiation transport code to use adjoint solutions from the tetrahedral-mesh discrete-ordinates code to reduce the statistical variance of Monte Carlo solutions via a weight-window approach. The first task has resulted in a deterministic transport code that is much more efficient for modeling complex 3-D geometries than any previously existing deterministic code. The second task has resulted in a powerful new capability for dramatically reducing the cost of difficult 3-D Monte Carlo calculations.

Morel, J.E.; Wareing, T.A.; McGhee, J.M.; Evans, T.M.

1998-12-31

291

NASA Astrophysics Data System (ADS)

The static disorder and lattice dynamics of crystalline materials can be efficiently studied using reverse Monte Carlo simulations of extended x-ray absorption fine structure spectra (EXAFS). In this work we demonstrate the potentiality of this method on an example of copper tungstate CuWO4. The simultaneous analysis of the Cu K and W L3 edges EXAFS spectra allowed us to follow local structure distortion as a function of temperature.

Timoshenko, Janis; Anspoks, Andris; Kalinko, Aleksandr; Kuzmin, Alexei

2014-04-01

292

The TSUNAMI computational sequences currently in the SCALE 5 code system provide an automated approach to performing sensitivity and uncertainty analysis for eigenvalue responses, using either one-dimensional discrete ordinates or three-dimensional Monte Carlo methods. This capability has recently been expanded to address eigenvalue-difference responses such as reactivity changes. This paper describes the methodology and presents results obtained for an example advanced CANDU reactor design. (authors)

Williams, M. L.; Gehin, J. C.; Clarno, K. T. [Oak Ridge National Laboratory, Bldg. 5700, P.O. Box 2008, Oak Ridge, TN 37831-6170 (United States)

2006-07-01

293

A Mesh-Input-Free On-The-Fly Source Convergence Indicator in Monte Carlo Power Method

A mesh-input-free on-the-fly source convergence indicator is proposed for Monte Carlo source iteration by the power method. The indicator consists of a two-step computation based on the centers of the source particles. In the first step, the geometric center of all source particles is computed. The spatial domain is then divided into eight subdomains by the xy, yz, and zx planes that intersect at
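The first step of the indicator can be sketched directly; since the abstract is truncated, the second step below (classifying particles into the eight octants cut out by the three planes through the center) is an assumption about the intended construction, not a reproduction of the paper's method.

```python
import numpy as np

def source_center_octants(points):
    """First step of the indicator: the geometric center of all source
    particles. The second step sketched here is an assumption (the
    abstract is truncated): classify each particle into one of the
    eight subdomains cut out by the xy, yz, and zx planes through that
    center, and count the occupancy of each octant."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    flags = (pts - center >= 0.0).astype(int)   # per-axis half-space flags
    octant = flags @ np.array([4, 2, 1])        # encode each site as 0..7
    counts = np.bincount(octant, minlength=8)
    return center, counts

pts = [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1),
       (-1, -1, 1), (-1, 1, -1), (1, -1, -1), (-1, -1, -1)]
center, counts = source_center_octants(pts)
print(center, counts)  # balanced source: one particle per octant
```

A strongly unbalanced octant occupancy across iterations would signal that the fission source has not yet converged.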

Taro Ueki; Bryan S. Chapman

2011-01-01

294

The quantum Drude oscillator (QDO) model, which allows many-body polarization and dispersion to be treated both on an equal footing and beyond the dipole limit, is investigated using two approaches to the linear scaling diffusion Monte Carlo (DMC) technique. The first is a general-purpose norm-conserving DMC (NC-DMC) method wherein the number of walkers, N, remains strictly constant thereby

Andrew Jones; Andrew Thompson; Jason Crain; Martin H. Müser; Glenn J. Martyna

2009-01-01

295

Neutrino transport in type II supernovae: Boltzmann solver vs. Monte Carlo method

NASA Astrophysics Data System (ADS)

We have coded a Boltzmann solver based on a finite difference scheme (S_N method) aiming at calculations of neutrino transport in type II supernovae. Close comparison between the Boltzmann solver and a Monte Carlo transport code has been made for realistic atmospheres of post bounce core models under the assumption of a static background. We have also investigated in detail the dependence of the results on the numbers of radial, angular, and energy grid points and the way to discretize the spatial advection term which is used in the Boltzmann solver. A general relativistic calculation has been done for one of the models. We find good overall agreement between the two methods. This gives credibility to both methods which are based on completely different formulations. In particular, the number and energy fluxes and the mean energies of the neutrinos show remarkably good agreement, because these quantities are determined in a region where the angular distribution of the neutrinos is nearly isotropic and they are essentially frozen in later on. On the other hand, because of a relatively small number of angular grid points (which is inevitable due to limitations of the computation time) the Boltzmann solver tends to slightly underestimate the flux factor and the Eddington factor outside the (mean) ``neutrinosphere'' where the angular distribution of the neutrinos becomes highly anisotropic. As a result, the neutrino number (and energy) density is somewhat overestimated in this region. This fact suggests that the Boltzmann solver should be applied to calculations of the neutrino heating in the hot-bubble region with some caution because there might be a tendency to overestimate the energy deposition rate in disadvantageous situations. A comparison shows that this trend is opposite to the results obtained with a multi-group flux-limited diffusion approximation of neutrino transport. 
Employing three different flux limiters, we find that all of them lead to a significant underestimation of the neutrino energy density in the semitransparent regime, and thus must yield too low values for the net neutrino heating (heating minus cooling) in the hot-bubble region. The accuracy of the Boltzmann solver can be improved by using a variable angular mesh to increase the angular resolution in the region where the neutrino distribution becomes anisotropic.

Yamada, Shoichi; Janka, Hans-Thomas; Suzuki, Hideyuki

1999-04-01

296

Spray cooling simulation implementing time scale analysis and the Monte Carlo method

NASA Astrophysics Data System (ADS)

Spray cooling research is advancing the field of heat transfer and heat rejection in high power electronics. Smaller and more capable electronics packages are producing higher amounts of waste heat, along with smaller external surface areas, and the use of active cooling is becoming a necessity. Spray cooling has shown extremely high levels of heat rejection, of up to 1000 W/cm2 using water. Simulations of spray cooling are becoming more realistic, but this comes at a price. A previous researcher has used CFD to successfully model a single 3D droplet impact into a liquid film using the level set method. However, the complicated multiphysics occurring during spray impingement and surface interactions increases computation time to more than 30 days. Parallel processing on a 32-processor system has reduced this time tremendously, but still requires more than a day. The present work uses experimental and computational results in addition to numerical correlations representing the physics occurring on a heated impingement surface. The current model represents the spray behavior of a Spraying Systems FullJet 1/8-g spray nozzle. Typical spray characteristics are as follows: flow rate of 1.05x10^-5 m^3/s, normal droplet velocity of 12 m/s, droplet Sauter mean diameter of 48 μm, and heat flux values ranging from approximately 50--100 W/cm2. This produces non-dimensional numbers of We = 300--1350, Re = 750--3500, Oh = 0.01--0.025. Numerical and experimental correlations have been identified representing crater formation, splashing, film thickness, droplet size, and spatial flux distributions. A combination of these methods has resulted in a Monte Carlo spray impingement simulation model capable of simulating hundreds of thousands of droplet impingements, or approximately one millisecond. A random sequence of droplet impingement locations and diameters is generated, with the proper radial spatial distribution and diameter distribution.
Hence the impingement, lifetime and interactions of the droplet impact craters are tracked versus time within the limitations of the current model. A comparison of results from this code to experimental results shows similar trends in surface behavior and heat transfer values. Three methods have been used to directly compare the simulation results with published experimental data, including: contact line length estimates, empirical heat transfer equation calculations, and non-dimensional Nusselt numbers. A Nusselt number of 55.5 was calculated for experimental values, while a Nu of 16.0 was calculated from the simulation.
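The random sequence of impingement locations and diameters described above can be sketched as follows. The specific distributions here (a uniform disc footprint and a log-normal diameter distribution around the quoted 48 μm mean) are illustrative assumptions, not the measured correlations used in the dissertation.

```python
import math, random

def sample_impingements(n, r_spray=0.005, d_mean=48e-6, d_sigma=0.3):
    """Draw a random sequence of droplet impingement sites and diameters.
    The distributions are illustrative assumptions: impact sites uniform
    over a spray footprint disc of radius r_spray [m] (area-uniform
    sampling needs r = R*sqrt(u), not r = R*u), diameters log-normal
    around d_mean [m] with log-space spread d_sigma."""
    drops = []
    for _ in range(n):
        r = r_spray * math.sqrt(random.random())
        theta = 2.0 * math.pi * random.random()
        d = random.lognormvariate(math.log(d_mean), d_sigma)
        drops.append((r * math.cos(theta), r * math.sin(theta), d))
    return drops

random.seed(1)
drops = sample_impingements(100000)
mean_d = sum(d for _, _, d in drops) / len(drops)
print(f"mean diameter ~ {mean_d * 1e6:.1f} um")
```

Each sampled tuple (x, y, d) would then seed a crater-formation and lifetime correlation in a full spray-impingement simulation.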

Kreitzer, Paul Joseph

297

NASA Astrophysics Data System (ADS)

Two of the primary challenges associated with the neutronic analysis of the Very High Temperature Reactor (VHTR) are accounting for resonance self-shielding in the particle fuel (contributing to the double heterogeneity) and accounting for temperature feedback due to Doppler broadening. The double heterogeneity challenge is addressed by defining a "double heterogeneity factor" (DHF) that allows conventional light water reactor (LWR) lattice physics codes to analyze VHTR configurations. The challenge of treating Doppler broadening is addressed by a new "on-the-fly" methodology that is applied during the random walk process with negligible impact on computational efficiency. Although this methodology was motivated by the need to treat temperature feedback in a VHTR, it is applicable to any reactor design. The on-the-fly Doppler methodology is based on a combination of Taylor and asymptotic series expansions. The type of series representation was determined by investigating the temperature dependence of U238 resonance cross sections in three regions: near the resonance peaks, mid-resonance, and the resonance wings. The coefficients for these series expansions were determined by regressions over the energy and temperature range of interest. The comparison of the broadened cross sections using this methodology with the NJOY cross sections was excellent. A Monte Carlo code was implemented to apply the combined regression model and used to estimate the additional computing cost which was found to be less than 1%. The DHF accounts for the effect of the particle heterogeneity on resonance absorption in particle fuel. The first level heterogeneity posed by the VHTR fuel particles is a unique characteristic that cannot be accounted for by conventional LWR lattice physics codes. On the other hand, Monte Carlo codes can take into account the detailed geometry of the VHTR including resolution of individual fuel particles without performing any type of resonance approximation. 
The DHF, basically a self shielding factor, was found to be weakly dependent on space and fuel depletion. The DHF only depends strongly on the packing fraction in a fuel compact. Therefore, it is proposed that DHFs be tabulated as a function of packing fraction to analyze the heterogeneous fuel in VHTR configuration with LWR lattice physics codes.
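The on-the-fly temperature evaluation described in this abstract can be sketched in simplified form. The snippet below regresses tabulated cross sections at a single energy point onto a short series in powers of sqrt(T) and then evaluates it at arbitrary temperature; the series form, the term count, and the tabulated values are all illustrative assumptions, standing in for the combined Taylor/asymptotic expansions and NJOY data of the dissertation.

```python
import numpy as np

def fit_otf_coeffs(temps, sigmas, n_terms=4):
    """Regress tabulated cross sections sigma(T) at one energy point onto
    a short series in powers of sqrt(T). This is a simplified stand-in
    for the combined Taylor/asymptotic expansions of the dissertation."""
    s = np.sqrt(np.asarray(temps, dtype=float))
    A = np.vander(s, n_terms)  # columns [s^3, s^2, s, 1]
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(sigmas, dtype=float), rcond=None)
    return coeffs

def xs_on_the_fly(coeffs, temp):
    """Evaluate the broadened cross section at an arbitrary temperature
    during the random walk, at the cost of one short polynomial."""
    return float(np.polyval(coeffs, np.sqrt(temp)))

# Hypothetical tabulated values in barns (illustrative, not NJOY output).
temps = [300.0, 600.0, 900.0, 1200.0, 1500.0]
sigmas = [11.2, 9.8, 9.0, 8.5, 8.1]
coeffs = fit_otf_coeffs(temps, sigmas)
val = xs_on_the_fly(coeffs, 1050.0)
print(round(val, 2))
```

Because only a handful of coefficients are stored and evaluated per lookup, the extra cost per collision stays small, consistent with the sub-1% overhead reported above.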

Yesilyurt, Gokhan

298

NASA Astrophysics Data System (ADS)

We present a model-order reduction technique that overcomes the computational burden associated with the application of Monte Carlo methods to the solution of the groundwater flow equation with random hydraulic conductivity. The method is based on the Galerkin projection of the high-dimensional model equations onto a subspace, approximated by a small number of pseudo-optimally chosen basis functions (principal components). To obtain an efficient reduced-order model, we develop an offline algorithm for the computation of the parameter-independent principal components. Our algorithm combines a greedy algorithm for the snapshot selection in the parameter space and an optimal distribution of the snapshots in time. Moreover, we introduce a residual-based estimation of the error associated with the reduced model. This estimation allows a considerable reduction of the number of full system model solutions required for the computation of principal components. We demonstrate the robustness of our methodology by way of numerical examples, comparing the empirical statistics of the ensemble of the numerical solutions obtained using the traditional Monte Carlo method and our reduced model. The numerical results show that our methodology significantly reduces the computational requirements (CPU time and storage) for the solution of the Monte Carlo simulation, ensuring a good approximation of the mean and variance of the head. The analysis of the empirical probability density functions at the observation wells suggests that our reduced model produces good results and is most accurate in the regions with large drawdown.
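The projection step described above can be sketched with a generic POD/Galerkin reduction; the snapshot-selection greedy algorithm and residual-based error estimator of the paper are not reproduced, and the matrix sizes below are illustrative.

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """Principal components (POD modes) of a snapshot matrix whose
    columns are solution fields saved from full-model runs, truncated
    to capture the requested fraction of singular-value energy."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(frac, energy)) + 1
    return u[:, :k]

def reduced_solve(a_full, b_full, basis):
    """Galerkin projection: project the full linear system onto the
    basis, solve the small system, and lift the result back."""
    a_r = basis.T @ a_full @ basis
    b_r = basis.T @ b_full
    return basis @ np.linalg.solve(a_r, b_r)

# Sanity check: if the true solution lies in the span of the snapshots,
# the reduced model reproduces it (sizes here are illustrative).
rng = np.random.default_rng(0)
n = 200
a_full = np.eye(n) + 0.01 * rng.standard_normal((n, n))
snaps = rng.standard_normal((n, 5))
x_true = snaps @ rng.standard_normal(5)
phi = pod_basis(snaps)
x_red = reduced_solve(a_full, a_full @ x_true, phi)
print(bool(np.allclose(x_red, x_true)))
```

In a Monte Carlo setting, each random conductivity realization only requires solving the small projected system, which is the source of the CPU-time and storage savings reported above.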

Pasetto, Damiano; Putti, Mario; Yeh, William W.-G.

2013-06-01

299

Experiments were carried out to investigate the possible use of neutron backscattering for the detection of landmines buried in the soil. Several landmines, buried in a sand-pit, were positively identified. A series of Monte Carlo simulations were performed to study the complexity of the neutron backscattering process and to optimize the geometry of a future prototype. The results of these

C. P. Datema; V. R. Bom; C. W. E. van Eijk

2002-01-01

300

WinBUGS for Population Ecologists: Bayesian Modeling Using Markov Chain Monte Carlo Methods

The computer package WinBUGS is introduced. We first give a brief introduction to Bayesian theory and its implementation using Markov chain Monte Carlo (MCMC) algorithms. We then present three case studies showing how WinBUGS can be used when classical theory is difficult to implement. The first example uses data on white storks from Baden-Württemberg, Germany, to demonstrate
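The MCMC machinery that WinBUGS automates can be illustrated with a minimal hand-written sampler. The model below (a survival probability with a flat prior, so the posterior is a known Beta distribution) and its data are hypothetical, chosen only because the answer can be checked analytically.

```python
import math, random

def metropolis_survival(k, n, n_samples=20000, step=0.1, seed=4):
    """Minimal Metropolis sampler (the kind of MCMC machinery WinBUGS
    automates) for the posterior of a survival probability phi given k
    survivors out of n marked animals under a flat prior; the target is
    then Beta(k+1, n-k+1). The example numbers below are hypothetical."""
    rng = random.Random(seed)
    log_post = lambda p: k * math.log(p) + (n - k) * math.log(1.0 - p)
    phi, chain = 0.5, []
    for _ in range(n_samples):
        prop = phi + rng.uniform(-step, step)
        if 0.0 < prop < 1.0 and math.log(rng.random()) < log_post(prop) - log_post(phi):
            phi = prop
        chain.append(phi)
    return chain

chain = metropolis_survival(k=30, n=50)
mean_phi = sum(chain[2000:]) / len(chain[2000:])
print(f"posterior mean ~ {mean_phi:.2f}")  # analytic Beta(31, 21) mean is 31/52
```

For realistic mark-recapture models the posterior has no closed form, which is exactly when a sampler of this kind (or WinBUGS itself) becomes necessary.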

Olivier Gimenez; Simon J. Bonner; Ruth King; Richard A. Parker; Stephen P. Brooks; Lara E. Jamieson; Vladimir Grosbois; Byron J. T. Morgan; Len Thomas

301

Applied Probability Trust (25 February 2008) MONTE CARLO METHODS FOR SENSITIVITY ANALYSIS OF

Monte Carlo methods for sensitivity analysis of functionals of marked Poisson point processes on a separable metric space M are considered, using importance sampling with respect to the intensity of the process, with applications in stochastic geometry and insurance. Keywords: Importance sampling, Marked Poisson processes, Monte Carlo

Bordenave, Charles

302

Three-dimensional forest light interaction model using a Monte Carlo method

A model for light interaction with forest canopies is presented, based on Monte Carlo simulation of photon transport. A hybrid representation is used to model the discontinuous nature of the forest canopy. Large scale structure is represented by geometric primitives defining shapes and positions of the tree crowns and trunks. Foliage is represented within crowns by volume-averaged parameters describing the

Peter R. J. North

1996-01-01

303

Evaluation of the material assignment method used by a Monte Carlo treatment planning system

An evaluation of the conversion process from Hounsfield units (HU) to material composition in computerised tomography (CT) images, employed by the Monte Carlo based treatment planning system ISOgray™ (DOSIsoft), is presented. A boundary in the HU for the material conversion between “air” and “lung” materials was determined based on a study using 22 patients. The dosimetric consequence of the new

A. Isambert; L. Brualla; D. Lefkopoulos

2009-01-01

304

Current impulse response of thin InP p+-i-n+ diodes using full band structure Monte Carlo method

NASA Astrophysics Data System (ADS)

A random response time model to compute the statistics of the avalanche buildup time of double-carrier multiplication in avalanche photodiodes (APDs) using the full band structure Monte Carlo (FBMC) method is discussed. The effects of the feedback impact ionization process and of dead space on the random response time are included in order to simulate the speed of APDs. The time response of InP p+-i-n+ diodes with a multiplication region of 0.2 μm is presented. Finally, the FBMC model is used to calculate the current impulse response of thin InP p+-i-n+ diodes with multiplication lengths of 0.05 and 0.2 μm using Ramo's theorem [Proc. IRE 27, 584 (1939)]. The simulated current impulse response of the FBMC model is compared to the results of a simple Monte Carlo model.

You, A. H.; Cheang, P. L.

2007-02-01

305

Development of CT scanner models for patient organ dose calculations using Monte Carlo methods

NASA Astrophysics Data System (ADS)

There is a serious and growing concern about the CT dose delivered by diagnostic CT examinations or image-guided radiation therapy imaging procedures. To better understand and to accurately quantify radiation dose due to CT imaging, Monte Carlo based CT scanner models are needed. This dissertation describes the development, validation, and application of detailed CT scanner models including a GE LightSpeed 16 MDCT scanner and two image guided radiation therapy (IGRT) cone beam CT (CBCT) scanners, kV CBCT and MV CBCT. The modeling process considered the energy spectrum, beam geometry and movement, and bowtie filter (BTF). The methodology of validating the scanner models using reported CTDI values was also developed and implemented. Finally, the organ doses to different patients undergoing CT scan were obtained by integrating the CT scanner models with anatomically-realistic patient phantoms. The tube current modulation (TCM) technique was also investigated for dose reduction. It was found that for RPI-AM, thyroid, kidneys and thymus received largest dose of 13.05, 11.41 and 11.56 mGy/100 mAs from chest scan, abdomen-pelvis scan and CAP scan, respectively using 120 kVp protocols. For RPI-AF, thymus, small intestine and kidneys received largest dose of 10.28, 12.08 and 11.35 mGy/100 mAs from chest scan, abdomen-pelvis scan and CAP scan, respectively using 120 kVp protocols. The dose to the fetus of the 3 month pregnant patient phantom was 0.13 mGy/100 mAs and 0.57 mGy/100 mAs from the chest and kidney scan, respectively. For the chest scan of the 6 month patient phantom and the 9 month patient phantom, the fetal doses were 0.21 mGy/100 mAs and 0.26 mGy/100 mAs, respectively. For MDCT with TCM schemas, the fetal dose can be reduced with 14%-25%. To demonstrate the applicability of the method proposed in this dissertation for modeling the CT scanner, additional MDCT scanner was modeled and validated by using the measured CTDI values. 
These results demonstrated that the CT scanner models in this dissertation were versatile and accurate tools for estimating dose to different patient phantoms undergoing various CT procedures. The organ doses from kV and MV CBCT were also calculated. This dissertation finally summarizes areas where future research can be performed including MV CBCT further validation and application, dose reporting software and image and dose correlation study.

Gu, Jianwei

306

AN ASSESSMENT OF MCNP WEIGHT WINDOWS

The weight window variance reduction method in the general-purpose Monte Carlo N-Particle radiation transport code MCNP™ has recently been rewritten. In particular, it is now possible to generate weight window importance functions on a superimposed mesh, eliminating the need to subdivide geometries for variance reduction purposes. Our assessment addresses the following questions: (1) Does the new MCNP4C treatment utilize weight
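The weight-window game itself is simple to sketch. The snippet below is a generic textbook version of the technique (split above the window, Russian roulette below it), not MCNP's exact implementation; the survival weight and the split cap are illustrative choices.

```python
import random

def apply_weight_window(weight, w_low, w_up, rng=random.random):
    """Generic weight-window game (a sketch of the standard technique,
    not MCNP's exact implementation): particles above the window are
    split into comparable pieces; particles below it play Russian
    roulette toward an assumed survival weight inside the window.
    Returns the list of surviving particle weights (possibly empty)."""
    if weight > w_up:
        n = min(int(weight / w_up) + 1, 10)   # split, capped at 10 pieces
        return [weight / n] * n
    if weight < w_low:
        w_surv = 0.5 * (w_low + w_up)         # assumed survival weight
        return [w_surv] if rng() < weight / w_surv else []
    return [weight]                           # inside the window: unchanged

random.seed(2)
print(apply_weight_window(5.0, 0.5, 2.0))  # splits 5.0 into three of 5/3
print(apply_weight_window(1.0, 0.5, 2.0))  # in-window weight is untouched
```

Both branches preserve the expected weight, which is what keeps the game unbiased while controlling the weight spread in important regions.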

J. S. HENDRICKS; C. N. CULBERTSON

2000-01-01

307

The thermal neutron fluence rate is determined by the gold activation method. The absolute activity of the irradiated gold foil is measured by a 4πβ-γ coincidence counter. Using this method, corrections for the detection of conversion electrons and gamma rays by the 4πβ counter are very important for obtaining an accurate absolute activity. In this work, Monte Carlo simulations were performed to derive the correction factor K. Absolute measurements of (198)Au activity for Au foils of 20-100 μm thickness were performed to verify the calculation model of the 4πβ-γ coincidence counting system. PMID:21406431

Nishiyama, Jun; Harano, Hideki; Matsumoto, Tetsuro; Sato, Yasushi; Uritani, Akira; Kudo, Katsuhisa

2012-01-01

308

In this paper we model a Broad Energy Germanium (BEGe) detector using the Monte Carlo simulation code PENELOPE [1, 2] and determine its efficiency. The simulated geometry consists of a point source located close to the detector, as well as volume sources with cylindrical geometry. A comparison of the simulation is made to experimental results as well as to analytical calculations.

Stefanakis, N

2014-01-01

309

NASA Astrophysics Data System (ADS)

Monte Carlo Ray Tracing (MCRT) is a versatile method for simulating the radiative transfer regime of the Solar-Atmosphere-Landscape system. Moreover, it can be used to compute the radiation distribution over a complex landscape configuration, for example a forest area. Owing to its robustness to alterations of a complex 3-D scene, the MCRT method is also employed to simulate the canopy radiative transfer regime as a validation source for other radiative transfer models. In MCRT modeling within vegetation, one basic step is setting up the canopy scene. A 3-D scanning application can represent the canopy structure as accurately as possible, but it is time consuming. A botanical growth function can model the growth of a single tree, but cannot express the interaction among trees. An L-system is also a functionally controlled tree growth simulation model, but it requires a large amount of memory; additionally, it only models current tree patterns rather than tree growth while the radiative transfer regime is simulated. It is therefore more practical to represent a single canopy by a regular solid such as an ellipsoid, cone or cylinder. Considering the allelopathy phenomenon seen in some open-forest optical images, each tree repels other trees within its own `domain'. Based on this assumption, a stochastic circle packing algorithm is developed in this study to generate the 3-D canopy scene. The canopy coverage (%) and the tree count (N) of the 3-D scene are specified first, in keeping with the random open-forest image. We then randomly generate each canopy radius (rc), set the circle center coordinates on the XY-plane, and keep the circles separated from each other by the circle packing algorithm. To model the individual trees, we employ Ishikawa's tree growth regression model to set the tree parameters, including the DBH (dt) and tree height (H).
However, the relationship between canopy height (Hc) and trunk height (Ht) is unclear to us; we assume the ratio of Hc to Ht to be a random number in the interval from 2.0 to 3.0. De Wit's spherical leaf angle distribution function was used within the canopy for acceleration. Finally, we simulate the open-forest albedo using the MCRT method. The MCRT algorithm of this study is summarized as follows. (1) Initialize a photon with a position (r0), source direction (Ω0) and intensity (I0). (2) Simulate the free path (s) of the photon in the state (r', Ω', I') within the canopy. (3) Calculate the new position of the photon, r = r' + sΩ'. (4) Determine the new scattering direction (Ω) after the collision at r, and then calculate the new intensity I = Γ_L(Ω_L, Ω' → Ω) I'. (5) Accumulate the intensity I of a photon escaping from the top boundary of the 3-D scene; otherwise repeat from step (2) until I is smaller than a threshold. (6) Repeat from step (1) for each photon. We test the model on four different simulated open forests, and its effectiveness is demonstrated in detail.
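A drastically simplified, one-dimensional analogue of the photon loop described in this abstract can be sketched as follows. The layer optical depth, the single-scattering albedo, and the isotropic redirection are illustrative assumptions standing in for the 3-D canopy scene and the leaf phase function of the study.

```python
import math, random

def mcrt_albedo(n_photons, tau_top=1.0, omega=0.8, seed=0):
    """Toy 1-D analogue of the MCRT loop: photons enter the top of a
    homogeneous scattering layer of optical depth tau_top, take
    exponential free paths, are attenuated by the single-scattering
    albedo omega at each collision (isotropic redirection stands in
    for the leaf phase function), and deposit their remaining
    intensity when they escape through the top."""
    rng = random.Random(seed)
    escaped = 0.0
    for _ in range(n_photons):
        tau, mu, i = 0.0, 1.0, 1.0              # (1) initialize r0, Omega0, I0
        while i > 1e-4:                          # (5) intensity threshold
            s = -math.log(1.0 - rng.random())    # (2) sample the free path
            tau += s * mu                        # (3) move to the new position
            if tau <= 0.0:                       # escaped through the top
                escaped += i
                break
            if tau >= tau_top:                   # lost through the bottom
                break
            mu = 2.0 * rng.random() - 1.0        # (4) new direction ...
            i *= omega                           # ... and attenuated intensity
    return escaped / n_photons                   # (6) average over photons

albedo = mcrt_albedo(20000)
print(f"albedo ~ {albedo:.3f}")
```

The full 3-D model replaces the scalar optical depth with ray tracing through the packed canopy scene and the isotropic redirection with the leaf scattering function.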

Jin, Shengye; Tamura, Masayuki

2013-10-01

310

Vibrational-rotational energies of all H2 isotopomers using Monte Carlo methods

Using variational Monte Carlo techniques, we have computed several of the lowest rotational-vibrational energies of all the hydrogen molecule isotopomers (H2, HD, HT, D2, DT, and T2). These calculations do not require the excited states to be explicitly orthogonalized. We have examined both the usual Gaussian wave function form and a rapidly convergent Padé form. The high-quality potential

S. A. Alexander; R. L. Coldwell

2006-01-01

311

A study of nucleic acid base-stacking by the Monte Carlo method: Extended cluster approach

The adenine-thymine (AT), adenine-uracil (AU) and guanine-cytosine (GC) base associates in clusters containing 400 water molecules were studied using a newly implemented Metropolis Monte Carlo algorithm based on the extended cluster approach. Starting from the hydrogen-bonded Watson-Crick geometries, all three base pairs are transformed into more favorable stacked configurations during the simulation. The obtained results show, for the first time,

Victor I. Danilov; Vladimir V. Dailidonis; Tanja van Mourik; Herbert A. Früchtl

2011-01-01

312

Object tracking in infrared image sequence by Monte-Carlo method

This paper presents a robust tracking algorithm for infrared objects in image sequences, based on the particle filter. The particle filter is a powerful tool for tracking, especially under non-Gaussian conditions, but the selection of samples is still a challenging problem. According to the frame-to-frame correlation, two basic assumptions are proposed. Borrowing the idea from Sequential Importance Sampling, Monte-Carlo
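A generic sequential-importance-resampling step of the particle filter mentioned above can be sketched as follows. The random-walk motion model and Gaussian likelihood are illustrative assumptions; the paper's infrared-specific observation model is not reproduced.

```python
import math, random

def sir_step(particles, weights, likelihood, motion_std=1.0, rng=None):
    """One sequential-importance-resampling step of a particle filter:
    propagate samples through a random-walk motion model, reweight by
    the observation likelihood, then resample to fight weight
    degeneracy."""
    rng = rng or random
    moved = [x + rng.gauss(0.0, motion_std) for x in particles]
    w = [wi * likelihood(xi) for wi, xi in zip(weights, moved)]
    total = sum(w) or 1.0
    w = [wi / total for wi in w]
    n = len(moved)
    resampled = rng.choices(moved, weights=w, k=n)
    return resampled, [1.0 / n] * n

# Illustrative run: track a stationary target at x = 5 with a Gaussian
# likelihood, starting from particles spread uniformly over [0, 10].
rng = random.Random(1)
likelihood = lambda x: math.exp(-0.5 * (x - 5.0) ** 2)
pts = [rng.uniform(0.0, 10.0) for _ in range(500)]
wts = [1.0 / 500] * 500
for _ in range(10):
    pts, wts = sir_step(pts, wts, likelihood, rng=rng)
estimate = sum(pts) / len(pts)
print(f"estimate ~ {estimate:.2f}")
```

The resampling step is what keeps the sample set concentrated on high-likelihood states, which is the sample-selection issue the paper addresses.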

Qianli Ma; Min Wang

2010-01-01

313

Evaluation of the material assignment method used by a Monte Carlo treatment planning system.

An evaluation of the conversion process from Hounsfield units (HU) to material composition in computerised tomography (CT) images, employed by the Monte Carlo based treatment planning system ISOgray (DOSIsoft), is presented. A boundary in the HU for the material conversion between "air" and "lung" materials was determined based on a study using 22 patients. The dosimetric consequence of the new boundary was quantitatively evaluated for a lung patient plan. PMID:19854089

Isambert, A; Brualla, L; Lefkopoulos, D

2009-12-01

314

NASA Astrophysics Data System (ADS)

Numerical simulation of the interaction between light and tissue is important for the design and analysis of many optical imaging modalities. Most current simulations are based on the transport theory of light in a dielectric, and only calculate the intensity of light in a volume. These simulations do not provide phase information, which is important for many biomedical imaging systems. We are interested in obtaining the optical field, magnitude and phase, due to the interaction of light with tissue. Therefore, we need to solve the integral equation for scalar scattering in a volume of interest. Since the wavelength of light is on the order of nanometres, simulation of volumes of more than a few millimetres requires intensive computational resources. For large volumes, Monte Carlo methods are a suitable choice because their computational complexity is independent of the mathematical dimension of the problem. Moreover, by a careful selection of the random sampling scheme, the number of samples needed can be further reduced. In this paper we present an implementation of a method to solve Fredholm integral equations of the second kind using Reversible Jump Markov chain Monte Carlo (RJMCMC). This method could be used to simulate light in tissue with very large electrical size, meaning tissue whose physical dimensions are much larger than the wavelength of light, by solving the integral equation of scalar scattering over a large domain. We implemented this method based on RJMCMC and present in this paper the results of applying it to solve integral equations of one and two dimensions.
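The Monte Carlo solution of a Fredholm equation of the second kind can be illustrated with the plain von Neumann-Ulam random-walk scheme below; this is a generic sketch of the underlying idea, not the paper's reversible-jump sampler, and the test kernel is chosen so the answer is known in closed form.

```python
import random

def fredholm_mc(g, kernel, x0, n_walks=20000, p_cont=0.5, seed=0):
    """Plain von Neumann-Ulam random-walk estimate of f(x0) for a
    Fredholm equation of the second kind on [0, 1],
    f(x) = g(x) + integral of kernel(x, y) * f(y) dy.
    Transitions are uniform on [0, 1]; each walk survives a step with
    probability p_cont, compensated in the importance weight."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        x, w, score = x0, 1.0, 0.0
        while True:
            score += w * g(x)              # tally the next Neumann-series term
            if rng.random() >= p_cont:     # Russian-roulette termination
                break
            y = rng.random()               # uniform transition density
            w *= kernel(x, y) / p_cont     # weight: kernel / (density * p_cont)
            x = y
        total += score
    return total / n_walks

# Known answer: constant kernel K = 0.5 and g = 1 give f = 1/(1 - 0.5) = 2.
est = fredholm_mc(lambda x: 1.0, lambda x, y: 0.5, x0=0.3)
print(round(est, 2))
```

Each walk is an unbiased sample of the Neumann series at x0, which is why the cost is independent of the dimension of the domain, the property the abstract highlights.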

Pereira, Pedro F.; Sherif, Sherif S.

315

A Monte Carlo method was derived from the optical scattering properties of spheroidal particles and used for modeling diffuse photon migration in biological tissue. The spheroidal scattering solution used a separation of variables approach and numerical calculation of the light intensity as a function of the scattering angle. A Monte Carlo algorithm was then developed which utilized the scattering solution to determine successive photon trajectories in a three-dimensional simulation of optical diffusion and resultant scattering intensities in virtual tissue. Monte Carlo simulations using isotropic randomization, Henyey-Greenstein phase functions, and spherical Mie scattering were additionally developed and used for comparison to the spheroidal method. Intensity profiles extracted from diffusion simulations showed that the four models differed significantly. The depth of scattering extinction varied widely among the four models, with the isotropic, spherical, spheroidal, and phase function models displaying total extinction at depths of 3.62, 2.83, 3.28, and 1.95 cm, respectively. The results suggest that advanced scattering simulations could be used as a diagnostic tool by distinguishing specific cellular structures in the diffused signal. For example, simulations could be used to detect large concentrations of deformed cell nuclei indicative of early stage cancer. The presented technique is proposed to be a more physical description of photon migration than existing phase function methods. This is attributed to the spheroidal structure of highly scattering mitochondria and elongation of the cell nucleus, which occurs in the initial phases of certain cancers. The potential applications of the model and its importance to diffusive imaging techniques are discussed. PMID:24085080
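One of the four scattering models compared above, the Henyey-Greenstein phase function, has a standard closed-form sampling formula that can be sketched directly; the anisotropy value below is an illustrative tissue-like choice, not a parameter from the paper.

```python
import math, random

def sample_hg_cos(g, rng=random):
    """Sample the cosine of the scattering angle from the
    Henyey-Greenstein phase function with anisotropy g via the
    standard inversion formula."""
    xi = rng.random()
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0  # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

random.seed(3)
g = 0.9  # strongly forward-peaked, a typical tissue-like value
n = 200000
mean_cos = sum(sample_hg_cos(g) for _ in range(n)) / n
print(f"<cos theta> ~ {mean_cos:.3f}")  # the HG mean cosine equals g
```

The spheroidal and Mie models of the study replace this one-parameter phase function with angle distributions computed from the particle geometry, which is what produces the differing extinction depths reported above.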

Hart, Vern P; Doyle, Timothy E

2013-09-01

316

NASA Astrophysics Data System (ADS)

The talk examines a system of pairwise interaction particles, which models a rarefied gas in accordance with the nonlinear Boltzmann equation, the master equations of Markov evolution of this system and corresponding numerical Monte Carlo methods. Selection of some optimal method for simulation of rarefied gas dynamics depends on the spatial size of the gas flow domain. For problems with the Knudsen number Kn of order unity "imitation", or "continuous time", Monte Carlo methods ([2]) are quite adequate and competitive. However if Kn <= 0.1 (the large sizes), excessive punctuality, namely, the need to see all the pairs of particles in the latter, leads to a significant increase in computational cost(complexity). We are interested in to construct the optimal methods for Boltzmann equation problems with large enough spatial sizes of the flow. Speaking of the optimal, we mean that we are talking about algorithms for parallel computation to be implemented on high-performance multi-processor computers. The characteristic property of large systems is the weak dependence of sub-parts of each other at a sufficiently small time intervals. This property is taken into account in the approximate methods using various splittings of operator of corresponding master equations. In the paper, we develop the approximate method based on the splitting of the operator of master equations system "over groups of particles" ([7]). The essence of the method is that the system of particles is divided into spatial subparts which are modeled independently for small intervals of time, using the precise"imitation" method. The type of splitting used is different from other well-known type "over collisions and displacements", which is an attribute of the known Direct simulation Monte Carlo methods. The second attribute of the last ones is the grid of the "interaction cells", which is completely absent in the imitation methods. 
The main topic of the talk is the parallelization of the imitation algorithms with splitting, using the MPI library. The new algorithms are applied to two problems: the propagation of a temperature discontinuity, and plane Poiseuille flow in a field of external forces. In particular, on the basis of the numerical solutions, comparative estimates of computational cost are given for all algorithms under consideration.
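The splitting "over groups of particles" can be illustrated with a deliberately crude serial sketch (not the authors' MPI algorithm): particles are binned into spatial cells, each cell is evolved independently for a small time step with free flight plus one momentum-conserving toy collision, and the cells are then re-merged. All numbers and the collision rule are hypothetical.

```python
import random

def evolve_cell(particles, dt):
    """Toy 'imitation' step inside one spatial subpart: free flight,
    then one random pair collision (a velocity exchange, which conserves
    momentum for equal-mass particles)."""
    for p in particles:
        p["x"] += p["v"] * dt
    if len(particles) >= 2:
        a, b = random.sample(particles, 2)
        a["v"], b["v"] = b["v"], a["v"]

def split_step(particles, n_cells, box, dt):
    """One splitting step 'over groups of particles': bin particles into
    spatial cells, evolve each cell independently for dt, then re-merge.
    In an MPI version each cell would be handled by its own rank."""
    width = box / n_cells
    cells = [[] for _ in range(n_cells)]
    for p in particles:
        i = int(p["x"] % box // width)     # periodic binning
        cells[min(i, n_cells - 1)].append(p)
    for cell in cells:
        evolve_cell(cell, dt)
    return [p for cell in cells for p in cell]

random.seed(0)
box = 1.0
gas = [{"x": random.uniform(0, box), "v": random.gauss(0.0, 1.0)}
       for _ in range(100)]
p_total = sum(p["v"] for p in gas)
for _ in range(20):
    gas = split_step(gas, n_cells=4, box=box, dt=0.01)
```

Because the subparts only exchange particles between steps, the error of the splitting shrinks with the step size dt, which is the property the abstract's approximate method relies on.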

Khisamutdinov, A. I.; Velker, N. N.

2014-05-01

317

Modeling of microscale gas flows using the direct simulation Monte Carlo method

NASA Astrophysics Data System (ADS)

Microflows are defined as fluid flow phenomena associated with microscale mechanical devices. A great number of such devices have been manufactured over the last few decades using surface and bulk silicon micromachining, and many micro-electro-mechanical systems (MEMS) involve gaseous flows. Gas flows in MEMS, with characteristic sizes on the order of microns, differ from their larger counterparts: three important flow parameters, the Knudsen number, the Reynolds number and the surface-to-volume ratio, are drastically different from those encountered in large-scale flows. Accurate numerical modeling of microflows is indispensable for providing design capability for MEMS by predicting the flow field and performance characteristics. The main goal of the thesis research is the development, implementation and application of a comprehensive direct simulation Monte Carlo (DSMC) approach to microscale gas flows. Flows in microthrusters and microchannels are commonly encountered in MEMS and are the focus of the study. The thesis investigates the physical processes in three-dimensional micronozzles and the influence of Reynolds number, geometrical configuration and temperature regime. The impact of wall effects on thrust is examined for axisymmetric, two-dimensional and three-dimensional cold-gas micronozzles. The flow in a flat micronozzle has a 3D structure and is strongly influenced by the end walls; the additional friction losses on the side walls cause a reduction in thrust of about 20% compared to the two-dimensional and axisymmetric nozzles. The work on coupled analysis of a microthruster is aimed at developing a numerical simulation code capable of modeling the temporal variation of microthruster material temperature and performance characteristics. 
The application of the developed approach to two-dimensional and three-dimensional microthrusters gives several important insights into the dependence of performance characteristics on Reynolds number, thermal conditions and thruster geometry. The mass discharge of the microthruster has been found to decrease significantly in time due to increasing wall temperature. This behavior of the mass discharge coefficient is obtained for both 2D and 3D models as well as for different stagnation pressures and geometrical shapes. Gas flows in microchannels with a constriction have been investigated both analytically and numerically in order to understand the flow phenomena observed in experiments. An analytical model is developed to predict pressure drop and mass flow rate; it is validated by comparison with 2D DSMC calculations. The model accurately predicts the mass flow rate and pressure drop at the constriction section and compares well with the DSMC results. The DSMC simulations have shown that flow separation may occur at the constriction; the presence of the separation zone does not influence the pressure distribution or the mass flow rate. The DSMC method has also been applied to the calibration of a micro-Newton thrust stand and to the investigation of facility background effects. The DSMC calculations have been conducted for orifice flow for Kn = 0.01 to 40. It is found that for low Knudsen numbers the background gas contribution to the total force becomes significant. This is attributed to the jet shadowing effect, and, therefore, it must be included in modeling to permit a comparison with experiment.

Alexeenko, Alina A.

318

Building on the Markov chain formalism for scalar (intensity only) radiative transfer, this paper formulates the solution to polarized diffuse reflection from and transmission through a vertically inhomogeneous atmosphere. For verification, numerical results are compared to those obtained by the Monte Carlo method, showing deviations less than 1% when 90 streams are used to compute the radiation from two types of atmospheres, pure Rayleigh and Rayleigh plus aerosol, when they are divided into sublayers of optical thicknesses of less than 0.03. PMID:21263634

Xu, Feng; Davis, Anthony B; West, Robert A; Esposito, Larry W

2011-01-17

319

We introduce spin projection methods in the shell model Monte Carlo approach and apply them to calculate the spin distribution of level densities for iron-region nuclei using the complete (pf+g{sub 9/2}) shell. We compare the calculated distributions with the spin-cutoff model and extract an energy-dependent moment of inertia. For even-even nuclei and at low excitation energies, we observe a significant suppression of the moment of inertia and odd-even staggering in the spin dependence of level densities.
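The spin-cutoff model against which the abstract compares its distributions assigns level densities the spin dependence P(J) ∝ (2J+1) exp(−J(J+1)/(2σ²)), with σ² fixed by the moment of inertia. A minimal sketch, with an illustrative σ² rather than the paper's shell-model values:

```python
import math

def spin_cutoff(sigma2, j_max=30):
    """Normalized spin-cutoff distribution P(J) ∝ (2J+1) exp(-J(J+1)/(2σ²)),
    here for integer J (even-A nucleus); sigma2 is illustrative, not a
    shell-model result."""
    w = [(2 * j + 1) * math.exp(-j * (j + 1) / (2.0 * sigma2))
         for j in range(j_max + 1)]
    z = sum(w)
    return [x / z for x in w]

pj = spin_cutoff(sigma2=8.0)
# the continuous distribution peaks at J = sigma - 1/2
j_peak = max(range(len(pj)), key=pj.__getitem__)
```

Comparing such a one-parameter shape with a microscopically calculated distribution is what allows an energy-dependent moment of inertia to be extracted, as described above.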

Alhassid, Y.; Liu, S. [Center for Theoretical Physics, Sloane Physics Laboratory, Yale University, New Haven, Connecticut 06520 (United States)]; Nakada, H. [Department of Physics, Chiba University, Inage, Chiba 263-8522 (Japan)]

2007-10-19

320

We propose a new type of transition network for the modeling of protein dynamics. The nodes of the network correspond to conformations taken from random sampling of the equilibrium ensemble, available, e.g., from Monte Carlo simulations. Although this approach does not provide absolute values of folding/unfolding rates, it allows identification of reaction pathways, the transition state ensemble, and, eventually, intermediates. The new method is verified by comparison with direct molecular dynamics simulations performed for a coarse-grained Gō-like model of proteins. As an illustrative example, we analyze the kinetics of formation of a small β-hairpin (Trp zipper 1) in the all-atom representation. PMID:22191905

Klenin, Konstantin V; Wenzel, Wolfgang

2011-12-21

321

The Monte Carlo method was used to simulate the correction factors for electron loss and scattered photons for two improved cylindrical free-air ionization chambers (FACs) constructed at the Institute of Nuclear Energy Research (INER, Taiwan). The method is based on weighting correction factors for mono-energetic photons with X-ray spectra. The newly obtained correction factors for the medium-energy free-air chamber were compared with the current values, which were based on a least-squares fit to experimental data published in NBS Handbook 64 [Wyckoff, H.O., Attix, F.H., 1969. Design of free-air ionization chambers. National Bureau of Standards Handbook, No. 64. US Government Printing Office, Washington, DC, pp. 1-16; Chen, W.L., Su, S.H., Su, L.L., Hwang, W.S., 1999. Improved free-air ionization chamber for the measurement of X-rays. Metrologia 36, 19-24]. The comparison showed that the agreement between the Monte Carlo method and the experimental data is within 0.22%. In addition, mono-energetic correction factors for the low-energy free-air chamber were calculated. Average correction factors were then derived for measured and theoretical X-ray spectra at 30-50 kVp. Although the measured and calculated spectra differ slightly, the resulting differences in the derived correction factors are less than 0.02%. PMID:16427292
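The weighting of mono-energetic correction factors with an X-ray spectrum can be sketched as a fluence-weighted average. All numbers below are invented for illustration, not the INER chamber data:

```python
def spectrum_weighted_factor(fluence, k_mono):
    """Average mono-energetic correction factors k(E) over an X-ray
    spectrum, weighting each energy bin by its relative photon fluence
    (a plausible weighting; the paper's exact weighting may differ)."""
    return sum(f * k for f, k in zip(fluence, k_mono)) / sum(fluence)

# hypothetical 5-bin spectrum (10-50 keV) and correction factors
fluence = [0.05, 0.30, 0.40, 0.20, 0.05]   # relative fluence per bin
k_mono  = [1.010, 1.006, 1.004, 1.003, 1.002]
k_avg = spectrum_weighted_factor(fluence, k_mono)
```

Because the factors vary slowly with energy, modest differences between measured and calculated spectra shift the weighted average only slightly, consistent with the <0.02% differences quoted above.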

Lin, Uei-Tyng; Chu, Chien-Hau

2006-05-01

322

NASA Astrophysics Data System (ADS)

The major purpose of this paper was to evaluate the Dorfman/Berbaum/Metz (DBM) method for analyzing multireader receiver operating characteristic (ROC) discrete rating data from reader split-plot and case split-plot designs. It is not always appropriate or practical for readers to interpret imaging studies of the same patients in all modalities. In split-plot designs, either a different sample of readers or a different sample of cases is assigned to each modality. For each type of split-plot design, a series of null-case Monte Carlo simulations was conducted. The results suggest that the DBM method provides trustworthy alpha levels with discrete ratings when the ROC area is not too large and the case and reader sample sizes are not too small; in other situations, the test tends to be somewhat conservative. Our Monte Carlo simulations show that the DBM multireader method can be validly extended to reader split-plot and case split-plot designs.

Dorfman, Donald D.; Berbaum, Kevin S.; Lenth, Russell V.; Chen, Yeh F.

1999-05-01

323

NASA Technical Reports Server (NTRS)

Continuing efforts toward validating the buildup factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary-particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for various shield thicknesses and various types of proton spectra. The results are found to be in reasonable agreement, but with some overestimation by the buildup factor method when the effect of neutron production in the shield is significant. A future improvement to include neutron coupling in the buildup factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual dose components, such as those from secondaries and heavy-particle recoils, is obtained between BRYNTRN and Monte Carlo results.

Shinn, Judy L.; Wilson, John W.; Nealy, John E.; Cucinotta, Francis A.

1990-01-01

324

The Reduced-Basis Control-Variate Monte-Carlo method was introduced recently in [S. Boyaval and T. Lelièvre, CMS, 8 2010] as an improved Monte-Carlo method for the fast estimation of many parametrized expected values at many parameter values. We provide here a more complete analysis of the method, including precise error estimates and convergence results. We also numerically demonstrate that it can be useful in some parametrized frameworks in Uncertainty Quantification, in particular (i) the case where the parametrized expectation is a scalar output of the solution to a Partial Differential Equation (PDE) with stochastic coefficients (an Uncertainty Propagation problem), and (ii) the case where the parametrized expectation is the Bayesian estimator of a scalar output in a similar PDE context. In each case, a PDE has to be solved many times for many values of its coefficients. This is costly, and we also use a reduced basis of PDE solutions as in [S. Boyaval, C. Le Bris, Nguyen C., Y. Maday and T....
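The control-variate mechanism underlying the method can be illustrated in a scalar toy setting, with the simplest possible control standing in for the reduced-basis one: estimating E[exp(U)] for U ~ Uniform(0,1), using U itself (known mean 1/2) as the control variate. This sketches only the generic variance-reduction step, not the reduced-basis construction:

```python
import math
import random
import statistics

def estimate_with_cv(n=10_000, seed=0):
    """Plain Monte Carlo vs. control-variate Monte Carlo for E[exp(U)],
    U ~ Uniform(0,1). The control is U, whose mean 1/2 is known exactly,
    so the correction plain - beta*(mean(U) - 1/2) removes the part of
    the error correlated with U."""
    rng = random.Random(seed)
    us = [rng.random() for _ in range(n)]
    ys = [math.exp(u) for u in us]
    plain = statistics.fmean(ys)
    u_bar = statistics.fmean(us)
    # estimated optimal coefficient beta = Cov(Y, U) / Var(U)
    cov = statistics.fmean((y - plain) * (u - u_bar) for y, u in zip(ys, us))
    var = statistics.fmean((u - u_bar) ** 2 for u in us)
    cv = plain - (cov / var) * (u_bar - 0.5)
    return plain, cv

plain, cv = estimate_with_cv()
exact = math.e - 1.0
```

Since exp(U) is nearly linear in U, the correlation is high and the control-variate estimator is roughly an order of magnitude more accurate at the same sample size; the reduced-basis method builds analogous highly correlated controls for parametrized PDE outputs.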

Boyaval, Sébastien

2012-01-01

325

This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
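As a flavor of the "random sampling" fundamentals such a course covers, inverse-transform sampling of a transport free-flight distance is the canonical first example (the cross-section value below is arbitrary):

```python
import math
import random

def sample_free_path(sigma_t, rng):
    """Inverse-transform sampling, a basic ingredient of Monte Carlo
    transport: free-flight distances follow p(s) = Σ exp(-Σ s), whose
    inverted CDF gives s = -ln(1 - ξ)/Σ for ξ uniform on [0, 1)."""
    return -math.log(1.0 - rng.random()) / sigma_t

rng = random.Random(42)
sigma_t = 2.0                      # arbitrary total cross section, 1/cm
paths = [sample_free_path(sigma_t, rng) for _ in range(200_000)]
mean_path = sum(paths) / len(paths)   # tends to the mean free path 1/Σ
```

The `1.0 - rng.random()` form avoids taking log(0) on the (measure-zero but possible) draw ξ = 0.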

Brown, F.B.; Sutton, T.M.

1996-02-01

326

A new method to calculate the response of the WENDI-II rem counter using the FLUKA Monte Carlo Code

NASA Astrophysics Data System (ADS)

The FHT-762 WENDI-II is a commercially available wide-range neutron rem counter which uses a 3He counter tube inside a polyethylene moderator. To increase the response above 10 MeV of kinetic neutron energy, a layer of tungsten powder is embedded in the moderator shell. To characterize the response, a detailed model of the detector was developed and implemented for FLUKA Monte Carlo simulations. In common practice, Monte Carlo simulations are used to calculate the neutron fluence inside the active volume of the detector; the resulting fluence is then folded offline with the reaction rate of the 3He(n,p)3H reaction to yield the proton-triton production rate. Consequently this approach does not consider geometrical effects such as wall effects, where one or both reaction products leave the active volume of the detector without triggering a count. This work introduces a two-step simulation method which can be used to determine the detector's response, including geometrical effects, directly, using Monte Carlo simulations. A "first step" simulation identifies the 3He(n,p)3H reaction inside the active volume of the 3He counter tube and records its position. In the "second step" simulation the tritons and protons are started, in accordance with the kinematics of the 3He(n,p)3H reaction, from the previously recorded positions, and a correction factor for geometrical effects is determined. The three-dimensional Monte Carlo model of the detector as well as the two-step simulation method were evaluated and tested in the well-defined fields of an 241Am-Be(α,n) source as well as in the field of a 252Cf source. Results were compared with measurements performed by Gutermuth et al. [1] at GSI with an 241Am-Be(α,n) source as well as with measurements performed by the manufacturer in the field of a 252Cf source. Both simulation results show very good agreement with the respective measurements. 
After validating the method, the response values in terms of counts per unit fluence were calculated for 95 different incident neutron energies between 1 meV and 5 GeV.
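The geometrical (wall) effect that motivates the two-step method can be illustrated with a deliberately crude 2D toy model, not the authors' FLUKA setup: step one samples reaction points in the sensitive volume, step two emits the charged product isotropically and checks whether its straight-line range ends inside the volume. Geometry and range numbers are hypothetical:

```python
import math
import random

def wall_effect_fraction(radius, prange, n, seed=1):
    """Toy two-step estimate of the wall effect in a cylindrical counter,
    reduced to a 2D disk. Step 1: sample reaction points uniformly over
    the disk (stand-in for the transport simulation). Step 2: emit the
    proton isotropically with a fixed straight-line range and count the
    events whose endpoint lies outside the sensitive volume."""
    rng = random.Random(seed)
    lost = 0
    for _ in range(n):
        r = radius * math.sqrt(rng.random())        # uniform over disk area
        phi = rng.uniform(0.0, 2.0 * math.pi)       # isotropic in the plane
        x_end = r + prange * math.cos(phi)
        y_end = prange * math.sin(phi)
        if math.hypot(x_end, y_end) > radius:
            lost += 1
    return lost / n

f_long = wall_effect_fraction(radius=1.0, prange=0.2, n=100_000)
f_short = wall_effect_fraction(radius=1.0, prange=0.05, n=100_000)
```

As expected, the lost fraction, and hence the geometrical correction, grows with the product range relative to the counter radius; the paper's second-step simulation plays exactly this role with the real 3He(n,p)3H kinematics.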

Jägerhofer, Lukas; Feldbaumer, Eduard; Theis, Christian; Roesler, Stefan; Vincke, Helmut

2012-11-01

327

Many experimental systems consist of large ensembles of uncoupled or weakly interacting elements operating as a single whole; this is particularly the case for applications in nano-optics and plasmonics, including colloidal solutions, plasmonic or dielectric nanoparticles on a substrate, antenna arrays, and others. In such experiments, measurements of the optical spectra of ensembles will differ from measurements of the independent elements as a result of small variations from element to element (also known as polydispersity) even if these elements are designed to be identical. In particular, sharp spectral features arising from narrow-band resonances will tend to appear broader and can even be washed out completely. Here, we explore this effect of inhomogeneous broadening as it occurs in colloidal nanopolymers comprising self-assembled nanorod chains in solution. Using a technique combining finite-difference time-domain simulations and Monte Carlo sampling, we predict the inhomogeneously broadened optical spectra of these colloidal nanopolymers and observe significant qualitative differences compared with the unbroadened spectra. The approach combining an electromagnetic simulation technique with Monte Carlo sampling is widely applicable for quantifying the effects of inhomogeneous broadening in a variety of physical systems, including those with many degrees of freedom that are otherwise computationally intractable. PMID:24469797
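The combination of a per-element spectrum with Monte Carlo sampling over polydispersity can be sketched generically, with Lorentzian lines and a Gaussian spread of resonance frequencies standing in for the paper's FDTD spectra (all parameters invented):

```python
import math
import random

def lorentzian(w, w0, gamma):
    """Unit-peak Lorentzian line of centre w0 and FWHM gamma."""
    return (gamma / 2.0) ** 2 / ((w - w0) ** 2 + (gamma / 2.0) ** 2)

def ensemble_spectrum(freqs, w0_mean, w0_sigma, gamma, n_draws, seed=3):
    """Monte Carlo model of inhomogeneous broadening: the ensemble
    spectrum is the average of many sharp single-element resonances whose
    centre frequency varies from element to element (polydispersity)."""
    rng = random.Random(seed)
    spec = [0.0] * len(freqs)
    for _ in range(n_draws):
        w0 = rng.gauss(w0_mean, w0_sigma)
        for i, w in enumerate(freqs):
            spec[i] += lorentzian(w, w0, gamma)
    return [s / n_draws for s in spec]

freqs = [1.0 + 0.002 * k for k in range(-100, 101)]
single = [lorentzian(w, 1.0, 0.02) for w in freqs]          # one element
broad = ensemble_spectrum(freqs, 1.0, 0.05, 0.02, n_draws=2000)
```

The averaged spectrum has a far lower, wider peak than the single-element line, which is the washing-out of sharp features described above; in the paper each "draw" is a full electromagnetic simulation rather than a closed-form line shape.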

Gudjonson, Herman; Kats, Mikhail A.; Liu, Kun; Nie, Zhihong; Kumacheva, Eugenia; Capasso, Federico

2014-01-01

328

We report several important observations that underscore the distinctions between the constrained-path Monte Carlo method and the continuum and lattice versions of the fixed-node method. The main distinctions stem from the differences in the state space in which the random walk occurs and in the manner in which the random walkers are constrained. One consequence is that in the constrained-path method the so-called mixed estimator for the energy is not an upper bound to the exact energy, as previously claimed. Several ways of producing an energy upper bound are given, and relevant methodological aspects are illustrated with simple examples. © 1999 The American Physical Society

Carlson, J.; Gubernatis, J.E.; Ortiz, G. [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)]; Zhang, S. [Departments of Physics and Applied Science, College of William and Mary, Williamsburg, Virginia 23187 (United States)]

1999-05-01

329

Shell Model Monte Carlo method in the $pn$-formalism and applications to the Zr and Mo isotopes

We report on the development of a new shell-model Monte Carlo algorithm which uses the proton-neutron formalism. Shell-model Monte Carlo methods, within the isospin formulation, have been successfully used in large-scale shell-model calculations. The motivation for this work is to extend the feasibility of these methods to shell-model studies involving non-identical proton and neutron valence spaces. We show the viability of the new approach with some test results. Finally, we use a realistic nucleon-nucleon interaction in the model space described by the (1p_1/2, 0g_9/2) proton and (1d_5/2, 2s_1/2, 1d_3/2, 0g_7/2, 0h_11/2) neutron orbitals above the Sr-88 core to calculate ground-state energies, binding energies and B(E2) strengths, and to study pairing properties of the even-even 90-104 Zr and 92-106 Mo isotope chains.

C. Ozen; D. J. Dean

2005-08-05

330

Exact ground state Monte Carlo method for Bosons without importance sampling.

Generally "exact" quantum Monte Carlo computations for the ground state of many bosons make use of importance sampling. The importance sampling is based either on a guiding function or on an initial variational wave function. Here we investigate the need of importance sampling in the case of path integral ground state (PIGS) Monte Carlo. PIGS is based on a discrete imaginary time evolution of an initial wave function with a nonzero overlap with the ground state, which gives rise to a discrete path which is sampled via a Metropolis-like algorithm. In principle the exact ground state is reached in the limit of an infinite imaginary time evolution, but actual computations are based on finite time evolutions and the question is whether such computations give unbiased exact results. We have studied bulk liquid and solid (4)He with PIGS by considering as initial wave function a constant, i.e., the ground state of an ideal Bose gas. This implies that the evolution toward the ground state is driven only by the imaginary time propagator, i.e., there is no importance sampling. For both phases we obtain results converging to those obtained by considering the best available variational wave function (the shadow wave function) as initial wave function. Moreover we obtain the same results even by considering wave functions with the wrong correlations, for instance, a wave function of a strongly localized Einstein crystal for the liquid phase. This convergence is true not only for diagonal properties such as the energy, the radial distribution function, and the static structure factor, but also for off-diagonal ones, such as the one-body density matrix. This robustness of PIGS can be traced back to the fact that the chosen initial wave function acts only at the beginning of the path without affecting the imaginary time propagator. From this analysis we conclude that zero temperature PIGS calculations can be as unbiased as those of finite temperature path integral Monte Carlo. 
On the other hand, a judicious choice of the initial wave function greatly improves the rate of convergence to the exact results. PMID:20568848

Rossi, M; Nava, M; Reatto, L; Galli, D E

2009-10-21

331

Quantum Monte Carlo Helsinki 2011

Quantum Monte Carlo Helsinki 2011. Marius Lewerenz, MSME/CT, UMR 8208 CNRS, Université Paris Est. Contents include: 1.2 What is a Monte Carlo method? 1.3 What are Monte Carlo methods good for?

Boyer, Edmond

332

Assessing the multileaf collimator effect on the build-up region using the Monte Carlo method

NASA Astrophysics Data System (ADS)

Previous Monte Carlo studies have investigated the multileaf collimator (MLC) contribution to the build-up region for fields in which the MLC leaves were fully blocking the openings defined by the collimation jaws. In the present work, we investigate the same effect for symmetric and asymmetric MLC-defined field sizes (2×2, 4×4, 10×10 and 3×7 cm2). A Varian 2100C/D accelerator with a 120-leaf MLC is accurately modeled for a 6 MV photon beam using the BEAMnrc/EGSnrc code. Our results indicate that particles scattered from the accelerator head and MLC are responsible for an increase of about 7% in the surface dose when comparing the 2×2 and 10×10 cm2 fields. We found that the MLC contribution to the total build-up dose is about 2% for the 2×2 cm2 field and less than 1% for the largest fields.

Moreno, M. Zarza; Teixeira, N.; Jesus, A. P.; Mora, G.

2008-01-01

333

Simulation of Cone Beam CT System Based on Monte Carlo Method

Adaptive Radiation Therapy (ART) was developed from Image-guided Radiation Therapy (IGRT) and is the trend in photon radiation therapy. To make better use of Cone Beam CT (CBCT) images for ART, a CBCT system model was established with a Monte Carlo program and validated against measurement. The BEAMnrc program was used to model the kV x-ray tube, and both ISOURCE-13 and ISOURCE-24 were chosen to simulate the path of the beam particles. The measured percentage depth dose (PDD) and lateral dose profiles at 1 cm depth in water were compared with the dose calculated by the DOSXYZnrc program. The calculated PDD agreed with measurement to better than 1% down to a depth of 10 cm, and more than 85% of the points on the calculated lateral dose profiles were within 2%. A correct CBCT system model helps to improve CBCT image quality for dose verification in ART and to assess the concomitant imaging dose of CBCT.

Wang, Yu; Cao, Ruifen; Hu, Liqin; Li, Bingbing

2014-01-01

334

A Markov-Chain Monte-Carlo Based Method for Flaw Detection in Beams

A Bayesian inference methodology using a Markov Chain Monte Carlo (MCMC) sampling procedure is presented for estimating the parameters of computational structural models. This methodology combines prior information, measured data, and forward models to produce a posterior distribution for the system parameters of structural models that is most consistent with all available data. The MCMC procedure is based upon a Metropolis-Hastings algorithm that is shown to function effectively with noisy data, incomplete data sets, and mismatched computational nodes/measurement points. A series of numerical test cases based upon a cantilever beam is presented. The results demonstrate that the algorithm is able to estimate model parameters utilizing experimental data for the nodal displacements resulting from specified forces.
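A minimal sketch of such a Metropolis MCMC parameter estimation, using the Euler-Bernoulli tip-deflection formula δ = F L³/(3EI) as a stand-in forward model and invented numbers throughout (the report's actual structural models and data are not reproduced here):

```python
import math
import random

def log_posterior(ei, data, force=10.0, length=1.0, noise=0.01,
                  prior_mu=2.0, prior_sd=1.0):
    """Gaussian likelihood for noisy tip deflections delta = F L^3/(3 EI),
    with a broad Gaussian prior on the flexural rigidity EI.
    All constants are illustrative."""
    if ei <= 0.0:
        return -math.inf
    pred = force * length ** 3 / (3.0 * ei)
    ll = sum(-0.5 * ((d - pred) / noise) ** 2 for d in data)
    lp = -0.5 * ((ei - prior_mu) / prior_sd) ** 2
    return ll + lp

def metropolis(data, n_steps=20_000, step=0.02, seed=7):
    """Random-walk Metropolis sampler for the posterior of EI."""
    rng = random.Random(seed)
    ei = 2.0
    lp = log_posterior(ei, data)
    chain = []
    for _ in range(n_steps):
        cand = ei + rng.gauss(0.0, step)
        lp_c = log_posterior(cand, data)
        if math.log(rng.random() + 1e-300) < lp_c - lp:  # accept/reject
            ei, lp = cand, lp_c
        chain.append(ei)
    return chain

# synthetic "measurements" from a true EI of 2.5
true_ei = 2.5
rng = random.Random(1)
data = [10.0 / (3.0 * true_ei) + rng.gauss(0.0, 0.01) for _ in range(20)]
chain = metropolis(data)
est = sum(chain[5000:]) / len(chain[5000:])   # posterior mean after burn-in
```

The posterior mean recovers the parameter that generated the data; the report applies the same accept/reject machinery, with a Metropolis-Hastings proposal, to full structural models and experimental nodal displacements.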

Glaser, R E; Lee, C L; Nitao, J J; Hickling, T L; Hanley, W G

2006-09-28

335

Modeling of radiation-induced bystander effect using Monte Carlo methods

NASA Astrophysics Data System (ADS)

Experiments have shown that the radiation-induced bystander effect exists in cells, tissues, and even biological organisms irradiated with energetic ions or X-rays. In this paper, a Monte Carlo model is developed to study the mechanisms of the bystander effect under sparsely populated cell conditions. This model, based on our previous experiment in which cells were sparsely located in a round dish, focuses mainly on the spatial characteristics. The simulation results agree well with the experimental data. Moreover, another bystander-effect experiment is also computed with this model, and the model succeeds in predicting its results. The comparison of simulations with the experimental results indicates the feasibility of the model and the validity of some of the vital mechanisms assumed.

Xia, Junchao; Liu, Liteng; Xue, Jianming; Wang, Yugang; Wu, Lijun

2009-03-01

336

The multiscale modeling scheme encompasses models from the atomistic to the continuum scale. Phenomena at the mesoscale are typically simulated using reaction rate theory, Monte Carlo, or phase field models. These mesoscale models are appropriate for problems that involve intermediate length scales, and timescales from those characteristic of diffusion to long-term microstructural evolution (~μs to years). Although rate theory and Monte Carlo models can be used to simulate the same phenomena, some of the details are handled quite differently in the two approaches. Models employing rate theory (RT) have been extensively used to describe radiation-induced phenomena such as void swelling and irradiation creep. The primary approximations in such models are time and spatial averaging of the radiation damage source term, and spatial averaging of the microstructure into an effective medium. Kinetic Monte Carlo models can account for these spatial and temporal correlations; their primary limitation is the computational burden, which is related to the size of the simulation cell. A direct comparison of RT and object kinetic Monte Carlo (OKMC) simulations has been made in the domain of point defect cluster dynamics modeling, which is relevant to the evolution (both nucleation and growth) of radiation-induced defect structures. The primary limitations of the OKMC model are computational: even with modern computers, the maximum simulation cell size and the maximum dose (typically much less than 1 dpa) that can be simulated are limited. In contrast, even very detailed RT models can simulate microstructural evolution for doses up to 100 dpa or greater in relatively short clock times, and, within the context of the effective medium, essentially any defect density can be simulated. 
Overall, the agreement between the two methods is best for irradiation conditions which produce a high density of defects (lower temperature and higher displacement rate), and for materials that have a relatively high density of fixed sinks such as dislocations.

Stoller, Roger E [ORNL]; Golubov, Stanislav I [ORNL]; Becquart, C. S. [Universite de Lille]; Domain, C. [EDF R&D, Clamart, France]

2007-08-01

337

The fitting of statistical distributions to chemical and microbial contamination data is a common application in risk assessment. These distributions are used to make inferences regarding even the most pedestrian of statistics, such as the population mean. The reason for the heavy reliance on a fitted distribution is the presence of left-, right-, and interval-censored observations in the data sets, with censored observations being the result of nondetects in an assay, the use of screening tests, and other practical limitations. Considerable effort has been expended to develop statistical distributions and fitting techniques for a wide variety of applications. Of the various fitting methods, Markov Chain Monte Carlo methods are common. An underlying assumption for many of the proposed Markov Chain Monte Carlo methods is that the data represent independent and identically distributed (iid) observations from an assumed distribution. This condition is satisfied when samples are collected using a simple random sampling design. Unfortunately, samples of food commodities are generally not collected in accordance with a strict probability design. Nevertheless, pseudosystematic sampling efforts (e.g., collection of a sample hourly or weekly) from a single location in the farm-to-table continuum are reasonable approximations of a simple random sample. The assumption that the data represent an iid sample from a single distribution is more difficult to defend if samples are collected at multiple locations in the farm-to-table continuum or risk-based sampling methods are employed to preferentially select samples that are more likely to be contaminated. This paper develops a weighted bootstrap estimation framework that is appropriate for fitting a distribution to microbiological samples that are collected with unequal probabilities of selection. 
An example based on microbial data, derived by the Most Probable Number technique, demonstrates the method and highlights the magnitude of biases in an estimator that ignores the effects of an unequal probability sample design. PMID:25333423
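A weighted bootstrap for samples collected with unequal selection probabilities can be sketched by resampling with weights proportional to 1/π_i (Horvitz-Thompson style). The data below are invented for illustration, not the paper's Most Probable Number data:

```python
import random

def weighted_bootstrap_means(values, incl_probs, n_boot=2000, seed=5):
    """Weighted bootstrap for a mean under unequal selection
    probabilities: each unit is resampled with weight 1/pi_i, so units
    that were preferentially selected (large pi_i) are down-weighted."""
    weights = [1.0 / p for p in incl_probs]
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        resample = rng.choices(values, weights=weights, k=len(values))
        reps.append(sum(resample) / len(resample))
    return reps

# hypothetical risk-based design: contaminated units (value 1) were
# sampled with probability 0.8, clean units (value 0) with 0.2
values     = [1] * 40 + [0] * 60
incl_probs = [0.8] * 40 + [0.2] * 60
reps = weighted_bootstrap_means(values, incl_probs)
est = sum(reps) / len(reps)
naive = sum(values) / len(values)   # ignores the design, biased upward
```

The naive sample mean (0.40 here) overstates the prevalence because contaminated units were oversampled; the design-weighted bootstrap centres instead on the weighted estimate of about 0.14, illustrating the magnitude of bias the paper highlights.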

Williams, Michael S; Ebel, Eric D

2014-11-18

338

NSDL National Science Digital Library

The STP MonteCarloEstimation program estimates the area under the curve given by the square-root of (1-x^2) between 0 and 1 using the Monte Carlo hit and miss method. STP MonteCarloEstimation is part of a suite of Open Source Physics programs that model aspects of Statistical and Thermal Physics (STP). The program is distributed as a ready-to-run (compiled) Java archive. Double clicking the stp_MonteCarloEstimation.jar file will run the program if Java is installed on your computer.
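The hit-and-miss estimate the program implements can be sketched in a few lines (in Python rather than the program's Java): throw random points in the unit square and count the fraction landing under the curve, whose exact area is π/4.

```python
import random

def hit_and_miss_quarter_circle(n, seed=0):
    """Hit-and-miss Monte Carlo estimate of the area under sqrt(1 - x^2)
    on [0, 1]: a point (x, y) in the unit square is a 'hit' when
    y <= sqrt(1 - x^2), i.e. when x^2 + y^2 <= 1."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return hits / n

est = hit_and_miss_quarter_circle(1_000_000)   # tends to pi/4 ≈ 0.7854
```

The statistical error of the hit-and-miss estimator shrinks as 1/sqrt(n), which is the scaling behavior such statistical-physics exercises are designed to exhibit.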

Gould, Harvey; Tobochnik, Jan; Christian, Wolfgang; Cox, Anne

2009-01-26

339

A fast 3D image reconstruction method based on Monte-Carlo simulation for laminar optical tomography

NASA Astrophysics Data System (ADS)

Laminar optical tomography (LOT) is a new mesoscopic functional optical imaging technique. Currently, the forward problem of LOT image reconstruction is generally solved on the basis of Monte-Carlo (MC) methods. However, given the nonlinear nature of image reconstruction in LOT, MC-based methods take too much computation time as the number of source positions increases. Based on the scheme of trajectory translation and target voxel regression (TT&TVR) proposed by our group, this paper develops a fast 3D image reconstruction algorithm. The algorithm is applied to absorption reconstruction in layered inhomogeneous media. Results demonstrate that the reconstruction time is less than 15 min with the X-Y-Z section of the sample subdivided into 50 × 50 × 10 voxels, and that the target size and quantitativeness ratio can be obtained with satisfactory accuracy.

Jia, Mengyu; Cui, Shanshan; Chen, Xueying; Zhao, Huijuan; Gao, Feng

2014-09-01

340

A comprehensive set of measurements and calculations has been conducted to investigate the accuracy of the Dose Planning Method (DPM) Monte Carlo code for dose calculations from 10 and 50 MeV scanned electron beams produced from a racetrack microtron. Central axis depth dose measurements and a series of profile scans at various depths were acquired in a water phantom using a Scanditronix type RK ion chamber. Source spatial distributions for the Monte Carlo calculations were reconstructed from in-air ion chamber measurements carried out across the two-dimensional beam profile at 100 cm downstream from the source. The in-air spatial distributions were found to have full width at half maximum of 4.7 and 1.3 cm, at 100 cm from the source, for the 10 and 50 MeV beams, respectively. Energy spectra for the 10 and 50 MeV beams were determined by simulating the components of the microtron treatment head using the code MCNP4B. DPM calculations are on average within +/- 2% agreement with measurement for all depth dose and profile comparisons conducted in this study. The accuracy of the DPM code illustrated in this work suggests that DPM may be used as a valuable tool for electron beam dose calculations. PMID:12094973

Chetty, Indrin J; Moran, Jean M; McShan, Daniel L; Fraass, Benedick A; Wilderman, Scott J; Bielajew, Alex F

2002-06-01

341

NASA Astrophysics Data System (ADS)

The goal of this study was to quantify, in a heterogeneous phantom, the difference between experimentally measured beam profiles and those calculated using both a commercial convolution algorithm and the Monte Carlo (MC) method. This was done by arranging a phantom geometry that incorporated a vertical solid water-lung material interface parallel to the beam axis. At nominal x-ray energies of 6 and 18 MV, dose distributions were modelled for field sizes of 10 × 10 cm2 and 4 × 4 cm2 using the CadPlan 6.0 commercial treatment planning system (TPS) and the BEAMnrc-DOSXYZnrc Monte Carlo package. Beam profiles were found experimentally at various depths using film dosimetry. The results showed that within the lung region the TPS had a substantial problem modelling the dose distribution. The (film-TPS) profile difference was found to increase, in the lung region, as the field size decreased and the beam energy increased; in the worst case the difference was more than 15%. In contrast, (film-MC) profile differences were not found to be affected by the material density difference. BEAMnrc-DOSXYZnrc successfully modelled the material interface and dose profiles to within 2%.

Cranmer-Sargison, G.; Beckham, W. A.; Popescu, I. A.

2004-04-01

342

Based on digital image analysis and the inverse Monte-Carlo method, a proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominant type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)

Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V [Department of Optics and Biomedical Physics, N.G.Chernyshevskii Saratov State University (Russian Federation)

2006-12-31

343

NASA Astrophysics Data System (ADS)

Interface roughness strongly influences the performance of germanium metal-oxide-semiconductor field effect transistors (MOSFETs). In this paper, a 2D full-band Monte Carlo simulator is used to study the impact of interface roughness scattering on electron and hole transport properties in the inversion layers of long- and short-channel Ge MOSFETs. The carrier effective mobility in the channel of Ge MOSFETs and the non-equilibrium transport properties are investigated. Results show that both electron and hole mobility are strongly influenced by interface roughness scattering. The output curves for 50 nm channel-length double-gate n- and p-type Ge MOSFETs show that their drive currents improve significantly compared with those of Si n- and p-MOSFETs with a smooth interface between channel and gate dielectric. Drive current enhancements of 82% and 96% are obtained for the n- and p-MOSFETs with a completely smooth interface. However, the enhancement decreases sharply as interface roughness increases; with a very rough interface, the drive currents of Ge MOSFETs are even less than those of Si MOSFETs. Moreover, significant velocity overshoot is also found in Ge MOSFETs.

Du, Gang; Liu, Xiao-Yan; Xia, Zhi-Liang; Yang, Jing-Feng; Han, Ru-Qi

2010-05-01

344

Atmospheric correction of Earth-observation remote sensing images by Monte Carlo method

NASA Astrophysics Data System (ADS)

In Earth observation, atmospheric particles severely contaminate, through absorption and scattering, the electromagnetic signal reflected from the Earth's surface. Removing these atmospheric effects from imagery and retrieving the surface reflectance that characterizes the surface properties, the purpose of atmospheric correction, would greatly benefit land surface characterization. Given the geometric parameters of the studied image and estimates of the parameters describing the state of the atmosphere, it is possible to evaluate the atmospheric reflectance and the upward and downward transmittances that garble the data obtained from the image. To that end, an atmospheric correction algorithm for high-spectral-resolution data over land surfaces has been developed. It is designed to obtain the main atmospheric parameters needed in the image correction and the interpretation of optical observations. It also estimates the optical characteristics of Earth-observation imagery (LANDSAT and SPOT). The physics underlying the problem of solar radiation propagation, taking into account multiple scattering and the sphericity of the atmosphere, has been treated using Monte Carlo techniques.

Hadjit, Hanane; Oukebdane, Abdelaziz; Belbachir, Ahmad Hafid

2013-10-01

345

Dose optimization in 125I permanent prostate seed implants using the Monte Carlo method

NASA Astrophysics Data System (ADS)

The aim of this work consisted in using the Monte Carlo code MCNP and computational phantoms to assess the absorbed dose distributions in the prostate, due to a radiotherapy treatment using 125I radioactive seeds. The intention was to develop a tool that can serve as a complement of the treatment planning system of radiotherapy procedures, reproducing accurately the exact geometry of the sources and the composition of the media where the seeds are inserted. The radiation activities of the simulated seeds varied from 0.27 mCi to 0.38 mCi, for hypothetical treatments employing 80, 88 or 100 125I sources, typical parameters for this technique. The prostate volumes where the seeds were virtually inserted were simulated with spherical or voxel computational phantoms. The configuration containing 88 seeds with initial radiation activity of 0.27 mCi resulted in a final absorbed dose near 144 Gy, in accordance with the recommendations of the American Association of Physicists in Medicine (AAPM). Based on this configuration, it was possible to obtain the radiation absorbed dose distributions for the voxel phantom, which allowed the determination of treatment quality indicators. The obtained results are in good agreement with experimental data presented by other authors.

Reis, Juraci P.; Menezes, Artur F.; Souza, Edmilson M.; Facure, Alessandro; Medeiros, Jose A. C. C.; Silva, Ademir X.

2012-04-01

346

NASA Astrophysics Data System (ADS)

Biotransformation of dissolved groundwater hydrocarbon plumes emanating from leaking underground fuel tanks should, in principle, result in plume length stabilization over relatively short distances, thus diminishing the environmental risk. However, because the behavior of hydrocarbon plumes is usually poorly constrained at most leaking underground fuel tank sites in terms of release history, groundwater velocity, dispersion, as well as the biotransformation rate, demonstrating such a limitation in plume length is problematic. Biotransformation signatures in the aquifer geochemistry, most notably elevated bicarbonate, may offer a means of constraining the relationship between plume length and the mean biotransformation rate. In this study, modeled plume lengths and spatial bicarbonate differences among a population of synthetic hydrocarbon plumes, generated through Monte Carlo simulation of an analytical solute transport model, are compared to field observations from six underground storage tank (UST) sites at military bases in California. Simulation results indicate that the relationship between plume length and the distribution of bicarbonate is best explained by biotransformation rates that are consistent with ranges commonly reported in the literature. This finding suggests that bicarbonate can indeed provide an independent means for evaluating limitations in hydrocarbon plume length resulting from biotransformation.

McNab, Walt W.

2001-02-01
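The study above couples an analytical solute transport model with Monte Carlo parameter sampling to generate a population of synthetic plume lengths. The abstract does not give the model or the parameter distributions; the following is only a minimal sketch of the idea, using the standard steady-state 1D advection-dispersion-decay centerline solution and invented lognormal priors for velocity and biodecay rate.

```python
import math
import random

def plume_length(v, alpha_x, k, c_threshold=0.01):
    """Distance at which the steady-state 1D advection-dispersion-decay
    centerline solution C(x)/C0 = exp(x*(1 - sqrt(1 + 4*k*alpha_x/v))/(2*alpha_x))
    drops below c_threshold (a hypothetical plume-length criterion)."""
    decay = (1.0 - math.sqrt(1.0 + 4.0 * k * alpha_x / v)) / (2.0 * alpha_x)
    return math.log(c_threshold) / decay  # decay < 0, log < 0 => positive length

random.seed(1)
lengths = []
for _ in range(10000):
    v = random.lognormvariate(math.log(30.0), 0.5)   # groundwater velocity, m/yr (assumed prior)
    ax = 10.0                                         # longitudinal dispersivity, m (assumed fixed)
    k = random.lognormvariate(math.log(1.0), 0.7)     # first-order biotransformation rate, 1/yr (assumed prior)
    lengths.append(plume_length(v, ax, k))
lengths.sort()
print(lengths[len(lengths) // 2])  # median simulated plume length, m
```

In the actual study the simulated population would additionally be compared against field bicarbonate data to constrain the plausible biotransformation rates.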

347

NASA Astrophysics Data System (ADS)

In many cases, the uncertainty of output quantities may be computed by assuming that the distribution represented by the result of measurement and its associated standard uncertainty is Gaussian. This assumption may be unjustified, and the uncertainty of the output quantities determined in this way may be incorrect. One tool to deal with different distribution functions of the input parameters, and the resulting mixed distribution of the output quantities, is given by Monte Carlo techniques. The resulting empirical distribution can be used to approximate the theoretical distribution of the output quantities; all required moments of different orders can then be determined numerically. To evaluate the procedure of derivation and evaluation of output parameter uncertainties outlined in this paper, a case study of kinematic terrestrial laser scanning (k-TLS) will be discussed. This study deals with two main topics: the refined simulation of different configurations, taking into account input parameters with diverse probability functions for the uncertainty model, and the statistical analysis of the real data in order to improve the physical observation models in the case of k-TLS. The solution of both problems is essential for the highly sensitive and physically meaningful application of k-TLS techniques for monitoring of, e.g., large structures such as bridges.

Alkhatib, Hamza; Kutterer, Hansjörg

2013-05-01
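The Monte Carlo uncertainty-propagation idea described above can be illustrated generically: draw inputs from their (possibly non-Gaussian) distributions, push each draw through the measurement model, and compute moments of the resulting empirical output distribution. The model and distributions below are invented for illustration and are not those of the k-TLS study.

```python
import random
import statistics

def measurement_model(d, t):
    # Hypothetical nonlinear output quantity, e.g. a speed from distance/time.
    return d / t

random.seed(0)
samples = []
for _ in range(100000):
    d = random.gauss(10.0, 0.1)      # Gaussian input: distance (assumed)
    t = random.uniform(1.9, 2.1)     # rectangular input: time (assumed)
    samples.append(measurement_model(d, t))

# Empirical moments of the output distribution; no Gaussian assumption needed.
mean = statistics.fmean(samples)
std = statistics.stdev(samples)
print(mean, std)
```

The empirical distribution of `samples` can also be used directly for coverage intervals instead of assuming the output is Gaussian.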

348

Specific Absorbed Fractions of Electrons and Photons for Rad-HUMAN Phantom Using Monte Carlo Method

The specific absorbed fractions (SAF) for self- and cross-irradiation are effective tools for the internal dose estimation of inhalation and ingestion intakes of radionuclides. A set of SAFs of photons and electrons were calculated using the Rad-HUMAN phantom, a computational voxel phantom of a Chinese adult female created from the color photographic images of the Chinese Visible Human (CVH) data set. The model represents most anatomical characteristics of the Chinese adult female and can be taken as an individual phantom to investigate differences in internal dose with respect to Caucasians. In this study, the emission of mono-energetic photons and electrons with energies from 10 keV to 4 MeV was simulated using the Monte Carlo particle transport code MCNP. Results were compared with the values from the ICRP reference and ORNL models. The results showed that SAFs from Rad-HUMAN have similar trends to, but are larger than, those from the other two models. The differences were due to the racial and anatomical differences in o...

Wang, Wen; Long, Peng-cheng; Hu, Li-qin

2014-01-01

349

Exploring Ground States of Quantum Spin Glasses by Quantum Monte Carlo Method

NASA Astrophysics Data System (ADS)

Quantum phases in frustrated systems are being intensively investigated these days; in particular in the context of quantum spin glass and quantum ANNNI models [1]. Here we study a fully frustrated quantum antiferromagnetic model with disorder superposed on it. The finite temperature properties of sub-lattice decomposed version of this model was already considered earlier [2, 3]. The quantum phase transition and entanglement properties of the full long-range model at zero temperature was studied by Vidal et al. [4]. Here we present some results obtained by applying quantum Monte Carlo technique [6, 13] to the same full long-range model at finite temperature. We observe indications of a very unstable quantum antiferromagnetic (AF) phase (50% spin up, 50% spin down, without any sub-lattice structure) in the LRIAF model, where the antiferromagnetically ordered phase gets destabilized by both infinitesimal thermal (classical) as well as quantum fluctuations (due to tunneling or transverse field) and the system becomes disordered or goes over to the para phase [7].

Chandra, A. K.; Das, A.; Inoue, J.; Chakrabarti, B. K.

350

We propose a new Complex Diffusion Monte Carlo (CDMC) method for the simulation of quantum systems with complex wave functions. In CDMC, both the modulus and the phase of the wave function are simulated, in contrast to other methods. We successfully test CDMC by simulating the ground state of a 2D electron in a magnetic field and of 2D fermions-anyons in a parabolic well.

B. Abdullaev; M. Musakhanov; A. Nakamura

2002-01-01

351

Lower and upper bounds for the absolute free energy by the hypothetical scanning Monte Carlo method. The hypothetical scanning (HS) method is a general approach for calculating the absolute entropy S and free energy F through the analysis of a single configuration. © 2004 American Institute

Meirovitch, Hagai

352

A voxel-based mouse for internal dose calculations using Monte Carlo simulations (MCNP)

NASA Astrophysics Data System (ADS)

Murine models are useful for targeted radiotherapy pre-clinical experiments. These models can help to assess the potential interest of new radiopharmaceuticals. In this study, we developed a voxel-based mouse for dosimetric estimates. A female nude mouse (30 g) was frozen and cut into slices. High-resolution digital photographs were taken directly on the frozen block after each section. Images were segmented manually. Monoenergetic photon or electron sources were simulated using the MCNP4c2 Monte Carlo code for each source organ, in order to give tables of S-factors (in Gy Bq-1 s-1) for all target organs. Results obtained from monoenergetic particles were then used to generate S-factors for several radionuclides of potential interest in targeted radiotherapy. Thirteen source and 25 target regions were considered in this study. For each source region, 16 photon and 16 electron energies were simulated. Absorbed fractions, specific absorbed fractions and S-factors were calculated for 16 radionuclides of interest for targeted radiotherapy. The results obtained generally agree well with data published previously. For electron energies ranging from 0.1 to 2.5 MeV, the self-absorbed fraction varies from 0.98 to 0.376 for the liver, and from 0.89 to 0.04 for the thyroid. Electrons cannot be considered as 'non-penetrating' radiation for energies above 0.5 MeV for mouse organs. This observation can be generalized to radionuclides: for example, the beta self-absorbed fraction for the thyroid was 0.616 for I-131; absorbed fractions for Y-90 for left kidney-to-left kidney and for left kidney-to-spleen were 0.486 and 0.058, respectively. Our voxel-based mouse allowed us to generate a dosimetric database for use in preclinical targeted radiotherapy experiments.

Bitar, A.; Lisbona, A.; Thedrez, P.; Sai Maurel, C.; LeForestier, D.; Barbet, J.; Bardies, M.

2007-02-01

353

NASA Astrophysics Data System (ADS)

Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review on recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited for the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential to fit MaxEnt spatio-temporal models to large neural ensembles.

Nasser, Hassan; Marre, Olivier; Cessac, Bruno

2013-03-01

354

The unitary correlation operator method (UCOM) and the similarity renormalization group theory (SRG) are compared and discussed in the framework of no-core Monte Carlo shell model (MCSM) calculations for $^{3}$H and $^{4}$He. The treatment of spurious center-of-mass motion by Lawson's prescription is performed in the MCSM calculations. The results with both transformed interactions show good suppression of spurious center-of-mass motion with proper values of Lawson's prescription parameter $\beta_{\rm c.m.}$. The UCOM potentials obtain faster convergence of the total energy for the ground state than the SRG potentials in the MCSM calculations, which differs from the no-core shell model (NCSM) calculations. These differences are discussed and analyzed in terms of the truncation schemes in the MCSM and NCSM, as well as the properties of the SRG and UCOM potentials.

Lang Liu

2014-11-22

356

NASA Astrophysics Data System (ADS)

The purpose of this paper was to study the source model for a Monte Carlo simulation of electron beams from a medical linear accelerator. In a prior study, a non-divergent Gaussian source with a full-width at half-maximum (FWHM) of 0.15 cm was successful in predicting relative dose distributions for electron beams with applicators. However, for large fields with the applicator removed, discrepancies were found between measured and calculated profiles, particularly in the shoulder region. In this work, the source was changed to a divergent Gaussian spatial distribution and the FWHM parameter was varied to produce better agreement with measured data. The influence of the FWHM source parameter on profiles was observed at multiple locations in the simulation geometry, including in-air fluence profiles at a 95 cm source-to-surface distance (SSD), percent depth dose profiles and off-axis profiles (OARs) in a water phantom for two SSDs, 80 and 100 cm. For a 6 MeV 40 × 40 cm2 OAR profile, discrepancies in the shoulder region were reduced from 15% to 4% using a FWHM value of 0.45 cm. The optimal FWHM values for the other energies were 0.45 cm for 9 MeV, 0.22 cm for 12 MeV, 0.25 cm for 16 MeV and 0.2 cm for 20 MeV. Although this range of values was larger than measured focal spot sizes reported by other researchers, using the increased FWHM values improved the fit at most locations in the simulation geometry, giving confidence that the model could be used with a variety of SSDs and field sizes.

Weinberg, Rebecca; Antolak, John A.; Starkschall, George; Kudchadker, Rajat J.; White, R. Allen; Hogstrom, Kenneth R.

2009-01-01

357

The paper describes a number of test cases designed to provide verification of different aspects of two numerical methods, the YIX and Monte Carlo methods, for predicting radiative transfer in participating media, and to provide results which could serve as a set of benchmark solutions for comparison with future methods. The three-dimensional media evaluated in this study include a nonhomogeneous radiative property distribution, a hot emitting wall and anisotropic scattering. Results for the solution of surface flux and radiative flux divergence for the unit cube with cold walls produced by the two methods agree within 1%. Computational times are quite different, with the YIX being the faster of the two methods; the CPU times differ by a factor ranging from about 1,200 (case E1) to 40 (case E2). For the solution of emissive power in the same cube with one hot wall, agreement in results depended on the medium optical thickness. For instance, in some of the cases the YIX method suffered from ray effects which reduced the solution accuracy; in these cases the average deviations were approximately 6% in both the surface flux and medium emissive power solutions. In optically thicker media the ray effects are not present, and the YIX produces solutions on a finer grid to reduce numerical error. A unique blocking effect was identified for certain nonhomogeneous media evaluated in the course of this study. Although the media used in this study were created for numerical comparison purposes, with limited physical resemblance to real problems, these blocking effects can have real physical implications. They are presented here to demonstrate the complexity of a seemingly simple problem and to provide more data for comparison with future methods. The blocking effect is typically not observable in homogeneous media.

Hsu, P.F. [Florida Inst. of Tech., Melbourne, FL (United States). Mechanical and Aerospace Engineering Programs; Farmer, J.T. [NASA Langley Research Center, Hampton, VA (United States). Space Systems and Concepts Div.

1995-12-31

358

Experiments with Monte Carlo Othello

In this paper, we report on our experiments with using Monte Carlo simulation (specifically the UCT algorithm) as the basis for an Othello playing program. Monte Carlo methods have been used for other games in the past, most recently and notably in successful Go playing programs. We show that Monte Carlo-based players have potential for Othello, and that evolutionary algorithms

Philip Hingston; Martin Masek

2007-01-01
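The UCT algorithm mentioned in the entry above guides Monte Carlo playouts by selecting, at each tree node, the child maximizing the UCB1 score: mean reward plus an exploration bonus. As an illustration only (this is not the authors' Othello program), the selection step might look like:

```python
import math

def uct_select(children, c=1.4):
    """Pick the child maximizing the UCB1 score used by UCT:
    win rate plus an exploration bonus that shrinks with visits."""
    total = sum(ch["visits"] for ch in children)
    def score(ch):
        if ch["visits"] == 0:
            return float("inf")  # always try unvisited moves first
        return ch["wins"] / ch["visits"] + c * math.sqrt(math.log(total) / ch["visits"])
    return max(children, key=score)

# Hypothetical statistics for three candidate Othello moves.
children = [
    {"move": "d3", "wins": 6, "visits": 10},
    {"move": "c4", "wins": 3, "visits": 4},
    {"move": "f5", "wins": 0, "visits": 0},
]
print(uct_select(children)["move"])  # -> f5 (unvisited move wins selection)
```

In a full player this selection step descends the tree, a random playout is run from the resulting leaf, and the outcome is backed up along the visited path.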

359

Monte Carlo photon benchmark problems

Photon benchmark calculations have been performed to validate the MCNP Monte Carlo computer code. These are compared to both the COG Monte Carlo computer code and either experimental or analytic results. The calculated solutions indicate that the Monte Carlo method, and MCNP and COG in particular, can accurately model a wide range of physical problems.

Whalen, D.J.; Hollowell, D.E.; Hendricks, J.S.

1991-01-01

360

Monte Carlo photon benchmark problems

Photon benchmark calculations have been performed to validate the MCNP Monte Carlo computer code. These are compared to both the COG Monte Carlo computer code and either experimental or analytic results. The calculated solutions indicate that the Monte Carlo method, and MCNP and COG in particular, can accurately model a wide range of physical problems.

Whalen, D.J. (Brigham Young Univ., Provo, UT (USA)); Hollowell, D.E.; Hendricks, J.S. (Los Alamos National Lab., NM (USA))

1990-01-01

361

Monte Carlo and Quasi-Monte Carlo for Statistics

This article reports on areas of statistics where Monte Carlo methods can be used, with special emphasis on areas where Quasi-Monte Carlo ideas may apply. The survey is aimed at exposing good problems in statistics to researchers in Quasi-Monte Carlo.

Owen, Art

362

NASA Astrophysics Data System (ADS)

The focus of this work is to obtain the ground state energy of the non-relativistic spin-independent molecular Hamiltonian without making the Born-Oppenheimer (BO) approximation. In addition to avoiding the BO approximation, this approach avoids imposing separable-rotor and harmonic-oscillator approximations. The ground state solution is obtained variationally using the multi-determinant variational Monte Carlo (VMC) method, where all nuclei and electrons in the molecule are treated quantum mechanically. The multi-determinant VMC provides the right framework for including explicit correlation in a multi-determinant expansion. This talk will discuss the construction of the basis functions and the optimization of the variational coefficients. The electron-nuclear VMC method will be applied to H2, He2 and H2O, and comparisons of the VMC results with other methods will be presented. The results from these calculations will provide the necessary benchmark values needed in the development of other multicomponent methods such as electron-nuclear DFT and electron-nuclear FCIQMC.

Sambasivam, Abhinanden; Elward, Jennifer; Chakraborty, Arindam

2013-03-01

363

Estimates of radiation absorbed doses from radionuclides internally deposited in a pregnant woman and her fetus are very important due to elevated fetal radiosensitivity. This paper reports a set of specific absorbed fractions (SAFs) for use with the dosimetry schema developed by the Society of Nuclear Medicine's Medical Internal Radiation Dose (MIRD) Committee. The calculations were based on three newly constructed pregnant female anatomic models, called RPI-P3, RPI-P6, and RPI-P9, that represent adult females at 3-, 6-, and 9-month gestational periods, respectively. Advanced Boundary REPresentation (BREP) surface-geometry modeling methods were used to create anatomically realistic geometries and organ volumes that were carefully adjusted to agree with the latest ICRP reference values. A Monte Carlo user code, EGS4-VLSI, was used to simulate internal photon emitters ranging from 10 keV to 4 MeV. SAF values were calculated and compared with previous data derived from stylized models of simplified geometries and with a model of a 7.5-month pregnant female developed previously from partial-body CT images. The results show considerable differences between these models for low energy photons, but generally good agreement at higher energies. These differences are caused mainly by different organ shapes and positions. Other factors, such as the organ mass, the source-to-target-organ centroid distance, and the Monte Carlo code used in each study, played lesser roles in the observed differences. Since the SAF values reported in this study are based on models that are anatomically more realistic than previous models, these data are recommended for future applications as standard reference values in internal dosimetry involving pregnant females.

Shi, C. Y.; Xu, X. George; Stabin, Michael G. [Department of Radiation Oncology, University of Texas Health Science Center, San Antonio, Texas 78229 (United States); Nuclear Engineering and Engineering Physics Program, Rensselaer Polytechnic Institute, Room 1-11, NES Building, Tibbits Avenue, Troy, New York 12180 (United States); Department of Radiology and Radiological Sciences, Vanderbilt University, Nashville, Tennessee 37232-2675 (United States)

2008-07-15

364

The computational modeling of medical imaging systems often requires obtaining a large number of simulated images with low statistical uncertainty, which translates into prohibitive computing times. We describe a novel hybrid approach for Monte Carlo simulations that maximizes utilization of CPUs and GPUs in modern workstations. We apply the method to the modeling of indirect x-ray detectors using a new and improved version of the code MANTIS, an open source software tool used for the Monte Carlo simulation of indirect x-ray imagers. We first describe a GPU implementation of the physics and geometry models in fastDETECT2 (the optical transport model) and a serial CPU version of the same code. We discuss its new features, like on-the-fly column geometry and columnar crosstalk, in relation to the MANTIS code, and point out areas where our model provides more flexibility for the modeling of realistic columnar structures in large area detectors. Second, we modify PENELOPE (the open source software package that handles the x-ray and electron transport in MANTIS) to allow direct output of the location and energy deposited during x-ray and electron interactions occurring within the scintillator. This information is then handled by optical transport routines in fastDETECT2. A load balancer dynamically allocates optical transport showers to the GPU and CPU computing cores. Our hybridMANTIS approach achieves a significant speed-up factor of 627 when compared to MANTIS, and of 35 when compared to the same code running only in a CPU instead of a GPU. Using hybridMANTIS, we successfully hide hours of optical transport time by running it in parallel with the x-ray and electron transport, thus shifting the computational bottleneck from optical to x-ray transport. The new code requires much less memory than MANTIS and, as a result, allows us to efficiently simulate large area detectors. PMID:22469917

Sharma, Diksha; Badal, Andreu; Badano, Aldo

2012-04-21

365

Advanced statistical methods. The classical way to look at uncertainties is via standard Monte Carlo; an alternative is to build a statistical approximation to the model, known as an emulator. Such methods have been of interest for application to fairly realistic and complex models.

Tokmakian, Robin

366

The full configuration interaction quantum Monte Carlo (FCIQMC) method, as well as its "initiator" extension (i-FCIQMC), is used to tackle the complex electronic structure of the carbon dimer across the entire dissociation reaction coordinate, as a prototypical example of a strongly correlated molecular system. Various basis sets of increasing size up to the large cc-pVQZ are used, spanning a fully accessible N-electron basis of over 10^12 Slater determinants, and the accuracy of the method is demonstrated in each basis set. Convergence to the FCI limit is achieved in the largest basis with only O(10^7) walkers, within random error bars of a few tenths of a millihartree across the binding curve, and extensive comparisons to FCI, CCSD(T), MRCI, and CEEIS results are made where possible. A detailed exposition of the convergence properties of the FCIQMC methods is provided, considering convergence with elapsed imaginary time, number of walkers and size of the basis. Various symmetries which can be incorporated into the stochastic dynamic, beyond the standard abelian point group symmetry and spin polarisation, are also described. These can have significant benefit to the computational effort of the calculations, as well as the ability to converge to various excited states. The results presented demonstrate a new benchmark accuracy in basis-set energies for systems of this size, significantly improving on previous state of the art estimates. PMID:21895156

Booth, George H; Cleland, Deidre; Thom, Alex J W; Alavi, Ali

2011-08-28

367

Biotic indices have been used to assess biological condition by dividing index scores into condition categories. Historically, the number of categories has been based on professional judgement. Alternatively, statistical methods such as power analysis can be used to determine the ...

368

Monte Carlo Library Least Square (MCLLS) Method for Multiple Radioactive Particle Tracking in PBR

NASA Astrophysics Data System (ADS)

In this work, a new method of radioactive particle tracking is proposed. Accurate Detector Response Functions (DRFs) for NaI detectors were developed from MCNP5 to generate a response library, with a significant speed-up factor of 200. This makes possible the MCLLS method, which is used for locating and tracking a radioactive particle in a modular Pebble Bed Reactor (PBR) by searching for minimum chi-square values. The method was tested and performed well under our laboratory conditions with an array of only six 2" × 2" NaI detectors. The method was introduced in both forward and inverse forms. A single radioactive particle tracking system with three collimated 2" × 2" NaI detectors was used for benchmark purposes.

Wang, Zhijian; Lee, Kyoung; Gardner, Robin

2010-03-01
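The inverse search described in the entry above, locating a particle by comparing measured detector counts against a pre-computed response library and minimizing chi-square, can be sketched generically. All library values, positions, and the Poisson weighting below are invented for illustration; they are not the authors' DRFs or geometry.

```python
import math

# Hypothetical pre-computed response library: expected count rate in each of
# six detectors for a small grid of candidate particle positions (values invented).
library = {
    (0.0, 0.0, 0.0): [120.0, 118.0, 121.0, 119.0, 120.0, 122.0],
    (5.0, 0.0, 0.0): [200.0, 150.0, 100.0,  80.0,  90.0, 110.0],
    (0.0, 5.0, 0.0): [ 90.0, 210.0, 140.0,  95.0,  85.0, 100.0],
}

def locate(measured):
    """Return the library position minimizing Poisson-weighted chi-square
    between measured and expected detector responses."""
    best_pos, best_chi2 = None, math.inf
    for pos, expected in library.items():
        chi2 = sum((m - e) ** 2 / e for m, e in zip(measured, expected))
        if chi2 < best_chi2:
            best_pos, best_chi2 = pos, chi2
    return best_pos

print(locate([198.0, 149.0, 103.0, 82.0, 88.0, 111.0]))  # -> (5.0, 0.0, 0.0)
```

In practice the library would cover a fine spatial grid generated by the detector response functions, and the search could be refined by interpolation between grid points.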

369

A mean field theory of sequential Monte Carlo methods

Keywords: evolutionary learning models; interacting stochastic grid approximations; genetic algorithms; filters (prediction, updating); genetic algorithms (mutation, selection); evolutionary population exploration; cloning, pruning, enrichment, go with the winner, and many others; a pure mathematical point of view.

Del Moral, Pierre

370

ATR WG-MOX Fuel Pellet Burnup Measurement by Monte Carlo - Mass Spectrometric Method

This paper presents a new method for calculating the burnup of nuclear reactor fuel, the MCWO-MS method, and describes its application to an experiment currently in progress to assess the suitability of Mixed-OXide (MOX) fuel containing plutonium derived from excess nuclear weapons material for use in light-water reactors. To demonstrate that the available experience base with reactor-grade mixed uranium-plutonium

G. S. Chang; Gray Sen I

2002-01-01

371

NASA Astrophysics Data System (ADS)

The development of tools for nuclear data uncertainty propagation in lattice calculations is presented. The Total Monte Carlo method and the Generalized Perturbation Theory method are used with the code DRAGON to propagate nuclear data uncertainties in transport calculations. Both methods begin the propagation of uncertainties at the most elementary level of the transport calculation: the Evaluated Nuclear Data File. The developed tools are applied to provide estimates for response uncertainties of a PWR cell as a function of burnup.
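The Total Monte Carlo idea (redraw the nuclear data from their uncertainty distribution for every run, then read the response uncertainty off the spread of the results) can be sketched with a toy model; a slab-transmission formula stands in for the transport code, and all numbers are hypothetical:

```python
import math
import random
import statistics

def transmission(sigma, n_density, thickness):
    """Toy 'transport calculation': analytic transmission through a slab."""
    return math.exp(-sigma * n_density * thickness)

def total_monte_carlo(sigma_mean, rel_unc, n_samples=1000, seed=1):
    """Total Monte Carlo: redraw the nuclear datum for every run and let the
    spread of the responses estimate the propagated uncertainty."""
    rng = random.Random(seed)
    responses = [transmission(rng.gauss(sigma_mean, rel_unc * sigma_mean),
                              n_density=0.08, thickness=10.0)
                 for _ in range(n_samples)]
    return statistics.mean(responses), statistics.stdev(responses)

mean_t, unc_t = total_monte_carlo(sigma_mean=1.0, rel_unc=0.05)
print(f"transmission = {mean_t:.3f} +/- {unc_t:.3f}")
```

In the real workflow each sample is a full DRAGON transport run fed with a perturbed evaluated-data file, but the propagation logic is exactly this loop.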

Sabouri, P.; Bidaud, A.; Dabiran, S.; Lecarpentier, D.; Ferragut, F.

2014-04-01

372

Models of neutron detection with organic scintillation detectors (Sara A. Pozzi, Marek Flaska): (i) detection on an event-by-event basis with the Monte Carlo method and (ii) an analytical approach for neutron slowing down and detection processes. We show that the total neutron pulse height response measured

Pázsit, Imre

373

The purpose of this work is to develop and test a method to estimate the relative and absolute absorbed radiation dose from axial and spiral CT scans using a Monte Carlo approach. Initial testing was done in phantoms and preliminary results were obtained from a standard mathematical anthropomorphic model (MIRD V) and voxelized patient data. To accomplish this we have

G. Jarry; J. J. DeMarco; U. Beifuss; C. H. Cagnon; M. F. McNitt-Gray

2003-01-01

374

NASA Astrophysics Data System (ADS)

Seeking to assess the radiation risk associated with radiological examinations in neonatal intensive care units, thermo-luminescence dosimetry was used for the measurement of entrance surface dose (ESD) in 44 AP chest and 28 AP combined chest-abdominal exposures of a sample of 60 neonates. The mean values of ESD were found to be equal to 44 ± 16 µGy and 43 ± 19 µGy, respectively. The MCNP-4C2 code with a mathematical phantom simulating a neonate and appropriate x-ray energy spectra were employed for the simulation of the AP chest and AP combined chest-abdominal exposures. Equivalent organ dose per unit ESD and energy imparted per unit ESD calculations are presented in tabular form. Combined with ESD measurements, these calculations yield an effective dose of 10.2 ± 3.7 µSv, regardless of sex, and an imparted energy of 18.5 ± 6.7 µJ for the chest radiograph. The corresponding results for the combined chest-abdominal examination are 14.7 ± 7.6 µSv (males)/17.2 ± 7.6 µSv (females) and 29.7 ± 13.2 µJ. The calculated total risk per radiograph was low, ranging between 1.7 and 2.9 per million neonates, per film, and being slightly higher for females. Results of this study are in good agreement with previous studies, especially in view of the diversity met in the calculation methods.

Makri, T.; Yakoumakis, E.; Papadopoulou, D.; Gialousis, G.; Theodoropoulos, V.; Sandilos, P.; Georgiou, E.

2006-10-01

375

Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster-converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with the corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of the mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data set and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings. PMID:24339886
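The quasi-Newton ingredient can be illustrated in one dimension, where Broyden's method reduces to the secant iteration: the zero of a gradient is searched without ever forming the derivative. The variance-component example below (i.i.d. normal observations) is a toy stand-in for the REML gradient, not the paper's mixed-model setting:

```python
def broyden1d(g, x0, x1, tol=1e-10, max_iter=50):
    """Broyden's method in one dimension (the secant iteration): search the
    zero of a gradient g without evaluating its derivative."""
    g0, g1 = g(x0), g(x1)
    for _ in range(max_iter):
        if abs(g1) < tol or g1 == g0:
            return x1
        # approximate g' by the finite difference of the last two iterates
        x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)
        g0, g1 = g1, g(x1)
    return x1

# Toy problem: n i.i.d. N(0, v) observations with sum of squares ss.
# The log-likelihood gradient w.r.t. the variance component v is
#   dl/dv = -n/(2v) + ss/(2v^2),  which is zero at v = ss/n.
n, ss = 50, 110.0
grad = lambda v: -n / (2 * v) + ss / (2 * v ** 2)
print(broyden1d(grad, 1.0, 3.0))  # converges to ss/n = 2.2
```

In the MC variants the gradient itself is a sampled quantity, which is why a derivative-free update like this is attractive: no information matrix has to be estimated at all.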

Matilainen, Kaarina; Mäntysaari, Esa A; Lidauer, Martin H; Strandén, Ismo; Thompson, Robin

2013-01-01

376

Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster-converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with the corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of the mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data set and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings. PMID:24339886

Matilainen, Kaarina; Mantysaari, Esa A.; Lidauer, Martin H.; Stranden, Ismo; Thompson, Robin

2013-01-01

377

Nonpilot-Aided Sequential Monte-Carlo Method to Joint Signal, Phase Noise and

... a multicarrier signal model which includes the redundancy information induced by the cyclic prefix ... caused by the oscillator instabilities [3]–[8]. Indeed, random time-varying phase distortions ... in the time domain [9] or in the frequency domain [10]–[15]. All these methods require the use of pilot

Paris-Sud XI, Université de

378

Heterogeneous phenotypic correlations may be suggestive of underlying changes in genetic covariance among life- history, morphology, and behavioural traits, and their detection is therefore relevant to many biological studies. Two new statistical tests are proposed and their performances compared with existing methods. Of all tests considered, the existing approximate test of homogeneity of product-moment correlations provides the greatest power to

Richard P. Brown

1997-01-01

379

Markov chain Monte Carlo methods for family trees using a parallel processor

A 1024-CPU parallel computer is used to obtain simulated genotypes in the Tristan da Cunha pedigree using random local updating methods. The four-colour theorem is invoked to justify simultaneous updating. Multiple copies of the program are run simultaneously. These results are used to infer the source of the B allele of the ABO blood group that is present in

Russell Bradford; Alun Thomas

1996-01-01

380

A New Monte Carlo Filtering Method for the Diagnosis of Mission-Critical Failures

NASA Technical Reports Server (NTRS)

Testing large-scale systems is expensive in terms of both time and money. Running simulations early in the process is a proven method of finding the design faults likely to lead to critical system failures, but determining the exact cause of those errors is still time-consuming and requires access to a limited number of domain experts. It is desirable to find an automated method that explores the large number of combinations and is able to isolate likely fault points. Treatment learning is a subset of minimal contrast-set learning that, rather than classifying data into distinct categories, focuses on finding the unique factors that lead to a particular classification. That is, it finds the smallest change to the data that causes the largest change in the class distribution. These treatments, when imposed, are able to identify the settings most likely to cause a mission-critical failure. This research benchmarks two treatment learning methods against standard optimization techniques across three complex systems, including two projects from the Robust Software Engineering (RSE) group within the National Aeronautics and Space Administration (NASA) Ames Research Center. It is shown that these treatment learners are faster than traditional methods and produce demonstrably better results.
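A minimal form of treatment learning, restricted to single attribute=value treatments, can be sketched as an exhaustive contrast-set search; the simulation rows and attribute names below are invented for illustration:

```python
def best_treatment(rows, class_key="outcome", target="pass", min_support=2):
    """Exhaustive single-attribute treatment search: find the attribute=value
    restriction that most increases the share of the target class."""
    baseline = sum(r[class_key] == target for r in rows) / len(rows)
    best, best_lift = None, 0.0
    attrs = [k for k in rows[0] if k != class_key]
    for attr in attrs:
        for value in {r[attr] for r in rows}:
            covered = [r for r in rows if r[attr] == value]
            if len(covered) < min_support:
                continue
            share = sum(r[class_key] == target for r in covered) / len(covered)
            if share - baseline > best_lift:
                best, best_lift = (attr, value), share - baseline
    return best, best_lift

rows = [
    {"thruster": "A", "timeout": 5, "outcome": "pass"},
    {"thruster": "A", "timeout": 9, "outcome": "pass"},
    {"thruster": "B", "timeout": 5, "outcome": "fail"},
    {"thruster": "B", "timeout": 9, "outcome": "fail"},
    {"thruster": "A", "timeout": 9, "outcome": "pass"},
    {"thruster": "B", "timeout": 5, "outcome": "pass"},
]
print(best_treatment(rows))  # ('thruster', 'A') isolates the passing runs
```

Real treatment learners such as those benchmarked here search conjunctions of several attribute=value pairs and use more robust scoring, but the contrast-set idea is the same: find the smallest restriction that shifts the class distribution the most.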

Gay, Gregory; Menzies, Tim; Davies, Misty; Gundy-Burlet, Karen

2009-01-01

381

NASA Technical Reports Server (NTRS)

Sampling techniques have been used previously to evaluate Jacobian determinants that occur in classical mechanical descriptions of molecular scattering. These determinants also occur in the quasiclassical approximation. A new technique is described which can be used to evaluate Jacobian determinants which occur in either description. This method is expected to be valuable in the study of reactive scattering using the quasiclassical approximation.

La Budde, R. A.

1972-01-01

382

Testing planetary transit detection methods with grid-based Monte-Carlo simulations

The detection of extrasolar planets by means of the transit method is a rapidly growing field of modern astrophysics. The periodic light dips produced by the passage of a planet in front of its parent star can be used to reveal the presence of the planet itself, to measure its orbital period and relative radius, as well as to perform
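A crude version of a grid-based transit search (fold the light curve at each trial period and score the mean flux deficit of the in-transit points, in the spirit of box least squares) might look like this; the period, depth and noise level are invented:

```python
import random

def make_light_curve(period, depth=0.01, duration=0.1, n=2000, noise=0.002, seed=5):
    """Synthetic light curve: unit flux with periodic box-shaped transit dips."""
    rng = random.Random(seed)
    times = [i * 0.01 for i in range(n)]
    flux = [1.0 - (depth if (t % period) < duration else 0.0) + rng.gauss(0, noise)
            for t in times]
    return times, flux

def box_search(times, flux, trial_periods, duration=0.1):
    """Brute-force period search: fold at each trial period and score the
    mean deficit of the in-transit points relative to the rest."""
    best_p, best_score = None, 0.0
    for p in trial_periods:
        in_t = [f for t, f in zip(times, flux) if (t % p) < duration]
        out_t = [f for t, f in zip(times, flux) if (t % p) >= duration]
        if not in_t or not out_t:
            continue
        score = sum(out_t) / len(out_t) - sum(in_t) / len(in_t)
        if score > best_score:
            best_p, best_score = p, score
    return best_p

times, flux = make_light_curve(period=1.37)
trials = [0.5 + 0.01 * k for k in range(200)]   # trial periods 0.50 .. 2.49
print(box_search(times, flux, trials))          # recovers ~1.37
```

Only at the true period do all dips line up in phase, so the deficit score peaks there; wrong periods and aliases mix in-transit and out-of-transit points and dilute the score.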

A. S. Bonomo; A. F. Lanza

2009-01-01

383

NASA Astrophysics Data System (ADS)

Determining the optical properties of biological tissues in vivo from spectral intensity measurements performed at their surface is still a challenge. Based on the acquired spectroscopic data, the aim is to solve an inverse problem, where the optical parameter values of a forward model are estimated through an optimization procedure on some cost function. In many cases it is an ill-posed problem because of the small number of measurements, errors on the experimental data, and the nature of the forward-model output data, which may be affected by statistical noise in the case of Monte Carlo (MC) simulation or by approximated values at short inter-fibre distances (for the Diffusion Equation Approximation, DEA). In the case of optical biopsy, spatially resolved diffuse reflectance spectroscopy is a simple technique that uses various excitation-to-emission fibre distances to probe tissue at depth. The aim of the present contribution is to study the characteristics of some classically used cost functions and optimization methods (Levenberg-Marquardt algorithm) and how they reach the global minimum when using MC and/or DEA approaches. Several methods of smoothing filters and fitting were tested on the reflectance curves, I(r), gathered from MC simulations. It was found that smoothing the initial data with locally weighted second-degree polynomial regression and then fitting the data with a double exponential decay function decreases the probability of the inverse algorithm converging to local minima close to the initial first guess.

Kholodtsova, Maria N.; Loschenov, Victor B.; Daul, Christian; Blondel, Walter

2014-05-01

384

Phase-sensitive X-ray imaging shows a high sensitivity towards electron density variations, making it well suited for imaging of soft tissue matter. However, there are still open questions about the details of the image formation process. Here, a framework for numerical simulations of phase-sensitive X-ray imaging is presented which takes both particle- and wave-like properties of X-rays into consideration: a split approach combines a Monte Carlo (MC) based sample part with a wave-optics-based propagation part. The framework can be adapted to different phase-sensitive imaging methods and has been validated through comparisons with experiments for grating interferometry and propagation-based imaging. The validation shows that the combination of wave optics and MC has been successfully implemented and yields good agreement between measurements and simulations. This demonstrates that the physical processes relevant for developing a deeper understanding of scattering in the context of phase-sensitive imaging are modelled in a sufficiently accurate manner. The framework can be used for the simulation of phase-sensitive X-ray imaging, for instance of grating interferometry or propagation-based imaging. PMID:24763652

Peter, Silvia; Modregger, Peter; Fix, Michael K.; Volken, Werner; Frei, Daniel; Manser, Peter; Stampanoni, Marco

2014-01-01

385

NASA Astrophysics Data System (ADS)

This study aims to utilize a measurement-based Monte Carlo (MBMC) method to evaluate the accuracy of dose distributions calculated using the Eclipse radiotherapy treatment planning system (TPS) based on the anisotropic analytical algorithm. Dose distributions were calculated for the nasopharyngeal carcinoma (NPC) patients treated with the intensity modulated radiotherapy (IMRT). Ten NPC IMRT plans were evaluated by comparing their dose distributions with those obtained from the in-house MBMC programs for the same CT images and beam geometry. To reconstruct the fluence distribution of the IMRT field, an efficiency map was obtained by dividing the energy fluence of the intensity modulated field by that of the open field, both acquired from an aS1000 electronic portal imaging device. The integrated image of the non-gated mode was used to acquire the full dose distribution delivered during the IMRT treatment. This efficiency map redistributed the particle weightings of the open field phase-space file for IMRT applications. Dose differences were observed in the tumor and air cavity boundary. The mean difference between MBMC and TPS in terms of the planning target volume coverage was 0.6% (range: 0.0-2.3%). The mean difference for the conformity index was 0.01 (range: 0.0-0.01). In conclusion, the MBMC method serves as an independent IMRT dose verification tool in a clinical setting.
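The fluence-reweighting step described above (divide the intensity-modulated fluence by the open-field fluence, then rescale the open-field phase-space particle weights by the resulting map) can be sketched directly; the 2x2 "images" and the (pixel, weight) phase-space tuples are hypothetical:

```python
def efficiency_map(imrt_fluence, open_fluence):
    """Pixel-wise ratio of the intensity-modulated fluence to the open-field
    fluence, as measured e.g. with a portal imaging device."""
    return [[im / op for im, op in zip(rim, rop)]
            for rim, rop in zip(imrt_fluence, open_fluence)]

def reweight(phase_space, eff_map):
    """Multiply each open-field phase-space particle weight by the map value
    at the pixel the particle crosses."""
    return [(i, j, w * eff_map[i][j]) for (i, j, w) in phase_space]

imrt = [[2.0, 4.0], [6.0, 8.0]]
open_f = [[4.0, 4.0], [8.0, 8.0]]
eff = efficiency_map(imrt, open_f)      # [[0.5, 1.0], [0.75, 1.0]]
particles = [(0, 0, 1.0), (1, 0, 2.0)]
print(reweight(particles, eff))         # [(0, 0, 0.5), (1, 0, 1.5)]
```

The reweighted phase space then drives an ordinary MC dose calculation, which is what turns an open-field phase-space file into a per-patient IMRT verification tool.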

Yeh, C. Y.; Lee, C. C.; Chao, T. C.; Lin, M. H.; Lai, P. A.; Liu, F. H.; Tung, C. J.

2014-02-01

386

Heterogeneous phenotypic correlations may be suggestive of underlying changes in genetic covariance among life-history, morphology, and behavioural traits, and their detection is therefore relevant to many biological studies. Two new statistical tests are proposed and their performances compared with existing methods. Of all tests considered, the existing approximate test of homogeneity of product-moment correlations provides the greatest power to detect

Richard P. Brown

1997-01-01

387

Classification using standard statistical methods such as linear discriminant analysis (LDA) or logistic regression (LR) presumes knowledge of group membership prior to the development of an algorithm for prediction. However, in many real-world applications members of the same nominal group might in fact come from different subpopulations on the underlying construct. For example, individuals diagnosed with depression will not all have the same levels of this disorder, though for the purposes of LDA or LR they will be treated in the same manner. The goal of this simulation study was to examine the performance of several methods for group classification in the case where within-group membership was not homogeneous. For example, suppose there are three known groups but, within each group, two unknown classes. Several approaches were compared, including LDA, LR, classification and regression trees (CART), generalized additive models (GAM), and mixture discriminant analysis (MIXDA). Results of the study indicated that CART and mixture discriminant analysis were the most effective tools for situations in which known groups were not homogeneous, whereas LDA, LR, and GAM had the highest rates of misclassification. Implications of these results for theory and practice are discussed. PMID:24904445

Finch, W Holmes; Bolin, Jocelyn H; Kelley, Ken

2014-01-01

388

Classification using standard statistical methods such as linear discriminant analysis (LDA) or logistic regression (LR) presumes knowledge of group membership prior to the development of an algorithm for prediction. However, in many real-world applications members of the same nominal group might in fact come from different subpopulations on the underlying construct. For example, individuals diagnosed with depression will not all have the same levels of this disorder, though for the purposes of LDA or LR they will be treated in the same manner. The goal of this simulation study was to examine the performance of several methods for group classification in the case where within-group membership was not homogeneous. For example, suppose there are three known groups but, within each group, two unknown classes. Several approaches were compared, including LDA, LR, classification and regression trees (CART), generalized additive models (GAM), and mixture discriminant analysis (MIXDA). Results of the study indicated that CART and mixture discriminant analysis were the most effective tools for situations in which known groups were not homogeneous, whereas LDA, LR, and GAM had the highest rates of misclassification. Implications of these results for theory and practice are discussed. PMID:24904445

Finch, W. Holmes; Bolin, Jocelyn H.; Kelley, Ken

2014-01-01

389

A simple calculation method for estimating gamma-ray skyshine dose rates has been developed on the basis of the line beam response function (LBRF). The new data of LBRFs were generated by the single scattering with point kernel technique (single-scattering method). These resulting LBRFs were compared with the EGS4 and MCNP Monte Carlo calculations. The values of the new LBRFs for

Yoshiko HARIMA; Hideo HIRAYAMA; Yukio SAKAMOTO; Naohiro KUROSAWA; Makoto NEMOTO

1997-01-01

390

Simulation of 12C+12C elastic scattering at high energy by using the Monte Carlo method

NASA Astrophysics Data System (ADS)

The Monte Carlo method is used to simulate the 12C+12C reaction process. Taking into account the size of the incident 12C beam spot and the thickness of the 12C target, the distributions of scattered 12C on the MWPC and the CsI detectors at a given detection distance have been simulated. In order to separate elastic scattering from inelastic scattering to the 4.4 MeV excited state, we varied several parameters: the kinetic energy of the incident 12C, the thickness of the 12C target, the ratio of the excited state, the wire spacing of the MWPC, the energy resolution of the CsI detector and the time resolution of the plastic scintillator. From the simulation results, a preliminary experimental configuration can be determined: the beam spot of the incident 12C is φ5 mm, the incident kinetic energy is 200-400 A MeV, the target thickness is 2 mm, the ratio of the excited state is 20%, the flight distance of the scattered 12C is 3 m, the energy resolution of the CsI detectors is 1%, the time resolution of the plastic scintillator is 0.5%, and the size of the CsI detectors is 7 cm × 7 cm; at least 16 CsI detectors are needed to cover the 0° to 5° angular distribution.
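The geometric part of such a simulation (projecting a scattered track onto a detector plane at the flight distance) is easy to sketch; the angular sampling below is uniform purely for illustration, not a physical cross-section:

```python
import math
import random

def hit_position(theta_deg, phi_deg, flight_m=3.0):
    """Project a scattered 12C track onto a detector plane at the flight distance."""
    r = flight_m * math.tan(math.radians(theta_deg))
    return r * math.cos(math.radians(phi_deg)), r * math.sin(math.radians(phi_deg))

rng = random.Random(0)
hits = [hit_position(rng.uniform(0.0, 5.0), rng.uniform(0.0, 360.0))
        for _ in range(10000)]
rmax = max(math.hypot(x, y) for x, y in hits)
print(f"maximum hit radius on the plane: {rmax:.3f} m")  # ~3 m * tan(5 deg) = 0.262 m
```

The 0° to 5° acceptance thus maps to a disc of radius about 26 cm at 3 m, which is the area a layout of 7 cm × 7 cm CsI units has to tile.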

Guo, Chen-Lei; Zhang, Gao-Long; Tanihata, I.; Le, Xiao-Yun

2012-03-01

391

Two-dimensional spatial intensity distributions of diffuse scattering of near-infrared laser radiation from a strongly scattering medium, whose optical properties are close to those of skin, are obtained using Monte Carlo simulation. The medium contains a cylindrical inhomogeneity with the optical properties, close to those of blood. It is shown that stronger absorption and scattering of light by blood compared to the surrounding medium leads to the fact that the intensity of radiation diffusely reflected from the surface of the medium under study and registered at its surface has a local minimum directly above the cylindrical inhomogeneity. This specific feature makes the method of spatially-resolved reflectometry potentially applicable for imaging blood vessels and determining their sizes. It is also shown that blurring of the vessel image increases almost linearly with increasing vessel embedment depth. This relation may be used to determine the depth of embedment provided that the optical properties of the scattering media are known. The optimal position of the sources and detectors of radiation, providing the best imaging of the vessel under study, is determined. (biophotonics)

Bykov, A V; Priezzhev, A V; Myllylä, Risto A

2011-06-30

392

We implemented the simplified Monte Carlo (SMC) method on graphics processing unit (GPU) architecture under the compute-unified device architecture platform developed by NVIDIA. The GPU-based SMC was clinically applied for four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to computation time and discrepancy. In the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar, within statistical errors. The GPU-based SMC showed 12.30-16.00 times faster performance than the CPU-based SMC. The computation time per beam arrangement using the GPU-based SMC for the clinical cases ranged from 9 to 67 s. The results demonstrate the successful application of the GPU-based SMC to clinical proton treatment planning. PMID:22036894

Kohno, R; Hotta, K; Nishioka, S; Matsubara, K; Tansho, R; Suzuki, T

2011-11-21

393

NASA Astrophysics Data System (ADS)

The aim of the present study was to propose a comprehensive method for PET scanner image quality assessment, by the simulation of a thin layer chromatography (TLC) flood source with a previously validated Monte Carlo (MC) model. The model was developed using the GATE MC package, and reconstructed images were obtained using the STIR software with cluster computing. The PET scanner simulated was the GE Discovery-ST. The TLC source was immersed in an 18F-FDG bath solution (1 MBq) in order to assess image quality. The influence of different scintillating crystals on the PET scanner's image quality, in terms of the MTF, the NNPS and the DQE, was investigated. Images were reconstructed by the commonly used FBP2D, FBP3DRP and the OSMAPOSL (15 subsets, 3 iterations) reprojection algorithms. The PET scanner configuration incorporating LuAP crystals provided the optimum MTF values in both 2D and 3D FBP, whereas the corresponding configuration with BGO crystals was found to have the higher MTF values after OSMAPOSL. The scanner incorporating BGO crystals was also found to have the lowest noise levels and the highest DQE values after all image reconstruction algorithms. The plane source can also be useful for the experimental image quality assessment of PET and SPECT scanners in clinical practice.

Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Kalyvas, N. I.; Valais, I. G.; Kandarakis, I. S.; Panayiotakis, G. S.

2014-03-01

394

Towards Monte Carlo Simulations on Large Nuclei (August 2014): a published method of performing variational Monte Carlo calculations on neutron matter comprised of up

Washington at Seattle, University of - Department of Physics, Electroweak Interaction Research Group

395

AN ASSESSMENT OF MCNP WEIGHT WINDOWS

The weight window variance reduction method in the general-purpose Monte Carlo N-Particle radiation transport code MCNP has recently been rewritten. In particular, it is now possible to generate weight window importance functions on a superimposed mesh, eliminating the need to subdivide geometries for variance reduction purposes. Our assessment addresses the following questions: (1) Does the new MCNP4C treatment utilize weight windows as well as the former MCNP4B treatment? (2) Does the new MCNP4C weight window generator generate importance functions as well as MCNP4B? (3) How do superimposed mesh weight windows compare to cell-based weight windows? (4) What are the shortcomings of the new MCNP4C weight window generator? Our assessment was carried out with five neutron and photon shielding problems chosen for their demanding variance reduction requirements. The problems were an oil well logging problem, the Oak Ridge fusion shielding benchmark problem, a photon skyshine problem, an air-over-ground problem, and a sample problem for variance reduction.
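The weight-window mechanism itself (split heavy particles, play Russian roulette with light ones) can be sketched in a few lines; the window bounds and the splitting cap are illustrative choices, not MCNP's actual parameters:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Return the list of particle weights surviving a weight-window check:
    split above the window, Russian roulette below it, untouched inside."""
    if weight > w_high:
        n = min(int(weight / w_high) + 1, 5)   # cap the splitting ratio (illustrative)
        return [weight / n] * n
    if weight < w_low:
        w_survive = (w_low + w_high) / 2.0     # survivors come out at the window centre
        if rng.random() < weight / w_survive:  # survival probability keeps the mean weight
            return [w_survive]
        return []                              # rouletted away
    return [weight]

print(apply_weight_window(10.0, 0.5, 2.0))  # -> [2.0, 2.0, 2.0, 2.0, 2.0]
print(apply_weight_window(1.0, 0.5, 2.0))   # -> [1.0]
```

Both branches preserve the expected weight, so the estimate stays unbiased; the importance function discussed in the assessment is what sets the window bounds per cell or mesh voxel.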

J. S. HENDRICKS; C. N. CULBERTSON

2000-01-01

396

Quantum Gibbs ensemble Monte Carlo

NASA Astrophysics Data System (ADS)

We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of 4He in two dimensions.

Fantoni, Riccardo; Moroni, Saverio

2014-09-01

397

The tracing algorithm that is implemented in the geometrical module of the Monte Carlo transport code MCU is applied to calculate the volume fractions of original materials in the spatial cells of a mesh that overlays the problem geometry. In this way the 3D combinatorial-geometry presentation of the problem geometry used by the MCU code is transformed into user-defined 2D or 3D bit-mapped ones. Next, these data are used in the volume fraction (VF) method to approximate the problem geometry by introducing additional mixtures for spatial cells where a few original materials are included. We have found that, in solving realistic 2D and 3D core problems, sufficiently fast convergence of the VF method takes place if the spatial mesh is refined. The proposed implementation of the VF method thus seems to be a suitable geometry interface between Monte Carlo and S_N transport codes. (authors)
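The volume-fraction computation can be approximated by point sampling instead of ray tracing: classify random points in each mesh cell by the combinatorial geometry and count. The disc-in-a-cell geometry below is a hypothetical stand-in for the MCU geometry module:

```python
import random

def volume_fractions(material_at, cell_bounds, n_points=20000, seed=7):
    """Estimate, by uniform point sampling, the fraction of a 2D mesh cell
    occupied by each original material of the combinatorial geometry."""
    (x0, x1), (y0, y1) = cell_bounds
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_points):
        m = material_at(rng.uniform(x0, x1), rng.uniform(y0, y1))
        counts[m] = counts.get(m, 0) + 1
    return {m: c / n_points for m, c in counts.items()}

# Hypothetical geometry: a fuel disc of radius 0.4 centred in a 1x1 cell
material_at = lambda x, y: "fuel" if (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.4 ** 2 else "water"
fracs = volume_fractions(material_at, ((0.0, 1.0), (0.0, 1.0)))
print(fracs)   # fuel fraction ~ pi * 0.4^2 ~ 0.503
```

Each cell's fractions then define the mixture handed to the S_N solver, which is exactly the interface role the abstract describes; refining the mesh shrinks both the sampling and the homogenization error.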

Gurevich, M. I.; Oleynik, D. S. [RRC Kurchatov Inst., Kurchatov Sq., 1, 123182, Moscow (Russian Federation); Russkov, A. A.; Voloschenko, A. M. [Keldysh Inst. of Applied Mathematics, Miusskaya Sq., 4, 125047, Moscow (Russian Federation)

2006-07-01

398

Modeling kinetics of a large-scale fed-batch CHO cell culture by Markov chain Monte Carlo method.

The Markov chain Monte Carlo (MCMC) method was applied to model the kinetics of a fed-batch Chinese hamster ovary cell culture process in 5,000-L bioreactors. The kinetic model consists of six differential equations, which describe the dynamics of viable cell density and the concentrations of glucose, glutamine, ammonia, lactate, and the antibody fusion protein B1 (B1). The kinetic model has 18 parameters, six of which were calculated from the cell culture data, whereas the other 12 were estimated by an MCMC method from a training data set that comprised seven cell culture runs. The model was confirmed on two validation data sets that represented a perturbation of the cell culture condition. The agreement between the predicted and measured values of both validation data sets may indicate high reliability of the model estimates. The kinetic model uniquely incorporated the ammonia removal and the exponential function of B1 protein concentration. The model indicated that ammonia and lactate play critical roles in cell growth and that low concentrations of glucose (0.17 mM) and glutamine (0.09 mM) in the cell culture medium may help reduce ammonia and lactate production. The model demonstrated that 83% of the glucose consumed was used for cell maintenance during the late phase of the cell cultures, whereas the maintenance coefficient for glutamine was negligible. Finally, the kinetic model suggests that it is critical for B1 production to sustain a high number of viable cells. The MCMC methodology may be a useful tool for modeling the kinetics of a fed-batch mammalian cell culture process. PMID:19834967
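A minimal MCMC calibration of a kinetic model can be sketched with a single-parameter logistic stand-in for the paper's six-ODE system; the Metropolis chain samples the growth rate against synthetic data, and all numbers are invented:

```python
import math
import random

def simulate_vcd(mu, days=10, x0=0.5, x_max=12.0, steps_per_day=10):
    """Euler integration of a toy logistic viable-cell-density model
    dX/dt = mu * X * (1 - X/x_max); a stand-in for the paper's 6-ODE system."""
    dt = 1.0 / steps_per_day
    x, traj = x0, []
    for step in range(days * steps_per_day):
        x += dt * mu * x * (1.0 - x / x_max)
        if (step + 1) % steps_per_day == 0:
            traj.append(x)               # record one value per culture day
    return traj

def log_likelihood(mu, data, sigma=0.3):
    model = simulate_vcd(mu)
    return -sum((d - m) ** 2 for d, m in zip(data, model)) / (2 * sigma ** 2)

def metropolis(data, n_iter=3000, step=0.05, seed=3):
    rng = random.Random(seed)
    mu, ll = 0.5, log_likelihood(0.5, data)
    samples = []
    for _ in range(n_iter):
        prop = mu + rng.gauss(0.0, step)
        if prop > 0:
            ll_prop = log_likelihood(prop, data)
            if math.log(rng.random()) < ll_prop - ll:   # Metropolis acceptance
                mu, ll = prop, ll_prop
        samples.append(mu)
    return samples

data = simulate_vcd(0.8)                 # synthetic "measured" daily densities
samples = metropolis(data)
post_mean = sum(samples[1000:]) / len(samples[1000:])
print(f"posterior mean growth rate ~ {post_mean:.2f}")  # close to the true 0.8
```

The real model runs the same loop over a 12-dimensional parameter vector and a six-equation ODE solve per likelihood evaluation; the posterior spread of the chain is what supplies the parameter uncertainties.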

Xing, Zizhuo; Bishop, Nikki; Leister, Kirk; Li, Zheng Jian

2010-01-01

399

Considerable interest has recently been directed at investigating the composition of individual particles by X-ray fluorescence microanalysis. These micro-samples come from diverse sources: space dust, air and ash, and soil, as well as the environment, and typically take the shape of a sphere or an oval. In the analysis of such samples, geometrical effects caused by differing sizes and shapes influence the accuracy of the results; this is a consequence of the matrix effect. For these samples it is not possible to solve analytically the equation accounting for the absorption of X-rays. Hence, a way out is to approximate the real sample shape with another one, or to use the Monte Carlo (MC) simulation method. In the current work the authors used iterative MC simulation to assess the elemental composition of individual particles. A set of glass micro-spheres made of NIST K3089 material of known chemical composition, with diameters in the range between 25 and 45 µm, was investigated. The microspheres were scanned with X-ray tube primary radiation. Results of the MC simulation were compared with those of some analytical approaches based on particle shape approximation. The investigation showed that the low-Z elements (Si, Ca, Ti) were the most sensitive to changes in particle shape and size. For high-Z elements (Fe-Pb) the concentrations were nearly equal regardless of the method used. However, for all the elements considered, the results of the MC simulation were more accurate than those of the analytical relationships used for comparison.
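The core of such a simulation, the Monte Carlo estimate of self-absorption in a spherical particle, can be sketched as follows; the attenuation coefficient and the fixed exit direction are simplifying assumptions made for illustration:

```python
import math
import random

def self_absorption(mu_cm, radius_cm, n=20000, seed=11):
    """MC estimate of the self-absorption factor of a spherical particle:
    average exp(-mu * l) over uniform emission points, with l the path
    length to the surface along a fixed exit direction (+z)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # rejection-sample a uniform emission point inside the sphere
        while True:
            x, y, z = (rng.uniform(-radius_cm, radius_cm) for _ in range(3))
            if x * x + y * y + z * z <= radius_cm ** 2:
                break
        path = math.sqrt(radius_cm ** 2 - x * x - y * y) - z   # distance to surface along +z
        total += math.exp(-mu_cm * path)
    return total / n

# 40 um diameter sphere, hypothetical attenuation coefficient 500 /cm
print(f"transmitted fraction: {self_absorption(500.0, 0.002):.3f}")
```

Iterating this estimate while updating the assumed composition (and hence mu for each fluorescence line) is the shape-independent correction that the analytical approximations can only provide for idealized geometries.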

Czyzycki, Mateusz; Lankosz, Marek [AGH-University of Science and Technology, Faculty of Physics and Applied Computer Science, Al. Mickiewicza 30, 30-059 Krakow (Poland); Bielewski, Marek [AGH-University of Science and Technology, Faculty of Physics and Applied Computer Science, Al. Mickiewicza 30, 30-059 Krakow (Poland); European Commission, Joint Research Centre, Institute for Transuranium Elements, P.O. Box 2340, D-76125 Karlsruhe (Germany)

2010-04-06

400

Recent upgrades of the MCNPX Monte Carlo code include transport of heavy ions. We employed the new code to simulate the energy and dose distributions produced by carbon beams in a rabbit's head, in and around a brain tumor. The work was part of our experimental technique of interlaced carbon microbeams, which uses two 90 deg. arrays of parallel, thin planes of carbon beams (microbeams) interlacing to produce a solid beam at the target. A similar version of the method was earlier developed with synchrotron-generated x-ray microbeams. We first simulated the Bragg peak in high-density polyethylene and other materials, where we could compare the calculated carbon energy deposition to the measured data produced at the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL). The results showed that the new MCNPX code gives a reasonable account of the carbon beam's dose up to ~200 MeV/nucleon beam energy. At higher energies, which were not relevant to our project, the model failed to reproduce the growing nuclear-breakup tail beyond the Bragg peak. In our model calculations we determined the dose distribution along the beam path, including the angular straggling of the microbeams, and used the data to determine the optimal beam spacing in the array for producing adequate beam interlacing at the target. We also determined, for the purpose of Bragg-peak spreading at the target, the relative beam intensities of the consecutive exposures with stepwise lower beam energies, and simulated the resulting dose distribution in the spread-out Bragg peak. The details of the simulation methods used and the results obtained are presented.

Dioszegi, I. [Nonproliferation and National Security Department, Brookhaven National Laboratory, Upton, New York 11973 (United States); Rusek, A.; Chiang, I. H. [NASA Space Radiation Laboratory, Brookhaven National Laboratory, Upton, NY 11973 (United States); Dane, B. R. [Medical School, State University of New York at Stony Brook, Stony Brook, NY 11794 (United States); Meek, A. G. [Department of Radiation Oncology, State University of New York at Stony Brook, Stony Brook, NY 11794 (United States); Dilmanian, F. A. [Department of Radiation Oncology, State University of New York at Stony Brook, Stony Brook, NY 11794 (United States); Medical Department, Brookhaven National Laboratory, Upton, NY 11973 (United States)

2011-06-01

401

NASA Astrophysics Data System (ADS)

X-ray absorption and X-ray fluorescence properties of medical imaging scintillating screens were studied by Monte Carlo methods as a function of the incident photon energy and screen-coating thickness. The scintillating materials examined were Gd2O2S (GOS), Gd2SiO5 (GSO), YAlO3 (YAP), Y3Al5O12 (YAG), LuSiO5 (LSO), LuAlO3 (LuAP) and ZnS. Monoenergetic photon exposures were modeled in the range from 10 to 100 keV. Coating thicknesses of the investigated scintillating screens ranged up to 200 mg/cm2. Results indicated that X-ray absorption and X-ray fluorescence are affected by both the incident photon energy and the screen's coating thickness. Regarding incident photon energy, absorption and fluorescence were found to change very sharply near the K edge of the heaviest element in the screen's scintillating material. Regarding coating thickness, thicker screens exhibited higher X-ray absorption and X-ray fluorescence. Results also indicated that a significant fraction of the generated X-ray fluorescent quanta escape from the scintillating screen; this fraction was found to increase with the screen's coating thickness. In the energy range studied, most of the incident photons were found to be absorbed via a single photoelectric interaction. As a result, the reabsorption of scattered radiation was found to be of rather minor importance, although it too was found to increase with the screen's coating thickness. Differences in X-ray absorption and X-ray fluorescence were found among the various scintillators studied. The LSO scintillator was found to be the most attractive material for use in many X-ray imaging applications, exhibiting the best absorption properties over the largest part of the energy range studied. Y-based scintillators also showed significant absorption performance within the low energy range.
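
The thickness trend noted here follows directly from Beer-Lambert attenuation; a minimal sketch (the mass attenuation coefficient is an assumed placeholder, not a value from the study, and fluorescence escape is ignored):

```python
import math

def absorbed_fraction(mu_rho, mass_thickness):
    """Beer-Lambert absorbed fraction for a screen of mass thickness
    `mass_thickness` (g/cm^2) and mass attenuation coefficient
    `mu_rho` (cm^2/g): 1 - exp(-mu_rho * t)."""
    return 1.0 - math.exp(-mu_rho * mass_thickness)

# thicker coatings absorb a larger fraction of the incident photons
for t in (0.05, 0.10, 0.20):              # 50, 100, 200 mg/cm2
    print(t, absorbed_fraction(10.0, t))  # mu/rho = 10 cm^2/g, assumed
```

The monotonic increase with coating thickness matches the qualitative behavior the abstract reports; the jump at a K edge would enter through an energy-dependent mu_rho.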

Nikolopoulos, D.; Kandarakis, I.; Cavouras, D.; Valais, I.; Linardatos, D.; Michail, C.; David, S.; Gaitanis, A.; Nomicos, C.; Louizi, A.

2006-09-01

402

NASA Astrophysics Data System (ADS)

An electron-photon coupled Monte Carlo code ARCHER -

Su, Lin; Du, Xining; Liu, Tianyu; Xu, X. George

2014-06-01

403

NASA Astrophysics Data System (ADS)

We prove a limit theorem for the lifetime distribution of reliability systems whose lifetime is a Pascal convolution of independent and identically distributed random variables. We show that, under some conditions, such distributions may be approximated by Erlang distributions. As a consequence, survival functions for such systems may be approximated by Erlang survival functions. Using the Monte Carlo method, we experimentally confirm the theoretical results of our theorem.
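
The kind of Monte Carlo check described can be sketched by comparing a simulated survival estimate against the closed-form Erlang survival function (a toy illustration in which the lifetime is an exact sum of k exponential stages, not the paper's actual system model):

```python
import math
import random

def erlang_survival(k, lam, t):
    # Closed-form Erlang survival: P(T > t) = exp(-lam*t) * sum_{n<k} (lam*t)^n / n!
    return math.exp(-lam * t) * sum((lam * t) ** n / math.factorial(n)
                                    for n in range(k))

def mc_survival(k, lam, t, n_samples=100_000, seed=1):
    # Monte Carlo estimate: draw a lifetime as the sum of k iid exponential
    # stage lifetimes and count how often it exceeds t.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if sum(rng.expovariate(lam) for _ in range(k)) > t)
    return hits / n_samples

print(erlang_survival(3, 0.5, 5.0))  # about 0.544
print(mc_survival(3, 0.5, 5.0))      # close to the exact value
```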

Gheorghe, Munteanu Bogdan; Alexei, Leahu; Sergiu, Cataranciuc

2013-09-01

404

We develop a formalism and present an algorithm for optimization of the trial wave-function used in fixed-node diffusion quantum Monte Carlo (DMC) methods. The formalism is based on the DMC mixed estimator of the ground state probability density. We take advantage of a basic property of the walker configuration distribution generated in a DMC calculation, to (i) project-out a multi-determinant

F A Reboredo; R Q Hood; P C Kent

2009-01-01

405

We have described in part I of this work the theoretical basis of a quantum Monte Carlo method based on the use of a pure diffusion process and of the so-called full generalized Feynman–Kac (FGFK) formula. In this second part, we present a set of applications (one-dimensional oscillator, helium-like systems, hydrogen molecule) with the purpose of illustrating in a systematic

Michel Caffarel; Pierre Claverie

1988-01-01

406

ERIC Educational Resources Information Center

Monte Carlo methods are used to simulate activities in baseball such as a team's "hot streak" and a hitter's "batting slump." Student participation in such simulations is viewed as a useful method of giving pupils a better understanding of the probability concepts involved. (MP)
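
A classroom slump simulation of the sort described might look like the following (the batting average, season length, and slump definition are illustrative assumptions, not from the article):

```python
import random

def slump_probability(avg=0.300, at_bats=500, slump_len=20,
                      n_seasons=5_000, seed=7):
    """Monte Carlo estimate of the chance that a hitter has at least one
    hitless run of `slump_len` consecutive at-bats in a season.
    Toy model: independent at-bats with a constant hit probability."""
    rng = random.Random(seed)
    seasons_with_slump = 0
    for _ in range(n_seasons):
        run = 0                      # current streak of hitless at-bats
        for _ in range(at_bats):
            if rng.random() < avg:
                run = 0              # a hit resets the streak
            else:
                run += 1
                if run >= slump_len:
                    seasons_with_slump += 1
                    break
    return seasons_with_slump / n_seasons

print(slump_probability())  # long hitless streaks are more common than intuition suggests
```

The pedagogical point matches the abstract: students see that apparent "slumps" arise naturally from pure chance.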

Houser, Larry L.

1981-01-01

407

A general method is presented for patient-specific 3-dimensional absorbed dose calculations based on quantitative SPECT activity measurements. Methods: The computational scheme includes a method for registration of the CT image to the SPECT image and position-dependent compensation for attenuation, scatter, and collimator detector response performed as part of an iterative reconstruction method. A method for conversion of the measured activity distribution to a 3-dimensional absorbed dose distribution, based on the EGS4 (electron-gamma shower, version 4) Monte Carlo code, is also included. The accuracy of the activity quantification and the absorbed dose calculation is evaluated on the basis of realistic Monte Carlo-simulated SPECT data, using the SIMIND (simulation of imaging nuclear detectors) program and a voxel-based computer phantom. CT images are obtained from the computer phantom, and realistic patient movements are added relative to the SPECT image. The SPECT-based activity concentration and absorbed dose distributions are compared with the true ones. Results: Correction could be made for object scatter, photon attenuation, and scatter penetration in the collimator. However, inaccuracies were imposed by the limited spatial resolution of the SPECT system, for which the collimator response correction did not fully compensate. Conclusion: The presented method includes compensation for most parameters degrading the quantitative image information. The compensation methods are based on physical models and therefore are generally applicable to other radionuclides. The proposed evaluation methodology may be used as a basis for future intercomparison of different methods. PMID:12163637

Ljungberg, Michael; Sjogreen, Katarina; Liu, Xiaowei; Frey, Eric; Dewaraja, Yuni; Strand, Sven-Erik

2009-01-01

408

Numerical simulation of the power varying with time as detected by a fission chamber was performed using the continuous-energy Monte Carlo code MCNP4B, to understand the time delay of neutron detection in power burst experiments arranged for systems with a water reflector as well as without a reflector in the Transient Experiment Critical Facility (TRACY). The simulation indicated that power generation in the core during

Hiroshi YANAGISAWA; Akio OHNO

2002-01-01

409

Probabilities from canonical Monte Carlo simulations of partial systems (Ronald P. White and Hagai Meirovitch). As in the preceding paper (Paper I), a probability Pi approximating the Boltzmann probability of system configuration i is obtained; configurations are visited layer-by-layer, line-by-line. At each step a transition probability (TP) is calculated

Meirovitch, Hagai

410

We suggest an exact approach to help remedy the fermion sign problem in diffusion quantum Monte Carlo simulations. The approach is based on an explicit suppression of symmetric modes in the Schrödinger equation by means of a modified stochastic diffusion process (antisymmetric diffusion process). We introduce this algorithm and illustrate it on potential models in one dimension (1D) and show

Yuriy Mishchenko

2006-01-01

411

The multiscale modeling scheme encompasses models from the atomistic to the continuum scale. Phenomena at the mesoscale are typically simulated using reaction rate theory, Monte Carlo, or phase field models. These mesoscale models are appropriate for application to problems that involve intermediate length scales, and timescales from those characteristic of diffusion to long-term microstructural evolution (~µs to years). Although the

Roger E Stoller; Stanislav I Golubov; C. S. Becquart

2007-01-01

412

NASA Astrophysics Data System (ADS)

Materials in a nuclear reactor are activated by neutron irradiation. When they are withdrawn from the reactor and placed in storage, the potential dose received by workers in the surrounding area must be taken into account. In previous papers, activation of control rods in an NPP with a BWR and dose rates around the storage pool have been estimated using the MCNP5 code based on the Monte Carlo method. Models were validated by comparing simulation results with experimental measurements. As the activation is mostly produced in the stainless steel components of control rods, the activation model can also be validated by means of experimental measurements on a stainless steel sample after it has been irradiated in a reactor. This has been done in the Portuguese Research Reactor at Instituto Tecnológico e Nuclear. The neutron activation has been calculated by two different methods, Monte Carlo and CINDER'90, and the results have been compared. After irradiation, dose rates at the water surface of the reactor pool were measured, with the irradiated stainless steel sample submerged at different positions under water. Experimental measurements have been compared with simulation results using Monte Carlo. The comparison shows a good agreement, confirming the validation of the models.

Lázaro, Ignacio; Ródenas, José; Marques, José G.; Gallardo, Sergio

2014-06-01

413

Monte Carlo simulations in SPET and PET

Monte Carlo methods are extensively used in Nuclear Medicine to tackle a variety of problems that are difficult to study by an experimental or analytical approach. A review of the most recent tools allowing application of Monte Carlo methods in single photon emission tomography (SPET) and positron emission tomography (PET) is presented. To help potential Monte Carlo users choose

I. Buvat; I. Castiglioni

2002-01-01

414

Monte Carlo Go Using Previous Simulation Results

Research on Go using the Monte Carlo method has been a hot topic in recent years. In particular, Monte Carlo Tree Search algorithms such as UCT have made great contributions to the development of computer Go. When the Monte Carlo method was used for Go, the previous simulation results were not usually stored. In this paper, we suggest a new idea of

Takuma TOYODA; Yoshiyuki KOTANI

2010-01-01

415

Thermodynamic Scaling Gibbs Ensemble Monte Carlo

Thermodynamic Scaling Gibbs Ensemble Monte Carlo: a new method for determination of phase coexistence. We combine Valleau's thermodynamic scaling Monte Carlo concept with Gibbs ensemble Monte Carlo simulations. There has been significant recent progress in molecular simulation methods.

416

The process of low pressure organic vapor phase deposition (LP-OVPD) controls the growth of amorphous organic thin films, where the source gases (Alq3 molecules, etc.) are introduced into a hot-wall reactor via an injection barrel using an inert carrier gas (N2). It is possible to control well substrate properties such as dopant concentration, deposition rate, and thickness uniformity of the thin film. In this paper, we present LP-OVPD simulation results using direct simulation Monte Carlo-Neutrals (Particle-PLUS neutral module), which is commercial software adopting the direct simulation Monte Carlo method. By properly estimating the evaporation rate with experimental vaporization enthalpies, the calculated deposition rates on the substrate agree well with the experimental results, which depend on carrier gas flow rate and source cell temperature. PMID:23674843

Wada, Takao; Ueda, Noriaki

2013-01-01

417

The process of low pressure organic vapor phase deposition (LP-OVPD) controls the growth of amorphous organic thin films, where the source gases (Alq3 molecules, etc.) are introduced into a hot-wall reactor via an injection barrel using an inert carrier gas (N2). It is possible to control well substrate properties such as dopant concentration, deposition rate, and thickness uniformity of the thin film. In this paper, we present LP-OVPD simulation results using direct simulation Monte Carlo-Neutrals (Particle-PLUS neutral module), which is commercial software adopting the direct simulation Monte Carlo method. By properly estimating the evaporation rate with experimental vaporization enthalpies, the calculated deposition rates on the substrate agree well with the experimental results, which depend on carrier gas flow rate and source cell temperature. PMID:23674843

Wada, Takao; Ueda, Noriaki

2013-04-21

418

In this paper, we model the reflectance of the lunar regolith by a new method combining Monte Carlo ray tracing and Hapke's model. The existing modeling methods exploit either a radiative transfer model or a geometric optical model. However, the measured data from an Interference Imaging spectrometer (IIM) on an orbiter were affected not only by the composition of minerals but also by environmental factors. These factors cannot be well addressed by a single model alone. Our method implemented Monte Carlo ray tracing for simulating large-scale effects such as the reflection of topography of the lunar soil, and Hapke's model for calculating the reflection intensity of the internal scattering effects of particles of the lunar soil. Therefore, both the large-scale and microscale effects are considered in our method, providing a more accurate modeling of the reflectance of the lunar regolith. Simulation results using the Lunar Soil Characterization Consortium (LSCC) data and the Chang'E-1 elevation map show that our method is effective and useful. We have also applied our method to Chang'E-1 IIM data to remove the influence of lunar topography on the reflectance of the lunar soil and to generate more realistic visualizations of the lunar surface. PMID:24526892

Wu, Yunzhao; Tang, Zesheng

2014-01-01

419

In this paper, we model the reflectance of the lunar regolith by a new method combining Monte Carlo ray tracing and Hapke's model. The existing modeling methods exploit either a radiative transfer model or a geometric optical model. However, the measured data from an Interference Imaging spectrometer (IIM) on an orbiter were affected not only by the composition of minerals but also by environmental factors. These factors cannot be well addressed by a single model alone. Our method implemented Monte Carlo ray tracing for simulating large-scale effects such as the reflection of topography of the lunar soil, and Hapke's model for calculating the reflection intensity of the internal scattering effects of particles of the lunar soil. Therefore, both the large-scale and microscale effects are considered in our method, providing a more accurate modeling of the reflectance of the lunar regolith. Simulation results using the Lunar Soil Characterization Consortium (LSCC) data and the Chang'E-1 elevation map show that our method is effective and useful. We have also applied our method to Chang'E-1 IIM data to remove the influence of lunar topography on the reflectance of the lunar soil and to generate more realistic visualizations of the lunar surface. PMID:24526892

Wong, Un-Hong; Wu, Yunzhao; Wong, Hon-Cheng; Liang, Yanyan; Tang, Zesheng

2014-01-01

420

Monte Carlo data association for multiple target tracking Rickard Karlsson

Monte Carlo data association for multiple target tracking. Rickard Karlsson, Dept. of Electrical Engineering. These estimation methods may lead to non-optimal solutions. The sequential Monte Carlo methods, or particle filters ... choose the number of particles. 2. Sequential Monte Carlo methods: Monte Carlo techniques have been

Gustafsson, Fredrik

421

Monte Carlo data association for multiple target tracking Rickard Karlsson

Monte Carlo data association for multiple target tracking. Rickard Karlsson, Dept. of Electrical Engineering. These estimation methods may lead to non-optimal solutions. The sequential Monte Carlo methods, or particle filters ... choose the number of particles. 2. Sequential Monte Carlo methods: Monte Carlo techniques have been
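
A minimal bootstrap particle filter, the basic sequential Monte Carlo method referred to here, can be sketched as follows (the 1-D random-walk state model and noise levels are illustrative, not from the paper):

```python
import math
import random

def bootstrap_filter(obs, n_particles=500, q=1.0, r=1.0, seed=3):
    """Bootstrap (sequential Monte Carlo) filter for a 1-D model:
    x_t = x_{t-1} + N(0, q),  y_t = x_t + N(0, r).
    Returns the filtered posterior mean at each time step."""
    rng = random.Random(seed)
    particles = [0.0] * n_particles
    means = []
    for y in obs:
        # 1) propagate each particle through the motion model
        particles = [x + rng.gauss(0.0, math.sqrt(q)) for x in particles]
        # 2) weight by the Gaussian observation likelihood
        weights = [math.exp(-(y - x) ** 2 / (2.0 * r)) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        means.append(sum(w * x for w, x in zip(weights, particles)))
        # 3) multinomial resampling to fight weight degeneracy
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return means

means = bootstrap_filter([float(t) for t in range(30)])
print(means[-1])  # final estimate tracks the last observation
```

The number of particles trades accuracy against cost, which is exactly the design question the fragment above alludes to.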

Gustafsson, Fredrik

422

NASA Astrophysics Data System (ADS)

The main assumptions of the statistical counting (SC) method [D. Zhao et al., J. Chem. Phys. 104, 1672 (1996)] for the calculation of the conformational entropy of a chain modeled on the lattice are presented. The method is discussed in terms of its applicability to different physical systems and the integrity of results. Also, an extension of the SC method for the analysis of the statistics of some Verdier-Stockmayer algorithms in the Metropolis Monte Carlo simulation is proposed. The results of the application of this new method, named here the micromodification probabilities (MMP) method, for the study of the effect of different solvent conditions, different types of geometrical constraints and deforming external forces on the free energy of a polymer chain, are presented. The use of the MMP method for the investigation of a charged polymer in the presence of other charged objects (ions, nanoparticles) is also reported.

Nowicki, W.; Nowicka, G.; Ma?ka, A.

2014-10-01

423

Our objective is to develop a Diffusion Monte Carlo (DMC) algorithm to estimate the exact expectation values of non-differential operators, such as polarizabilities and high-order hyperpolarizabilities, for isolated atoms and molecules: <Phi_0|P_op|Phi_0>. The existing Ground State Distribution DMC (GSD DMC) algorithm which attempts this has a serious bias. On the other hand, the Pure DMC algorithm with

Ivana Bosa; Stuart M. Rothstein

2004-01-01

424

Monte Carlo neutrino oscillations

We demonstrate that the effects of matter upon neutrino propagation may be recast as the scattering of the initial neutrino wave function. Exchanging the differential, Schrodinger equation for an integral equation for the scattering matrix S permits a Monte Carlo method for the computation of S that removes many of the numerical difficulties associated with direct integration techniques.

Kneller, James P. [Department of Physics, North Carolina State University, Raleigh, North Carolina 27695-8202 (United States); School of Physics and Astronomy, University of Minnesota, Minneapolis, Minnesota 55455 (United States); McLaughlin, Gail C. [Department of Physics, North Carolina State University, Raleigh, North Carolina 27695-8202 (United States)

2006-03-01

425

Monte Carlo Neutrino Oscillations

We demonstrate that the effects of matter upon neutrino propagation may be recast as the scattering of the initial neutrino wavefunction. Exchanging the differential, Schrodinger equation for an integral equation for the scattering matrix S permits a Monte Carlo method for the computation of S that removes many of the numerical difficulties associated with direct integration techniques.

James P. Kneller; Gail C. McLaughlin

2005-09-29

426

MCNPX is a Fortran 90 Monte Carlo radiation transport computer code that transports all particles at all energies. It is a superset of MCNP4C3, and has many capabilities beyond MCNP4C3. These capabilities are summarized along with their quality guarantee and code availability. Then the user interface changes from MCNP are described. Finally, the new capabilities of the latest version, MCNPX 2.5.c, are documented. Future plans and references are also provided.

Hendricks, J. S. (John S.)

2003-01-01

427

The monte carlo newton-raphson algorithm

It is shown that the Monte Carlo Newton-Raphson algorithm is a viable alternative to the Monte Carlo EM algorithm for finding maximum likelihood estimates based on incomplete data. Both Monte Carlo procedures require simulations from the conditional distribution of the missing data given the observed data with the aid of methods like Gibbs sampling and rejective sampling. The Newton-Raphson algorithm

Anthony Y. C. Kuk; Yuk W. Cheng

1997-01-01

428

Monte Carlo Integration Lecture 2 The Problem

Monte Carlo Integration, Lecture 2: The Problem. Let a probability measure be given over the Borel sigma-field of a space S ... and h(x) = 0 otherwise. When the problem appears to be intractable ... (see Press et al. (1992) and references therein). For high-dimensional problems, Monte Carlo methods have been
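
The estimator behind such lecture material is plain Monte Carlo integration: approximate an integral by the average of h at uniformly drawn points (a generic sketch, not tied to the lecture's own notation):

```python
import random

def mc_integrate(h, a, b, n=100_000, seed=0):
    """Plain Monte Carlo estimate of the integral of h over [a, b]:
    (b - a) times the average of h at n uniform sample points.
    The error shrinks as 1/sqrt(n), independent of dimension."""
    rng = random.Random(seed)
    total = sum(h(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

print(mc_integrate(lambda x: x * x, 0.0, 1.0))  # close to 1/3
```

The dimension-independent error rate is the reason the lecture's high-dimensional problems favor Monte Carlo over quadrature.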

Liang, Faming

429

We develop a formalism and present an algorithm for optimization of the trial wave-function used in fixed-node diffusion quantum Monte Carlo (DMC) methods. We take advantage of a basic property of the walker configuration distribution generated in a DMC calculation, to (i) project-out a multi-determinant expansion of the fixed-node ground-state wave function and (ii) to define a cost function that

Fernando A. Reboredo; Randolph Q. Hood; Paul R. C. Kent

2008-01-01

430

Monte-Carlo Exploration for Deterministic Planning

Search methods based on Monte-Carlo simulation have recently led to breakthrough performance improvements in difficult game-playing domains such as Go and General Game Playing. Monte-Carlo Random Walk (MRW) planning applies Monte-Carlo ideas to deterministic classical planning. In the forward chaining planner ARVAND, Monte-Carlo random walks are used to explore the local neighborhood of a search state for

Hootan Nakhost; Martin Müller

2009-01-01

431

Monte Carlo analysis of neutron detection with a BaF2 scintillation detector

This work presents the results of investigations aimed at simulating the response of a barium fluoride (BaF2) detector to neutrons and photons. The simulations are performed with the MCNP-PoliMi code, a modification of MCNP-4C. The simulation results are compared to time-of-flight measurements performed with the nuclear materials identification system (NMIS). In particular, the neutron detection capabilities of the BaF2 scintillator

Sara A. Pozzi; John S. Neal; Richard B. Oberer; John T. Mihalczo

2004-01-01

432

NASA Astrophysics Data System (ADS)

The purpose of this study was to evaluate organ doses for individual patients undergoing interventional transcatheter arterial embolization (TAE) for hepatocellular carcinoma (HCC) using measurement-based Monte Carlo simulation and adaptive organ segmentation. Five patients were enrolled in this study after institutional ethical approval and informed consent. Gafchromic XR-RV3 films, chosen over XR-RV2 films for their lower energy dependence, were used to measure entrance surface doses and to reconstruct the nonuniform fluence distribution field used as input data in the Monte Carlo simulation. To calculate organ doses, each patient's three-dimensional dose distribution was incorporated into CT DICOM images with image segmentation using thresholding and k-means clustering. Organ doses for all patients were estimated. Our dose evaluation system not only evaluated entrance surface doses based on measurements, but also evaluated the 3D dose distribution within patients using simulations. When film measurements were unavailable, the peak skin dose (between 0.68 and 0.82 as a fraction of the cumulative dose) could be calculated from the cumulative dose obtained from TAE dose reports. Successful implementation of this dose evaluation system will aid radiologists and technologists in determining the actual dose distributions within patients undergoing TAE.

Tsai, Hui-Yu; Lin, Yung-Chieh; Tyan, Yeu-Sheng

2014-11-01

433

To rapidly derive a result for diffuse reflectance from a multilayered model that is equivalent to that of a Monte-Carlo simulation (MCS), we propose a combination of a layered white MCS and the adding-doubling method. For slabs with various scattering coefficients assuming a certain anisotropy factor and without absorption, we calculate the transition matrices for light flow with respect to the incident and exit angles. From this series of precalculated transition matrices, we can calculate the transition matrices for the multilayered model with the specific anisotropy factor. The relative errors of the results of this method compared to a conventional MCS were less than 1%. We successfully used this method to estimate the chromophore concentration from the reflectance spectrum of a numerical model of skin and in vivo human skin tissue.

Yoshida, Kenichiro; Nishidate, Izumi

2014-01-01

434

NASA Astrophysics Data System (ADS)

Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravity or mean sea level (MSL), contain time-correlated noise which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to also estimate the stochastic parameters besides the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is based on maximum likelihood estimation (MLE). We present an integration method, the Bayesian Markov chain Monte Carlo (MCMC) method, which, by using Markov chains, provides a sample of the posterior distribution of all parameters and, thereby, using Monte Carlo integration, all parameters and their uncertainties are estimated simultaneously. This algorithm automatically optimizes the Markov chain step size and estimates the convergence state by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and apply it also to the MSL time series from the Revised Local Reference data base of the PSMSL. Although the parameter estimates for both methods are fairly equivalent, they suggest that the MCMC method has some advantages over MLE, for example, without further computations it provides the spectral index uncertainty, is computationally stable and detects multimodality.
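
The core of an MCMC sampler like the one described is a random-walk Metropolis update; a toy sketch on a one-parameter Gaussian posterior (the data values and step size are illustrative, not geodetic):

```python
import math
import random

def metropolis(log_post, x0, n_steps=20_000, step=0.5, seed=11):
    """Random-walk Metropolis sampler: propose x' = x + N(0, step) and
    accept with probability min(1, post(x') / post(x)). The visited
    states form a sample of the posterior distribution."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if lpp >= lp or rng.random() < math.exp(lpp - lp):
            x, lp = xp, lpp
        chain.append(x)
    return chain

# toy posterior: mean of Gaussian data with unit noise and a flat prior
data = [1.8, 2.1, 2.4, 1.9, 2.2]
log_post = lambda m: -0.5 * sum((d - m) ** 2 for d in data)
chain = metropolis(log_post, 0.0)
burn = chain[len(chain) // 2 :]          # discard burn-in
print(sum(burn) / len(burn))             # near the sample mean 2.08
```

As in the abstract, the chain delivers parameter uncertainties for free: the spread of `burn` estimates the posterior standard deviation without any extra computation.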

Olivares, G.; Teferle, F. N.

2013-12-01

435

1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO

We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.

T. EVANS; ET AL

2000-08-01

436

NASA Astrophysics Data System (ADS)

A new method has been developed to generate bending angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function so that each geometry generated is likely to be accepted. In actual practice, due to the complexity of this probability density function, a numerical representation of this distribution function would be required. This numerical table can be generated a priori from the distribution function. This method has been tested on a united-atom model of alkanes including propane, 2-methylpropane, and 2,2-dimethylpropane, that are good representatives of both linear and branched molecules. It has been shown from these test cases that reasonable approximations can be made especially for the highly branched molecules to reduce drastically the dimensionality and correspondingly the amount of the tabulated data that is needed to be stored. Despite these approximations, the dependencies between the various geometrical variables can be still well considered, as evident from a nearly perfect acceptance rate achieved. For all cases, the bending angles were shown to be sampled correctly by this method with an acceptance rate of at least 96% for 2,2-dimethylpropane to more than 99% for propane. Since only one trial is required to be generated for each bending angle (instead of thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. The profiling results of our Monte Carlo simulation code show that trial generation, which used to be the most time consuming process, is no longer the time dominating component of the simulation.
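
The tabulated inverse-CDF idea described above can be sketched as follows (the harmonic bending-angle distribution and its constants are illustrative placeholders, not the paper's force field):

```python
import bisect
import math
import random

def build_table(pdf, lo, hi, n=2_000):
    """Tabulate the normalized CDF of an (unnormalized) bending-angle
    distribution on [lo, hi] a priori, so each trial is drawn by a single
    inverse-CDF table lookup instead of thousands of uniform trials."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    cdf, acc = [0.0], 0.0
    for i in range(1, n):  # trapezoid-rule cumulative integral
        acc += 0.5 * (pdf(xs[i]) + pdf(xs[i - 1])) * (xs[i] - xs[i - 1])
        cdf.append(acc)
    return xs, [c / acc for c in cdf]

def draw(xs, cdf, rng):
    """Draw one bending angle: invert the tabulated CDF at a uniform u."""
    i = bisect.bisect_left(cdf, rng.random())
    return xs[min(i, len(xs) - 1)]

# hypothetical bending term: p(theta) ~ sin(theta) * exp(-a*(theta - theta0)^2)
theta0, a = math.radians(114.0), 50.0    # illustrative constants only
xs, cdf = build_table(lambda t: math.sin(t) * math.exp(-a * (t - theta0) ** 2),
                      0.0, math.pi)
```

Because each lookup lands almost exactly on the target distribution, essentially every generated geometry is "accepted", which is the speedup the abstract reports.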

Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin

2014-08-01


438

NASA Astrophysics Data System (ADS)

An improved approach to simulating the radiative heat exchange in a furnace is proposed. This approach is realized in the computer program REFORM, which is based on well-known Monte Carlo ray-tracing algorithms. The proposed method allows one to determine radiative exchange factors for furnaces whose geometries make direct numerical integration difficult to use. The REFORM program was validated, and its results were compared with corresponding existing solutions. It has been established that the proposed approach represents the radiative heat exchange between a steel load and its surroundings more accurately.
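The ray-tracing route to exchange factors can be illustrated with a toy geometry (this example and all its numbers are mine, not from the paper): estimate the view factor between two coaxial unit-square plates one unit apart by firing cosine-weighted (diffuse) rays from random points on the emitter and counting hits on the receiver. Standard view-factor tables give roughly 0.2 for this configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rays = 200_000
h = 1.0   # separation between the two unit-square plates (assumed geometry)

# Random emission points on the emitter plate at z = 0.
x0 = rng.random(n_rays)
y0 = rng.random(n_rays)

# Cosine-weighted (diffuse) directions over the upper hemisphere.
u1, u2 = rng.random(n_rays), rng.random(n_rays)
phi = 2.0 * np.pi * u1
sin_t, cos_t = np.sqrt(u2), np.sqrt(1.0 - u2)
dx, dy, dz = sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t

# Intersect each ray with the receiver plane z = h and count the hits
# that land on the coaxial unit square.
t = h / dz
xh, yh = x0 + t * dx, y0 + t * dy
view_factor = np.mean((xh >= 0) & (xh <= 1) & (yh >= 0) & (yh <= 1))
```

The same hit-counting scheme extends directly to complex furnace geometries where analytic integration fails, which is the regime the abstract targets.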

Matthew, A. D.; Tan, C. K.; Roach, P. A.; Ward, J.; Broughton, J.; Heeley, A.

2014-05-01

439

NASA Astrophysics Data System (ADS)

Pile-oscillation experiments are performed in the MINERVE reactor at CEA Cadarache to improve nuclear data accuracy. In order to precisely calculate the small reactivity variations (<10 pcm) obtained in these experiments, a reference calculation needs to be performed. This calculation may be carried out with the continuous-energy Monte Carlo code TRIPOLI-4® using the eigenvalue-difference method. This "direct" method has shown limitations in evaluating very small reactivity effects, because it requires reaching a very small variance on the reactivity in both states. To address this problem, it was decided to implement the exact perturbation theory in TRIPOLI-4® and, consequently, to calculate a continuous-energy adjoint flux. The Iterated Fission Probability (IFP) method was chosen because it has given excellent results in other Monte Carlo codes. The IFP method uses a forward calculation to compute the adjoint flux; consequently, it relies not on complex code modifications but on the physical definition of the adjoint flux as a phase-space neutron importance. In the first part of this paper, the IFP method implemented in TRIPOLI-4® is described. To illustrate the efficiency of the method, several adjoint fluxes are calculated and compared with their equivalents obtained with the deterministic code APOLLO-2. The new implementation can also calculate the angular adjoint flux. In the second part, a procedure for carrying out an exact perturbation calculation is described. A single-cell benchmark was used to test the accuracy of the method against the "direct" estimation of the perturbation. Once again, the IFP-based method shows good agreement, for a calculation time far shorter than that of the "direct" method. The main advantage of the method is that the relative accuracy of the reactivity variation does not depend on the magnitude of the variation itself, which allows very small reactivity perturbations to be calculated with high precision. Other applications of this perturbation method, such as the calculation of exact kinetic parameters (βeff, Λeff) or sensitivity parameters, are also presented and tested.

Truchet, G.; Leconte, P.; Peneliau, Y.; Santamarina, A.; Malvagi, F.

2014-06-01

440

Monte Carlo Go Has a Way to Go

Monte Carlo Go is a promising method to improve the performance of computer Go programs. This approach determines the next move to play based on many Monte Carlo samples. This paper examines the relative advantages of additional samples and enhancements for Monte Carlo Go. By parallelizing Monte Carlo Go, we could increase sample sizes by two orders of magnitude.
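The sampling idea behind Monte Carlo Go can be sketched on a far simpler game than Go (Nim is used here purely as a stand-in, it is not from the paper): score each legal move by many uniformly random playouts and pick the move with the highest estimated win rate.

```python
import random

def random_playout_wins(heap):
    """Nim: players alternately take 1-3 stones; taking the last stone wins.
    Play the game out with uniformly random moves and return True if the
    player to move at `heap` ends up winning."""
    mover = 0
    while True:
        heap -= random.randint(1, min(3, heap))
        if heap == 0:
            return mover == 0
        mover ^= 1

def best_move(heap, n_playouts=3000):
    """Flat Monte Carlo move selection: score each legal move by the
    fraction of random playouts the mover goes on to win."""
    scores = {}
    for k in range(1, min(3, heap) + 1):
        if heap - k == 0:
            scores[k] = 1.0      # taking the last stone wins immediately
        else:
            wins = sum(not random_playout_wins(heap - k)
                       for _ in range(n_playouts))
            scores[k] = wins / n_playouts
    return max(scores, key=scores.get), scores

random.seed(7)
move, scores = best_move(5)   # from a heap of 5, taking 1 is optimal
```

Because each playout is independent, the playout loop parallelizes trivially, which is exactly the property the paper exploits to increase sample sizes.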

Haruhiro Yoshimoto; Kazuki Yoshizoe; Tomoyuki Kaneko; Akihiro Kishimoto; Kenjiro Taura

2006-01-01

441

Purpose: The quality of tomographic images is directly affected by the system model used in image reconstruction. An accurate system matrix is desirable for high-resolution image reconstruction, but it often incurs high computation cost. In this work the authors present a maximum a posteriori reconstruction algorithm with residual correction to alleviate the tradeoff between model accuracy and computational efficiency in image reconstruction. Methods: Unlike conventional iterative methods that assume the system matrix is accurate, the proposed method reconstructs an image with a simplified system matrix and then removes the reconstruction artifacts through residual correction. Since the time-consuming forward and back projection operations using the accurate system matrix are not required in every iteration, image reconstruction time can be greatly reduced. Results: The authors apply the new algorithm to high-resolution positron emission tomography reconstruction with an on-the-fly Monte Carlo (MC) based positron range model. Computer simulations show that the new method is an order of magnitude faster than the traditional MC-based method, whereas the visual quality and quantitative accuracy of the reconstructed images are much better than those obtained using the simplified system matrix alone. Conclusions: The residual correction method can reconstruct high-resolution images and is computationally efficient. PMID:20229880
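The residual-correction loop can be sketched in miniature with dense linear algebra. This is a toy stand-in under stated assumptions (the matrices below are invented; real PET system matrices are enormous and applied matrix-free, and the paper's algorithm is a MAP reconstruction, not a plain linear solve): reconstruct with a cheap simplified model, but steer the iterates with residuals formed under the accurate model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# "Accurate" system model: identity plus a small dense perturbation,
# a toy stand-in for an expensive (e.g. Monte-Carlo-derived) matrix.
A_accurate = np.eye(n) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)
A_simple = np.eye(n)                 # cheap simplified model

x_true = rng.random(n)
y = A_accurate @ x_true              # simulated measurements

# Residual correction: the cheap model does the heavy lifting, while the
# accurate model is used only to form residuals.
x = np.zeros(n)
for _ in range(30):
    residual = y - A_accurate @ x                  # accurate forward projection
    x = x + np.linalg.solve(A_simple, residual)    # cheap correction step

rel_error = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The iteration converges whenever the simplified model is a good enough preconditioner for the accurate one (spectral radius of I − A_simple⁻¹A_accurate below 1), which mirrors the accuracy/cost tradeoff discussed in the abstract.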

Fu, Lin; Qi, Jinyi

2010-01-01

442

Acta Numerica (1998), pp. 1-49, Cambridge University Press, 1998. Monte Carlo and quasi-Monte Carlo methods

Acta Numerica (1998), pp. 1-49. © Cambridge University Press, 1998. Monte Carlo is one of the most versatile and widely used numerical methods. Its convergence rate, O(N^{-1/2}), is independent of dimension, which shows Monte Carlo to be very robust but also

Li, Tiejun

443

Chapter 2: Monte Carlo Integration. This chapter gives an introduction to Monte Carlo integration useful in computer graphics. Good references on Monte Carlo methods include Kalos & Whitlock [1986] for Monte Carlo applications to neutron transport problems; Lewis & Miller [1984] is a good source
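As a concrete instance of the technique this chapter introduces (the example is mine, not the chapter's): estimate ∫₀¹ √(1 − x²) dx = π/4 by averaging the integrand at uniform random points. The standard error of such an estimate decays as O(N^{-1/2}) regardless of the dimension of the integral.

```python
import math
import random

def mc_integrate(f, n_samples):
    """Crude Monte Carlo estimate of the integral of f over [0, 1]:
    average the integrand at n_samples uniform random points."""
    return sum(f(random.random()) for _ in range(n_samples)) / n_samples

random.seed(0)
# Quarter-circle test integrand: its integral over [0, 1] is pi/4.
pi_estimate = 4.0 * mc_integrate(lambda x: math.sqrt(1.0 - x * x), 200_000)
```

With 200,000 samples the estimate of π is typically within a few thousandths; quadrupling the sample count roughly halves the error, exhibiting the N^{-1/2} rate.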

Stanford University

444

The development of a population PK/PD model, an essential component of model-based drug development, is both time- and labor-intensive. Graphics-processing-unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of a parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and an identical algorithm designed for a single CPU (MCPEMCPU) were developed using MATLAB on a single computer equipped with dual Xeon 6-Core E5690 CPUs and an NVIDIA Tesla C2070 GPU parallel computing card containing 448 stream processors. Two different PK models with rich/sparse sampling design schemes were used to simulate population data in assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing the parameter estimates and model computation times. The speedup factor was used to assess the relative benefit of the parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. The MCPEMGPU consistently achieved shorter computation times than the MCPEMCPU and can offer more than 48-fold speedup using a single GPU card. The novel hybrid GPU-CPU implementation of the parallelized MCPEM algorithm developed in this study holds great promise as the core of the next generation of modeling software for population PK/PD analysis. PMID:24002801

Ng, C M

2013-10-01

445

We describe two Go programs, and , developed by a Monte-Carlo approach that is simpler than Bruegmann's (1993) approach. Our method is based on Abramson (1990). We performed experiments to assess ideas on (1) progressive pruning, (2) the all-moves-as-first heuristic, (3) temperature, (4) simulated annealing, and (5) depth-two tree search within the Monte-Carlo framework. Progressive pruning

Bruno Bouzy; Bernard Helmstetter

2003-01-01

446

The classical-trajectory Monte Carlo method has been used to study the capture of negative kaons by hydrogen and deuterium atoms; subsequently, the elastic scattering, Stark mixing, and Coulomb deexcitation cross sections of Kp and Kd atoms have been determined. The results for kaonic-atom formation confirm the initial conditions that have been applied parametrically in most atomic cascade models. Our results show that Coulomb deexcitation in Kp and Kd atoms with Δn>1 is important in addition to Δn=1. We have shown that the contribution of molecular-structure effects to the cross sections of the collisional processes is larger than the isotopic effects of the targets. We have also compared our results with semiclassical approaches.

Raeisi, G. M. [Department of Physics, Isfahan University of Technology, Isfahan 84156-83111 (Iran, Islamic Republic of); Department of Physics, Shahrekord University, Shahrekord 115 (Iran, Islamic Republic of); Kalantari, S. Z. [Department of Physics, Isfahan University of Technology, Isfahan 84156-83111 (Iran, Islamic Republic of)

2010-10-15

447

Ground-state properties of EtMe3Sb[Pd(dmit)2]2 by many-variable variational Monte Carlo method

NASA Astrophysics Data System (ADS)

The organic Mott insulator EtMe3Sb[Pd(dmit)2]2 is a strongly correlated electron system on a nearly-regular triangular lattice and regarded as a spin liquid material. We investigate its effective low-energy model derived from first principles calculations and band+dimensional downfolding. The ab initio effective Hamiltonian is given in the form of the two-dimensional single-band extended Hubbard model on an anisotropic triangular lattice with short-ranged Coulomb and exchange interactions. Its ground state is calculated by the many-variable variational Monte Carlo method with quantum-number projection and multi-variable optimization. We draw the ground-state phase diagram as a function of scaling parameters for the interactions and the geometrical frustration by extending the ab initio model. We discuss Mott transitions and magnetic properties.
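The variational Monte Carlo machinery used above can be illustrated on a textbook problem (this toy is mine and is unrelated to the dmit model): Metropolis-sample |ψ_α|² for the 1D harmonic oscillator trial wavefunction ψ_α(x) = exp(−αx²) and average the local energy E_L(x) = α + x²(1/2 − 2α²). At α = 1/2 the trial is the exact ground state, so E_L is constant at 1/2 (in units of ħω) and the variance vanishes.

```python
import math
import random

def vmc_energy(alpha, n_steps=20_000, step=1.0):
    """Toy variational Monte Carlo for the 1D harmonic oscillator
    (hbar = m = omega = 1) with trial psi(x) = exp(-alpha * x^2)."""
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + step * (random.random() - 0.5)
        # Metropolis acceptance with the ratio |psi(x_new)|^2 / |psi(x)|^2
        if random.random() < math.exp(-2.0 * alpha * (x_new * x_new - x * x)):
            x = x_new
        # Local energy E_L = -(1/2) psi''/psi + (1/2) x^2 for this trial.
        e_sum += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return e_sum / n_steps

random.seed(3)
e_exact = vmc_energy(0.5)   # alpha = 1/2 is the exact ground state: E = 1/2
e_off = vmc_energy(0.3)     # any other alpha gives a higher variational energy
```

Many-variable VMC codes like the one in the abstract follow the same pattern, but with thousands of variational parameters optimized simultaneously rather than a single α scanned by hand.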

Morita, Satoshi; Kaneko, Ryui; Imada, Masatoshi

2012-02-01

448

NASA Astrophysics Data System (ADS)

We present the multicomponent extension of the semistochastic quantum Monte Carlo (mc-SQMC) method for treating electron-nuclear correlation in the molecular Hamiltonian. All particles in the molecule are treated quantum mechanically and the variational solution is obtained with the SQMC method. The key feature of this approach is that the BO and separation-rotor approximation are not assumed. The application of the SQMC method for multicomponent systems involves many formidable challenges and this talk will focus on strategies to address these challenges including, appropriate coordinate system for the molecular Hamiltonian, separation of the center of mass kinetic energy, construction of the 1-particle basis functions for electrons and nuclei, construction of the multicomponent CI space and evaluation of connected configurations needed during propagation step in the SQMC method. Results from mc-SQMC will be presented for H2, He2, and H2O systems. The H2 system has been extensively studied using various methods, such as QMC and PIMC, making it an ideal system to test and compare the mc-SQMC implementation. The impact of the BO approximation and vibration-rotation coupling will be discussed by comparing mc-SQMC results with reported values for the weakly bound He2.

Ellis, Benjamin; Chakraborty, Arindam; Holmes, Adam; Changlani, Hitesh; Umrigar, Cyrus

2013-03-01

449

There is a compelling safeguards need to nondestructively assay fissile plutonium separately from fissile uranium in spent light-water-reactor fuel. Present methods suffer from a number of limitations and cannot provide accurate and independent safeguards assay information. The only feasible method capable of performing the required assay of spent fuel is the slowing-down-time (SDT) method. The

Naeem Mohamed Abdurrahman

1991-01-01