While these samples are representative of the content of Science.gov,

they are not comprehensive nor are they the most current set.

We encourage you to perform a real-time search of Science.gov

to obtain the most current and comprehensive results.

Last update: August 15, 2014.

1

NASA Astrophysics Data System (ADS)

Comparing different Monte Carlo codes is essential for understanding their limitations, avoiding systematic errors in simulation, and suggesting further improvements to the codes. MCNP4C and EGSnrc, two Monte Carlo codes commonly used in medical physics, were compared and evaluated against electron depth-dose data and experimental results obtained using clinical radiotherapy beams. The different physical models and algorithms used in the codes give significantly different depth-dose curves. The default version of MCNP4C calculates electron depth-dose curves that penetrate too deeply. The MCNP4C results agree better with experiment if the Integrated Tiger Series-style energy-indexing algorithm is used. EGSnrc uses a class II condensed history (CH) scheme for the simulation of electron transport. To conclude the comparison, a timing study was performed. It showed that EGSnrc is generally faster than MCNP4C and that a large number of scoring voxels dramatically slows down the MCNP4C calculation, whereas a large number of geometry voxels in MCNP4C only slightly affects the speed of the calculation.

Jabbari, N.; Hashemi-Malayeri, B.; Farajollahi, A. R.; Kazemnejad, A.; Shafaei, A.; Jabbari, S.

2007-08-01

2

The three-dimensional continuous energy Monte Carlo code MCNP4C was used to develop a versatile and accurate full-core model of the 3MW TRIGA MARK II research reactor at Atomic Energy Research Establishment, Savar, Dhaka, Bangladesh. The model represents in detail all components of the core with literally no physical approximation. All fresh fuel and control elements as well as the vicinity

M. Q. Huda

2006-01-01

3

Evaluation of gamma ray buildup factor data in water with MCNP4C code

The exposure buildup factors for gamma and X-ray photons in water are computed using the MCNP4C code. The results are obtained for the energy range 0.04–6 MeV and penetration depths up to 10 mfp (mean free paths). The results are compared with buildup factor data published during 1960–2010. Both agreements and discrepancies are observed between our results and the data appearing in the literature.

Dariush Sardari; Sassan Saudi; Maryam Tajik

2011-01-01

4

Calculation of the store house worker dose in a lost wax foundry using MCNP-4C.

Lost wax casting is an industrial process that reproduces wax models in metal. The wax model is covered with a siliceous shell of the required thickness; once this shell is built, the set is heated and the wax melted out. Liquid metal is then cast into the shell, replacing the wax. When the metal has cooled, the shell is broken away to recover the metallic piece. Zircon sands are used in this process to prepare the siliceous shell. These sands contain varying concentrations of the natural radionuclides 238U, 232Th and 235U together with their progeny. The zircon sand is distributed in 50 kg bags, 30 bags to a pallet, weighing 1,500 kg in total. The pallets with the bags measure 80 cm x 120 cm x 80 cm and constitute the radiation source in this case. The only exposure pathway for workers in the store house is external radiation: there is no dust because the bags are closed and covered with plastic, the store house is well ventilated so radon cannot accumulate, and the workers do not handle the bags directly, so skin contamination cannot take place. This study considers all situations in which workers are externally irradiated: transporting the pallets from the vehicle to the store house, lifting the pallets onto the shelf, storage of the stock on the shelf, taking the pallets down, and carrying the pallets to the production area. These exposure situations were simulated using MCNP-4C, assuming a homogeneous source composition, a minimum stock of 7 pallets in the store house, and the various distances between pallets and workers during their tasks. The photon flux obtained with MCNP-4C is multiplied by the flux-to-air-kerma conversion factor, the kerma-to-effective-dose conversion factor, and the number of emitted photons; these conversion factors are taken from ICRP 74, Tables 1 and 17, respectively. In this way a function giving the dose rate around the source is obtained. PMID:16604600

Alegría, Natalia; Legarda, Fernando; Herranz, Margarita; Idoeta, Raquel

2005-01-01

5

Comparative Dosimetric Estimates of a 25 KeV Electron Microbeam with Three Monte Carlo Codes.

National Technical Information Service (NTIS)

The calculations presented compare the different performances of the three Monte Carlo codes: PENetration and Energy LOss of Positrons and Electrons code (PENELOPE-1999), Monte Carlo N-Particle transport code system (MCNP-4C), Positive Ion Track Structure...

E. Mainardi; R. J. Donahue; E. A. Blakely

2002-01-01

6

A contribution Monte Carlo method

A Contribution Monte Carlo method is developed and successfully applied to a sample deep-penetration shielding problem. The random walk is simulated in most of its parts as in conventional Monte Carlo methods. The probability density functions (pdf's) are expressed in terms of spherical harmonics and are continuous functions in direction cosine and azimuthal angle variables as well as in position coordinates; the energy is discretized in the multigroup approximation. The transport pdf is an unusual exponential kernel strongly dependent on the incident and emergent directions and energies and on the position of the collision site. The method produces the same results obtained with the deterministic method with a very small standard deviation, with as little as 1,000 Contribution particles in both analog and nonabsorption biasing modes and with only a few minutes CPU time.

Aboughantous, C.H. (Louisiana State Univ., Baton Rouge, LA (United States). Nuclear Science Center)

1994-11-01

7

Shell model Monte Carlo methods

We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.

Koonin, S.E. [California Inst. of Tech., Pasadena, CA (United States). W.K. Kellogg Radiation Lab.; Dean, D.J. [Oak Ridge National Lab., TN (United States)

1996-10-01

8

Semistochastic projector Monte Carlo method.

We introduce a semistochastic implementation of the power method to compute, for very large matrices, the dominant eigenvalue and expectation values involving the corresponding eigenvector. The method is semistochastic in that the matrix multiplication is partially implemented numerically exactly and partially stochastically with respect to expectation values only. Compared to a fully stochastic method, the semistochastic approach significantly reduces the computational time required to obtain the eigenvalue to a specified statistical uncertainty. This is demonstrated by the application of the semistochastic quantum Monte Carlo method to systems with a sign problem: the fermion Hubbard model and the carbon dimer. PMID:23368167

Petruzielo, F R; Holmes, A A; Changlani, Hitesh J; Nightingale, M P; Umrigar, C J

2012-12-01
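The power method underlying this approach can be sketched in a few lines. The toy version below is purely deterministic (the matrix and iteration count are illustrative, not from the paper); the semistochastic scheme of the abstract performs the matrix-vector product partly exactly and partly by stochastic sampling.

```python
def power_method(matrix, n_iter=200):
    """Deterministic power iteration: estimate the dominant eigenvalue.

    The semistochastic method described in the abstract replaces part of
    the matrix-vector product below with stochastic sampling.
    """
    n = len(matrix)
    v = [1.0] * n
    eig = 0.0
    for _ in range(n_iter):
        # Matrix-vector product: w = A v
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        eig = max(abs(x) for x in w)   # infinity-norm eigenvalue estimate
        v = [x / eig for x in w]       # renormalize the iterate
    return eig, v
```

For the symmetric matrix [[2, 1], [1, 2]] the iteration converges to the dominant eigenvalue 3.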

9

Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but its efficiency can be improved 50-fold by angular biasing of the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

Zimmerman, G.B.

1997-06-24

10

Optimizing efficiency of perturbative Monte Carlo method

We introduce error weighting functions into the perturbative Monte Carlo method for use with a hybrid ab initio quantum mechanics/molecular mechanics (QM/MM) potential. The perturbative Monte Carlo approach introduced earlier provides a means to reduce the number of full SCF calculations in simulations using a QM/MM potential by invoking perturbation theory to calculate energy changes due to displacements of

Tom J. Evans; Thanh N. Truong

1998-01-01

11

Markov Chain Monte Carlo Linkage Analysis Methods

As alluded to in the chapter “Linkage Analysis of Qualitative Traits”, neither the Elston–Stewart algorithm nor the Lander–Green approach is amenable to genetic data from large complex pedigrees and a large number of markers. In such cases, Monte Carlo estimation methods provide a viable alternative to the exact solutions. Two types of Monte Carlo methods have been developed for linkage

Robert P. Igo; Yuqun Luo; Shili Lin

12

Monte Carlo method in computer holography

A method based on the Monte Carlo procedure is suggested to simulate the reconstruction of non-Fourier-type computer-generated holograms (CGHs). The cases of amplitude holograms (CGAHs) and phase holograms (CGPHs), or 'kinoform lenses', are investigated. A method to model the finite pixel size of the hologram is suggested. An importance sampling method is proposed to simulate the reconstruction of CGAHs.

Nandor Bokor; Zsolt Papp

1997-01-01

13

Improved Monte Carlo Renormalization Group Method

An extensive program to analyse critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated. 9 refs.

Gupta, R.; Wilson, K.G.; Umrigar, C.

1985-01-01

14

An Introduction to Monte Carlo Methods

ERIC Educational Resources Information Center

Reviews the principles of Monte Carlo calculation and random number generation in an attempt to introduce the direct and the rejection method of sampling techniques as well as the variance-reduction procedures. Indicates that the increasing availability of computers makes it possible for a wider audience to learn about these powerful methods. (CC)

Raeside, D. E.

1974-01-01

15

The effect of the detector characteristics on the performance of an isotopic neutron source device for measuring the thermal neutron absorption cross section (Σ) has been examined by means of Monte Carlo simulations. Three specific experimental arrangements, alternately with BF3 counters and 3He counters of the same sizes, have been modelled using the MCNP-4C code. Results of Monte Carlo calculations show

A. Bolewski Jr.; M. Ciechanowski; A. Dydejczyk; A. Kreft

2008-01-01

16

Benchmark analysis of the TRIGA MARK II research reactor using Monte Carlo techniques

This study deals with the neutronic analysis of the current core configuration of a 3-MW TRIGA MARK II research reactor at Atomic Energy Research Establishment (AERE), Savar, Dhaka, Bangladesh and validation of the results by benchmarking with the experimental, operational and available Final Safety Analysis Report (FSAR) values. The 3-D continuous-energy Monte Carlo code MCNP4C was used to develop a

M. Q. Huda; M. Rahman; M. M. Sarker; S. I. Bhuiyan

2004-01-01

17

Density-matrix quantum Monte Carlo method

NASA Astrophysics Data System (ADS)

We present a quantum Monte Carlo method capable of sampling the full density matrix of a many-particle system at finite temperature. This allows arbitrary reduced density matrix elements and expectation values of complicated nonlocal observables to be evaluated easily. The method resembles full configuration interaction quantum Monte Carlo but works in the space of many-particle operators instead of the space of many-particle wave functions. One simulation provides the density matrix at all temperatures simultaneously, from T = ∞ to T = 0, allowing the temperature dependence of expectation values to be studied. The direct sampling of the density matrix also allows the calculation of some previously inaccessible entanglement measures. We explain the theory underlying the method, describe the algorithm, and introduce an importance-sampling procedure to improve the stochastic efficiency. To demonstrate the potential of our approach, the energy and staggered magnetization of the isotropic antiferromagnetic Heisenberg model on small lattices, the concurrence of one-dimensional spin rings, and the Rényi S2 entanglement entropy of various sublattices of the 6×6 Heisenberg model are calculated. The nature of the sign problem in the method is also investigated.

Blunt, N. S.; Rogers, T. W.; Spencer, J. S.; Foulkes, W. M. C.

2014-06-01

18

Discrete range clustering using Monte Carlo methods

NASA Technical Reports Server (NTRS)

For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.

Chatterji, G. B.; Sridhar, B.

1993-01-01

19

Calculating Pi Using the Monte Carlo Method

NASA Astrophysics Data System (ADS)

During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the Center for Sustainable Energy at Notre Dame University (RET @ cSEND) working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation, a reactor may produce 10²¹ antineutrinos per second with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and how total antineutrino flux could be obtained from such a small sample, I read about a simulation program called Monte Carlo.1 Further investigation led me to the Monte Carlo method page of Wikipedia2 where I saw an example of approximating pi using this simulation. Other examples where this method was applied were typically done with computer simulations2 or purely mathematical.3 It is my belief that this method may be easily related to the students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.

Williamson, Timothy

2013-11-01
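The rice-sprinkling activity in this abstract maps directly onto a few lines of code. This is a minimal sketch of the quarter-circle dart-throwing estimate (the sample count and seed are arbitrary choices, not from the article):

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling uniform points in the unit square and
    counting the fraction that land inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:   # point falls inside the quarter circle
            inside += 1
    # Area ratio is pi/4, so multiply the observed fraction by 4.
    return 4.0 * inside / n_samples
```

With 100,000 samples the estimate typically lands within a few hundredths of pi; the error shrinks like 1/sqrt(N).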

20

Methods for Monte Carlo simulations of biomacromolecules

The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse graining strategies.

Vitalis, Andreas; Pappu, Rohit V.

2010-01-01

21

A Local Superbasin Kinetic Monte Carlo Method

NASA Astrophysics Data System (ADS)

A ubiquitous problem in atomic-scale simulation of materials is the small-barrier problem, in which the free-energy landscape presents ``superbasins'' with low intra-basin energy barriers relative to the inter-basin barriers. Rare-event simulation methods, such as kinetic Monte Carlo (KMC) and accelerated molecular dynamics, are inefficient for such systems because considerable effort is spent simulating short-time, intra-basin motion without evolving the system significantly. We developed an adaptive local-superbasin KMC algorithm (LSKMC) for treating fast, intra-basin motion using a Master-equation / Markov-chain approach and long-time evolution using KMC. Our algorithm is designed to identify local superbasins in an on-the-fly search during conventional KMC, construct the rate matrix, compute the mean exit time and its distribution, obtain the probability to exit to each of the superbasin border (absorbing) states, and integrate superbasin exits with non-superbasin moves. We demonstrate various aspects of the method in several examples, which also highlight the efficiency of the method.

Fichthorn, Kristen; Lin, Yangzheng

2013-03-01
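The mean-exit-time computation at the heart of such superbasin schemes can be illustrated on a two-state basin. The rates below are hypothetical, chosen only to exhibit the fast intra-basin / slow exit separation the abstract describes; the on-the-fly superbasin detection and full rate-matrix machinery are not reproduced here.

```python
def superbasin_mean_exit_times(k_ab, k_ba, k_exit):
    """Mean exit times from a two-state superbasin {A, B} where only B can
    exit, solved exactly from the first-passage equations:
        t_A = 1/k_ab + t_B
        t_B = 1/(k_ba + k_exit) + (k_ba / (k_ba + k_exit)) * t_A
    """
    tau_b = 1.0 / (k_ba + k_exit)     # mean residence time in B
    p_back = k_ba / (k_ba + k_exit)   # probability that B hops back to A
    t_b = (tau_b + p_back / k_ab) / (1.0 - p_back)
    t_a = 1.0 / k_ab + t_b
    return t_a, t_b
```

With fast intra-basin rates (k_ab = 100, k_ba = 80 per unit time) and a slow exit channel (k_exit = 1), the exit time is governed by the slow channel weighted by the equilibrium occupancy of B: about 1.8 time units from B, while a conventional KMC walker would spend most of its steps on the fast back-and-forth hops.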

22

INTRODUCTION TO THE KINETIC MONTE CARLO METHOD

Monte Carlo refers to a broad class of algorithms that solve problems through the use of random numbers. They first emerged in the late 1940s and 1950s as electronic computers came into use [1], and the name means just what it sounds like, whimsically referring to the random nature of the gambling at Monte Carlo, Monaco. The most famous of

Arthur F. Voter

23

This work presents an extensive study on Monte Carlo radiation transport simulation and thermoluminescent (TL) dosimetry for characterising mixed radiation fields (neutrons and photons) occurring in nuclear reactors. The feasibility of these methods is investigated for radiation fields at various locations of the Portuguese Research Reactor (RPI). The performance of the approaches developed in this work is compared with dosimetric techniques already existing at the RPI. The Monte Carlo MCNP-4C code was used for detailed modelling of the reactor core, the fast neutron beam and the thermal column of the RPI. Simulations using these models reproduce the energy and spatial distributions of the neutron field very well (agreement better than 80%). In the case of the photon field, the agreement improves with decreasing intensity of the component related to fission and activation products. ⁷LiF:Mg,Ti, ⁷LiF:Mg,Cu,P and Al₂O₃:Mg,Y TL detectors (TLDs) with low neutron sensitivity are able to determine photon dose and dose profiles with high spatial resolution. On the other hand, natLiF:Mg,Ti TLDs with increased neutron sensitivity show a remarkable loss of sensitivity and a high supralinearity in high-intensity fields, hampering their application at nuclear reactors. PMID:16702246

Fernandes, A C; Gonçalves, I C; Santos, J; Cardoso, J; Santos, L; Ferro Carvalho, A; Marques, J G; Kling, A; Ramalho, A J G; Osvay, M

2006-01-01

24

Cool walking: A new Markov chain Monte Carlo sampling method

Effective relaxation processes for difficult systems like proteins or spin glasses require special simulation techniques that permit barrier crossing to ensure ergodic sampling. Numerous adaptations of the venerable Metropolis Monte Carlo (MMC) algorithm have been proposed to improve its sampling efficiency, including various hybrid Monte Carlo (HMC) schemes, and methods designed specifically for overcoming quasi-ergodicity problems such as Jump Walking

Scott Brown; Teresa Head-gordon

2003-01-01
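As a reference point for the adaptations discussed in this abstract, a bare-bones Metropolis Monte Carlo sampler looks like the sketch below. The one-dimensional Gaussian target, step size and seed are illustrative only; Cool Walking and the other cited schemes layer barrier-crossing moves on top of this basic accept/reject loop.

```python
import math
import random

def metropolis_normal(n_samples, step=1.0, seed=1):
    """Plain Metropolis Monte Carlo targeting the standard normal density
    pi(x) proportional to exp(-x^2 / 2)."""
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.uniform(-step, step)
        # Metropolis criterion: accept with probability min(1, pi(x')/pi(x)).
        if rng.random() < math.exp((x * x - proposal * proposal) / 2.0):
            x = proposal
        samples.append(x)   # the current state is recorded either way
    return samples
```

For a smooth unimodal target like this, the chain equilibrates quickly; the quasi-ergodicity problems the abstract mentions arise when the target has several modes separated by high barriers.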

25

A macro Monte Carlo method for electron beam dose calculations

The macro Monte Carlo (MMC) method is a fast Monte Carlo (MC) algorithm for high energy electron transport in an absorbing medium. Incident electrons are transported in large-scale macroscopic steps through the absorber. Electron parameters after each step are calculated from probability distributions. Transport of secondary electrons and bremsstrahlung photons is taken into account. For electron beam dose calculations, the

H. Neuenschwander; E. J. Born

1992-01-01

26

A Particle Population Control Method for Dynamic Monte Carlo

NASA Astrophysics Data System (ADS)

A general particle population control method has been derived from splitting and Russian roulette for dynamic Monte Carlo particle transport. A well-known particle population control method, known as the particle population comb, has been shown to be a special case of this general method. This general method has been incorporated in Los Alamos National Laboratory's Monte Carlo Application Toolkit (MCATK), and examples of its use are shown for both super-critical and sub-critical systems.

Sweezy, Jeremy; Nolen, Steve; Adams, Terry; Zukaitis, Anthony

2014-06-01
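The splitting and Russian roulette primitives that this general method unifies can be sketched as follows. The weight-window bounds are illustrative, and the comb and MCATK specifics are not reproduced; the key property is that total particle weight is preserved exactly by splitting and on average by roulette, keeping the estimator unbiased.

```python
import random

def population_control(weights, w_low=0.5, w_high=2.0, rng=None):
    """Classic particle population control: split heavy particles and play
    Russian roulette with light ones."""
    rng = rng or random.Random(2)
    out = []
    for w in weights:
        if w > w_high:
            n = int(w / w_high) + 1          # split into n lighter copies
            out.extend([w / n] * n)          # total weight conserved exactly
        elif w < w_low:
            if rng.random() < w / w_low:     # survive with probability w/w_low
                out.append(w_low)            # survivor carries weight w_low
            # otherwise the particle is killed; weight conserved on average
        else:
            out.append(w)
    return out
```

Splitting a weight-5 particle with w_high = 2 yields three copies of weight 5/3, while rouletting many weight-0.1 particles leaves about a fifth of them alive at weight 0.5.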

27

Path-Integral Monte Carlo Methods for Ultrasmall Device Modeling

Monte Carlo methods based on the Feynman path-integral (FPI) formulation of quantum mechanics are developed for modeling ultrasmall device structures. A brief introduction to pertinent aspects of the FPI formalism is given. A practical

Leonard Franklin Register II

1990-01-01

28

Vectorized Monte Carlo Methods for Reactor Lattice Analysis.

National Technical Information Service (NTIS)

This report details some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer. While the principal applicatio...

F. B. Brown

1982-01-01

29

Monte Carlo methods and applications in nuclear physics

Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon-interaction, charge and magnetic form factors, the coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.

Carlson, J.

1990-01-01

30

Monte Carlo Methods in Statistical Mechanics: Foundations and New Algorithms

Introduction: The goal of these lectures is to give an introduction to current research on Monte Carlo methods in statistical mechanics and quantum field theory, with an emphasis on: 1) the conceptual foundations of the method, including the possible dangers and misuses, and the correct use of statistical error analysis; and 2) new Monte Carlo algorithms for problems in critical phenomena and quantum field theory, aimed

Alan D. Sokal

1996-01-01

31

Perturbation Monte Carlo methods for tissue structure alterations

This paper describes an extension of the perturbation Monte Carlo method to model light transport when the phase function is arbitrarily perturbed. Current perturbation Monte Carlo methods allow perturbation of both the scattering and absorption coefficients, however, the phase function can not be varied. The more complex method we develop and test here is not limited in this way. We derive a rigorous perturbation Monte Carlo extension that can be applied to a large family of important biomedical light transport problems and demonstrate its greater computational efficiency compared with using conventional Monte Carlo simulations to produce forward transport problem solutions. The gains of the perturbation method occur because only a single baseline Monte Carlo simulation is needed to obtain forward solutions to other closely related problems whose input is described by perturbing one or more parameters from the input of the baseline problem. The new perturbation Monte Carlo methods are tested using tissue light scattering parameters relevant to epithelia where many tumors originate. The tissue model has parameters for the number density and average size of three classes of scatterers; whole nuclei, organelles such as lysosomes and mitochondria, and small particles such as ribosomes or large protein complexes. When these parameters or the wavelength is varied the scattering coefficient and the phase function vary. Perturbation calculations give accurate results over variations of ±15–25% of the scattering parameters.

Nguyen, Jennifer; Hayakawa, Carole K.; Mourant, Judith R.; Spanier, Jerome

2013-01-01

33

Successful combination of the stochastic linearization and Monte Carlo methods

NASA Technical Reports Server (NTRS)

A combination of stochastic linearization and Monte Carlo techniques is presented for the first time in the literature. A system with separable nonlinear damping and nonlinear restoring force is considered. The proposed combination of the energy-wise linearization with the Monte Carlo method yields an error under 5 percent, an error reduction by a factor of 4.6 relative to the conventional stochastic linearization.

Elishakoff, I.; Colombi, P.

1993-01-01

34

Improving the rejection sampling method in quasi-Monte Carlo methods

The rejection sampling method is one of the most popular methods used in Monte Carlo methods. In this paper, we investigate and improve the performance of using a deterministic version of rejection method in quasi-Monte Carlo methods. It turns out that the “quality” of the point set generated by deterministic rejection method is closely related to the problem of quasi-Monte

Xiaoqun Wang

2000-01-01
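For readers unfamiliar with the baseline being improved, ordinary pseudo-random rejection sampling looks like the sketch below. The target density f(x) = 2x on [0, 1] is an arbitrary illustration; the quasi-Monte Carlo variant studied in the abstract replaces the pseudo-random stream with a deterministic low-discrepancy sequence.

```python
import random

def rejection_sample(n, seed=3):
    """Sample from f(x) = 2x on [0, 1] by rejection from a uniform proposal.

    The envelope constant M = 2 bounds f on [0, 1], so the acceptance test
    reduces to accepting x with probability f(x) / M = x."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < n:
        x = rng.random()            # draw a candidate from the proposal
        if rng.random() < x:        # accept with probability f(x)/M
            samples.append(x)
    return samples
```

Half of all candidates are accepted on average (the area under f/M), and the sample mean converges to E[x] = 2/3 for this target.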

35

The Kinetic Monte Carlo method: Foundation, implementation, and application

The Kinetic Monte Carlo method provides a simple yet powerful and flexible tool for exercising the concerted action of fundamental, stochastic, physical mechanisms to create a model of the phenomena that they produce. This manuscript contains an overview of the theory behind the method, some simple examples to illustrate its implementation, and a technologically relevant application of the method to

Corbett C. Battaile

2008-01-01
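The core step of the Kinetic Monte Carlo method surveyed above is the residence-time (n-fold way / Gillespie) update: choose an event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time. A minimal sketch, with arbitrary illustrative rates:

```python
import math
import random

def kmc_step(rates, rng):
    """One kinetic Monte Carlo step: select an event with probability
    proportional to its rate, and draw the exponential time increment."""
    total = sum(rates)
    r = rng.random() * total
    cumulative = 0.0
    chosen = len(rates) - 1
    for i, rate in enumerate(rates):
        cumulative += rate
        if r < cumulative:
            chosen = i
            break
    # Waiting time between events is exponential with mean 1/total.
    dt = -math.log(1.0 - rng.random()) / total
    return chosen, dt
```

With rates [1, 3], the second event fires about 75% of the time and the mean time step is 1/4.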

36

Optimization of Kinoform Lenses with the Monte Carlo Method

For the optimization of non-Fourier-type computer-generated phase holograms (kinoform lenses), a method based on the Monte Carlo procedure is suggested. This method can be regarded as analogous to the iterative Fourier transform algorithm method that is widely used for the optimization of Fourier-type computer-generated phase holograms (kinoforms).

Nandor Bokor; Zsolt Papp

1998-01-01

37

A Multivariate Time Series Method for Monte Carlo Reactor Analysis

A robust multivariate time series method has been established for the Monte Carlo calculation of neutron multiplication problems. The method is termed Coarse Mesh Projection Method (CMPM) and can be implemented using the coarse statistical bins for acquisition of nuclear fission source data. A novel aspect of CMPM is the combination of the general technical principle of projection pursuit in the signal processing discipline and the neutron multiplication eigenvalue problem in the nuclear engineering discipline. CMPM enables reactor physicists to accurately evaluate major eigenvalue separations of nuclear reactors with continuous energy Monte Carlo calculation. CMPM was incorporated in the MCNP Monte Carlo particle transport code of Los Alamos National Laboratory. The great advantage of CMPM over the traditional Fission Matrix method is demonstrated for the three space-dimensional modeling of the initial core of a pressurized water reactor.

Taro Ueki

2008-08-14

38

Monte Carlo Methods for Neutrino Transport in Core Collapse Supernovae

NASA Astrophysics Data System (ADS)

Core-collapse supernovae are among the most powerful events in Nature. Despite decades of effort, the details of the explosion mechanism remain uncertain. Recent studies indicate that the neutrino-driven explosion mechanism is a fundamentally three-dimensional phenomenon, implying that it is necessary to model such an event in three dimensions using large parallel supercomputers. Monte Carlo methods for radiation transport have been known for their simplicity and ease of parallel implementation. In this talk, I will present results of our explorations of Monte Carlo methods for neutrino transport in core-collapse supernovae.

Abdikamalov, Ernazar; Burrows, Adam; Loeffler, Frank; Ott, Christian D.; Schnetter, E.; Diener, Peter

2011-04-01

39

A new method to assess Monte Carlo convergence

The central limit theorem can be applied to a Monte Carlo solution if the following two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these are satisfied, a confidence interval based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the type of Monte Carlo tally being used. The Monte Carlo practitioner has only a limited number of marginally quantifiable methods that use sampled values to assess the fulfillment of the second requirement; e.g., statistical error reduction proportional to 1/√N with error magnitude guidelines. No consideration is given to what has not yet been sampled. A new method is presented here to assess the convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores, f(x), where the random variable x is the score from one particle history and ∫f(x) dx = 1 over the whole real line. Since f(x) is seldom known explicitly, Monte Carlo particle random walks sample f(x) implicitly. Unless there is a largest possible history score, the empirical f(x) must eventually decrease more steeply than 1/x³ for the second moment (∫x² f(x) dx) to exist.

Forster, R.A.; Booth, T.E.; Pederson, S.P.

1993-05-01
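The 1/√N error reduction the abstract refers to follows directly from the central limit theorem. A minimal illustration (not the authors' method; the toy tally and sample size are arbitrary choices):

```python
import math
import random

def mc_mean_with_ci(sample, n, z=1.96):
    """Estimate E[X] by Monte Carlo with a CLT-based ~95% confidence interval.

    sample: function returning one independent history score x.
    The half-width shrinks like 1/sqrt(N), as the abstract notes.
    """
    scores = [sample() for _ in range(n)]
    mean = sum(scores) / n
    var = sum((x - mean) ** 2 for x in scores) / (n - 1)  # sample variance
    half_width = z * math.sqrt(var / n)
    return mean, half_width

random.seed(0)
# Toy "tally": exponential history scores have finite mean and variance,
# so the CLT applies and the interval is trustworthy for large N.
mean, hw = mc_mean_with_ci(lambda: random.expovariate(1.0), 100_000)
```

For a tally with finite variance the half-width shrinks by √10 for every tenfold increase in N; the abstract's point is that such an interval says nothing about rare high scores that have not yet been sampled.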

40

The alias method: A fast, efficient Monte Carlo sampling technique

The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equally probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 2 figs., 1 tab.

Rathkopf, J.A.; Edwards, A.L. (Lawrence Livermore National Lab., CA (USA)); Smidt, R.K. (California Polytechnic State Univ., San Luis Obispo, CA (USA))

1990-11-16
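For readers unfamiliar with the technique, the discrete case the abstract starts from can be sketched with Vose's variant of the alias method (a standard textbook formulation, not the authors' code; the example distribution is arbitrary):

```python
import random

def build_alias(probs):
    """Vose's alias tables for a discrete distribution (O(n) setup)."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s] = scaled[s]
        alias[s] = l
        scaled[l] = (scaled[l] + scaled[s]) - 1.0  # l donates its excess to s
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:  # leftovers equal 1 up to rounding
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias, rng=random):
    """O(1) per sample: pick a column uniformly, then flip a biased coin."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]

random.seed(1)
prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])
counts = [0] * 4
for _ in range(100_000):
    counts[alias_sample(prob, alias)] += 1
```

Setup is O(n); each sample costs one uniform index and one biased coin flip, which is what gives the method table-lookup accuracy at equal-probable-bin speed.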

41

Monte Carlo simulation of the Greek Research Reactor neutron irradiation facilities

NASA Astrophysics Data System (ADS)

A Monte Carlo simulation of the Greek Research Reactor was carried out using MCNP-4C2 code and continuous energy cross-section data from ENDF/B-VI library. A detailed model of the reactor core was employed including standard and control fuel assemblies, reflectors and irradiation devices. The model predicted neutron flux distributions within the core in good agreement with calculations performed using the deterministic code CITATION and measurements using activation foils. The model is used for the prediction of the neutron field characteristics at the reactor irradiation devices and enables the design and evaluation of experiments involving material irradiations.

Stamatelatos, I. E.; Varvayanni, M.; Tzika, F.; Ale, A. B. F.; Catsaros, N.

2007-10-01

42

Calculating coherent pair production with Monte Carlo methods

We discuss calculations of the coherent electromagnetic pair production in ultra-relativistic hadron collisions. This type of production, in lowest order, is obtained from three diagrams which contain two virtual photons. We discuss simple Monte Carlo methods for evaluating these classes of diagrams without recourse to involved algebraic reduction schemes. 19 refs., 11 figs.

Bottcher, C.; Strayer, M.R.

1989-01-01

43

On Monte Carlo Methods and Applications in Geoscience

Monte Carlo methods are designed to study various deterministic problems using probabilistic approaches, and with computer simulations to explore much wider possibilities for the different algorithms. Pseudo-Random Number Generators (PRNGs) are based on linear congruences of some large prime numbers, while Quasi-Random Number Generators (QRNGs) provide low discrepancy sequences, both of which give uniformly distributed numbers in (0,1). Chaotic

Zhan Zhang; J. Blais

2009-01-01
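The two generator families mentioned can be illustrated in a few lines (textbook constructions, not the paper's specific generators; the LCG constants are a common Numerical-Recipes-style choice):

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Minimal linear congruential PRNG yielding uniforms in (0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

def van_der_corput(i, base=2):
    """i-th point of the base-b van der Corput low-discrepancy sequence."""
    q, denom = 0.0, 1.0
    while i > 0:
        i, r = divmod(i, base)
        denom *= base
        q += r / denom
    return q

g = lcg(42)
pseudo = [next(g) for _ in range(5)]
quasi = [van_der_corput(i) for i in range(1, 9)]
# quasi fills (0,1) evenly: 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625
```

The LCG stream looks random but has period at most m, while the van der Corput points fill (0,1) with low discrepancy, successively bisecting the largest remaining gaps.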

44

Kinetic Monte Carlo method for dislocation glide in silicon

A kinetic Monte Carlo (KMC) approach to the mesoscale simulation of dislocation glide via the kink mechanism is developed. In this paper we present the details of the KMC methodology, highlighting three features: (1) inclusion of dislocation dissociation; (2) efficient method of sampling the double-kink nucleation process; and (3) exact calculation of dislocation segment interactions.

Vasily V. Bulatov

1999-01-01
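The core of any KMC scheme of this kind is rate-proportional event selection with an exponentially distributed waiting time. A generic residence-time (BKL/Gillespie-type) step is sketched below, with hypothetical rates standing in for kink nucleation and migration; the paper's actual rate catalog is not reproduced:

```python
import math
import random

def kmc_step(rates, rng=random):
    """One kinetic Monte Carlo (residence-time) step.

    Picks an event with probability proportional to its rate and advances
    the clock by an exponentially distributed waiting time.
    """
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(rng.random()) / total  # exponential waiting time
    return i, dt

random.seed(5)
# Hypothetical rates: slow double-kink nucleation vs. fast left/right
# kink migration (illustrative numbers only).
rates = [0.1, 5.0, 5.0]
events = [kmc_step(rates)[0] for _ in range(10_000)]
```

Because every step fires exactly one event, no time is wasted rejecting moves, which is what makes KMC efficient when rates span many orders of magnitude.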

45

A separable shadow Hamiltonian hybrid Monte Carlo method.

Hybrid Monte Carlo (HMC) is a rigorous sampling method that uses molecular dynamics (MD) as a global Monte Carlo move. The acceptance rate of HMC decays exponentially with system size. The shadow hybrid Monte Carlo (SHMC) was previously introduced to reduce this performance degradation by sampling instead from the shadow Hamiltonian defined for MD when using a symplectic integrator. SHMC's performance is limited by the need to generate momenta for the MD step from a nonseparable shadow Hamiltonian. We introduce the separable shadow Hamiltonian hybrid Monte Carlo (S2HMC) method based on a formulation of the leapfrog/Verlet integrator that corresponds to a separable shadow Hamiltonian, which allows efficient generation of momenta. S2HMC gives the acceptance rate of a fourth order integrator at the cost of a second-order integrator. Through numerical experiments we show that S2HMC consistently gives a speedup greater than two over HMC for systems with more than 4000 atoms for the same variance. By comparison, SHMC gave a maximum speedup of only 1.6 over HMC. S2HMC has the additional advantage of not requiring any user parameters beyond those of HMC. S2HMC is available in the program PROTOMOL 2.1. A Python version, adequate for didactic purposes, is also in MDL (http://mdlab.sourceforge.net/s2hmc). PMID:19894997

Sweet, Christopher R; Hampton, Scott S; Skeel, Robert D; Izaguirre, Jesús A

2009-11-01
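For context, the baseline HMC move that S2HMC refines looks as follows. This sketch uses a 1-D standard normal target and standard leapfrog/Verlet integration; it does not implement the shadow-Hamiltonian momentum generation that is the paper's contribution:

```python
import math
import random

def hmc_step(x, u, grad_u, eps=0.2, steps=10, rng=random):
    """One standard hybrid Monte Carlo update for a target density exp(-U(x))."""
    p = rng.gauss(0.0, 1.0)                 # resample momentum
    x_new, p_new = x, p
    p_new -= 0.5 * eps * grad_u(x_new)      # leapfrog/Verlet trajectory
    for _ in range(steps - 1):
        x_new += eps * p_new
        p_new -= eps * grad_u(x_new)
    x_new += eps * p_new
    p_new -= 0.5 * eps * grad_u(x_new)
    # Metropolis test on the total energy H = U(x) + p^2/2
    dh = (u(x_new) + 0.5 * p_new**2) - (u(x) + 0.5 * p**2)
    if dh <= 0.0 or rng.random() < math.exp(-dh):
        return x_new
    return x

random.seed(2)
u = lambda x: 0.5 * x * x                   # standard normal target
grad_u = lambda x: x
x, chain = 0.0, []
for _ in range(20_000):
    x = hmc_step(x, u, grad_u)
    chain.append(x)
```

Because leapfrog only conserves H approximately, the acceptance rate decays with system size; generating momenta from a separable shadow Hamiltonian, as in S2HMC, is what recovers fourth-order acceptance at second-order cost.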

46

A separable shadow Hamiltonian hybrid Monte Carlo method

NASA Astrophysics Data System (ADS)

Hybrid Monte Carlo (HMC) is a rigorous sampling method that uses molecular dynamics (MD) as a global Monte Carlo move. The acceptance rate of HMC decays exponentially with system size. The shadow hybrid Monte Carlo (SHMC) was previously introduced to reduce this performance degradation by sampling instead from the shadow Hamiltonian defined for MD when using a symplectic integrator. SHMC's performance is limited by the need to generate momenta for the MD step from a nonseparable shadow Hamiltonian. We introduce the separable shadow Hamiltonian hybrid Monte Carlo (S2HMC) method based on a formulation of the leapfrog/Verlet integrator that corresponds to a separable shadow Hamiltonian, which allows efficient generation of momenta. S2HMC gives the acceptance rate of a fourth order integrator at the cost of a second-order integrator. Through numerical experiments we show that S2HMC consistently gives a speedup greater than two over HMC for systems with more than 4000 atoms for the same variance. By comparison, SHMC gave a maximum speedup of only 1.6 over HMC. S2HMC has the additional advantage of not requiring any user parameters beyond those of HMC. S2HMC is available in the program PROTOMOL 2.1. A Python version, adequate for didactic purposes, is also in MDL (http://mdlab.sourceforge.net/s2hmc).

Sweet, Christopher R.; Hampton, Scott S.; Skeel, Robert D.; Izaguirre, Jesús A.

2009-11-01

47

A multilayer Monte Carlo method with free phase function choice

NASA Astrophysics Data System (ADS)

This paper presents an adaptation of the widely accepted Monte Carlo method for Multi-layered media (MCML). Its original Henyey-Greenstein phase function is an interesting approach for describing how light scattering occurs inside biological tissues. It has the important advantage of generating deflection angles in an efficient (and therefore computationally fast) manner. However, in order to allow the fast generation of the phase function, the MCML code generates a distribution for the cosine of the deflection angle instead of a distribution for the deflection angle itself, causing a bias in the phase function. Moreover, other, more elaborate phase functions are not available in the MCML code. To overcome these limitations of MCML, it was adapted to allow the use of any discretized phase function. An additional tool allows generating a numerical approximation for the phase function for every layer. This could be a discretized version of (1) the Henyey-Greenstein phase function, (2) a modified Henyey-Greenstein phase function or (3) a phase function generated from the Mie theory. These discretized phase functions are then stored in a look-up table, which can be used by the adapted Monte Carlo code. The Monte Carlo code with flexible phase function choice (fpf-MC) was compared and validated with the original MCML code. The novelty of the developed program is the generation of a user-friendly algorithm, which allows several types of phase functions to be generated and applied in a Monte Carlo method, without compromising the computational performance.

Watté, R.; Aernouts, B.; Saeys, W.

2012-05-01

48

A new method to assess Monte Carlo convergence

The central limit theorem can be applied to a Monte Carlo solution if the following two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these are satisfied, a confidence interval based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the type of Monte Carlo tally being used. The Monte Carlo practitioner has only a limited number of marginally quantifiable methods that use sampled values to assess the fulfillment of the second requirement; e.g., statistical error reduction proportional to 1/√N with error magnitude guidelines. No consideration is given to what has not yet been sampled. A new method is presented here to assess the convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores, f(x), where the random variable x is the score from one particle history and ∫ f(x) dx = 1, with the integral taken over (−∞, ∞).

Forster, R.A.; Booth, T.E.; Pederson, S.P.

1993-01-01

49

Adjoint Monte Carlo methods for radiotherapy treatment planning

For the past two decades, clinical procedures using highly collimated radiation beams, especially photons, have been used routinely. The main idea in radiation cancer therapy has been to maximize the dose in each point of the tumor without affecting the surrounding healthy tissue and especially the vital organs like the spine and the liver, using individually nonlethal beams that intersect at the tumor. Currently, the selection of the best set of beams (or fields) for a particular patient is determined by an iterative procedure that includes in each step a three-dimensional dose calculation for each beam configuration. The geometry is defined on the information obtained from the patient's computed tomography or magnetic resonance imaging images. Current clinical dose calculation codes generally rely on semiempirical methods that are fast and work well for geometrically simple problems but are less accurate for practical, geometrically complex problems. The best-known method that can cope with that kind of physical and geometric complexity is the Monte Carlo method. However, to solve dose calculation problems with reasonable statistical errors in individual voxels, the needed computation time is excessively large. As a result, Monte Carlo codes are not routinely used for clinical treatment planning. In this paper, we sketch a new approach for the three-dimensional dose computations designed for radiotherapy treatment planning based on the adjoint Monte Carlo method. The proposed approach is more accurate than empirical techniques and has the potential to be faster than current methods.

Difilippo, F.C.; Goldstein, M.; Worley, B.A.; Ryman, J.C. [Oak Ridge National Laboratory, TN (United States)

1996-12-31

50

Uncertainties in external dosimetry: analytical vs. Monte Carlo method.

Over the years, the International Commission on Radiological Protection (ICRP) and other organisations have formulated recommendations regarding uncertainty in occupational dosimetry. The most practical and widely accepted recommendations are the trumpet curves. To check whether routine dosemeters comply with them, a Technical Report on uncertainties issued by the International Electrotechnical Commission (IEC) can be used. In this report, the analytical method is applied to assess the uncertainty of a dosemeter fulfilling an IEC standard. On the other hand, the Monte Carlo method can be used to assess the uncertainty. In this work, a direct comparison of the analytical and the Monte Carlo methods is performed using the same input data. It turns out that the analytical method generally overestimates the uncertainty by about 10-30 %. Therefore, the results often do not comply with the recommendations of the ICRP regarding uncertainty. The results of the more realistic uncertainty evaluation using the Monte Carlo method usually comply with the recommendations of the ICRP. This is confirmed by results seen in regular tests in Germany. PMID:19942627

Behrens, R

2010-03-01

51

This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 (125)I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions together with a published (125)I spectrum were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water material in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGy h(-1) U(-1) (±1.73%) and 0.965 cGy h(-1) U(-1) (±1.68%), respectively. Overall, the MCNP5-derived radial dose and 2D anisotropy function results were generally closer to the measured data (within ±4%) than MCNP4c and the published data for the PTRAN code (Version 7.43), while the opposite was seen for the dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model. PMID:22894389

Mosleh-Shirazi, M A; Hadad, K; Faghihi, R; Baradaran-Ghahfarokhi, M; Naghshnezhad, Z; Meigooni, A S

2012-08-01

52

Simulation of quantum systems by the tomography Monte Carlo method

A new method of statistical simulation of quantum systems is presented which is based on the generation of data by the Monte Carlo method and their purposeful tomography with energy minimisation. The numerical solution of the problem is based on the optimisation of a target functional providing a compromise between the maximisation of the statistical likelihood function and the energy minimisation. The method does not involve complicated and ill-posed multidimensional computational procedures and can be used to calculate the wave functions and energies of the ground and excited stationary states of complex quantum systems. Applications of the method are illustrated. (Fifth Seminar in Memory of D.N. Klyshko)

Bogdanov, Yu I [Institute of Physics and Technology, Russian Academy of Sciences, Moscow (Russian Federation)

2007-12-31

53

Monte Carlo simulations of fermion systems: the determinant method

Described are the details for performing Monte Carlo simulations on systems of fermions at finite temperatures by use of a technique called the Determinant Method. This method is based on a functional integral formulation of the fermion problem (Blankenbecler et al., Phys. Rev. D 24, 2278 (1981)) in which the quartic fermion-fermion interactions that exist for certain models are transformed into bilinear ones by the introduction (J. Hirsch, Phys. Rev. B 28, 4059 (1983)) of Ising-like variables and an additional finite dimension. It is on the transformed problem that the Monte Carlo simulations are performed. A brief summary of research on two such model problems, the spinless fermion lattice gas and the Anderson impurity problem, is also given.

Gubernatis, J.E.

1985-01-01

54

Parallel Monte Carlo Synthetic Acceleration methods for discrete transport problems

NASA Astrophysics Data System (ADS)

This work researches and develops Monte Carlo Synthetic Acceleration (MCSA) methods as a new class of solution techniques for discrete neutron transport and fluid flow problems. Monte Carlo Synthetic Acceleration methods use a traditional Monte Carlo process to approximate the solution to the discrete problem as a means of accelerating traditional fixed-point methods. To apply these methods to neutronics and fluid flow and determine the feasibility of these methods on modern hardware, three complementary research and development exercises are performed. First, solutions to the SPN discretization of the linear Boltzmann neutron transport equation are obtained using MCSA with a difficult criticality calculation for a light water reactor fuel assembly used as the driving problem. To enable MCSA as a solution technique a group of modern preconditioning strategies are researched. MCSA when compared to conventional Krylov methods demonstrated improved iterative performance over GMRES by converging in fewer iterations when using the same preconditioning. Second, solutions to the compressible Navier-Stokes equations were obtained by developing the Forward-Automated Newton-MCSA (FANM) method for nonlinear systems based on Newton's method. Three difficult fluid benchmark problems in both convective and driven flow regimes were used to drive the research and development of the method. For 8 out of 12 benchmark cases, it was found that FANM had better iterative performance than the Newton-Krylov method by converging the nonlinear residual in fewer linear solver iterations with the same preconditioning. Third, a new domain decomposed algorithm to parallelize MCSA aimed at leveraging leadership-class computing facilities was developed by utilizing parallel strategies from the radiation transport community. The new algorithm utilizes the Multiple-Set Overlapping-Domain strategy in an attempt to reduce parallel overhead and add a natural element of replication to the algorithm. 
It was found that for the current implementation of MCSA, both weak and strong scaling improved on that observed for production implementations of Krylov methods.

Slattery, Stuart R.

55

Monte Carlo Method and Transport in Plant Canopies Equation

Plant canopy reflectance is calculated using the governing equation for photon transport. The integral equation of transfer is solved by the Monte Carlo method. The main emphasis is on statistical estimation and simulation of the Markov chain. The leaf dimensions are taken into account in obtaining the hot-spot effect of the canopy. Finally, numerical results for the transport equation are obtained.

Victor S. Antyufeev; Alexander L. Marshak

56

Monte Carlo method for spin models with long-range interactions

We introduce a Monte Carlo method for the simulation of spin models with ferromagnetic long-range interactions in which the amount of time per spin-flip operation is independent of the system size, in spite of the fact that the interactions between each spin and all other spins are taken into account. We work out two algorithms for the q-state Potts model

Erik Luijten; Henk W. J. Blote

57

Monte Carlo Methods and Applications for the Nuclear Shell Model

The shell-model Monte Carlo (SMMC) technique transforms the traditional nuclear shell-model problem into a path-integral over auxiliary fields. We describe below the method and its applications to four physics issues: calculations of sd-pf-shell nuclei, a discussion of electron-capture rates in pf-shell nuclei, exploration of pairing correlations in unstable nuclei, and level densities in rare earth systems.

Dean, D.J.; White, J.A.

1998-08-10

58

Kinetic Monte Carlo method for dislocation glide in silicon

A kinetic Monte Carlo (KMC) approach to the mesoscale simulation of dislocation glide via the kink mechanism is developed. In this paper we present the details of the KMC methodology, highlighting three features: (1) inclusion of dislocation dissociation; (2) efficient method of sampling the double-kink nucleation process; and (3) exact calculation of dislocation segment interactions.

Wei Cai; Vasily V. Bulatov; Sidney Yip

1999-01-01

59

Distributional monte carlo methods for the boltzmann equation

NASA Astrophysics Data System (ADS)

Stochastic particle methods (SPMs) for the Boltzmann equation, such as the Direct Simulation Monte Carlo (DSMC) technique, have gained popularity for the prediction of flows in which the assumptions behind the continuum equations of fluid mechanics break down; however, there are still a number of issues that make SPMs computationally challenging for practical use. In traditional SPMs, simulated particles may possess only a single velocity vector, even though they may represent an extremely large collection of actual particles. This limits the method to converge only in law to the Boltzmann solution. This document details the development of new SPMs that allow the velocity of each simulated particle to be distributed. This approach has been termed Distributional Monte Carlo (DMC). A technique is described which applies kernel density estimation to Nanbu's DSMC algorithm. It is then proven that the method converges not just in law, but also in solution for L∞(R³) solutions of the space homogeneous Boltzmann equation. This provides for direct evaluation of the velocity density function. The derivation of a general Distributional Monte Carlo method is given which treats collision interactions between simulated particles as a relaxation problem. The framework is proven to converge in law to the solution of the space homogeneous Boltzmann equation, as well as in solution for L∞(R³) solutions. An approach based on the BGK simplification is presented which computes collision outcomes deterministically. Each technique is applied to the well-studied Bobylev-Krook-Wu solution as a numerical test case. Accuracy and variance of the solutions are examined as functions of various simulation parameters. Significantly improved accuracy and reduced variance are observed in the normalized moments for the Distributional Monte Carlo technique employing discrete BGK collision modeling.

Schrock, Christopher R.

60

Comparison of deterministic and Monte Carlo methods in shielding design.

In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or to unknown geometrical conditions, and can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shields have been defined, allowing comparison of the capability of both Monte Carlo and deterministic methods in day-by-day shielding calculations using sensitivity analysis of significant parameters, such as energy and geometrical conditions. PMID:16381723

Oliveira, A D; Oliveira, C

2005-01-01
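The deterministic point-kernel approach being compared against MCNP can be written down in a few lines. The numbers below (source strength, attenuation coefficient, constant build-up factor) are illustrative assumptions, not values from the paper; real build-up factors are energy- and geometry-dependent tabulations, which is exactly the extrapolation risk the abstract highlights:

```python
import math

def dose_rate_point_source(s0, r_cm, mu, thickness_cm, buildup=1.0):
    """Deterministic point-kernel estimate behind a slab shield.

    Uncollided flux falls as exp(-mu*t) / (4*pi*r^2); the build-up factor B
    (here a user-supplied constant, in practice tabulated) restores the
    scattered contribution that simple attenuation ignores.
    """
    attenuation = math.exp(-mu * thickness_cm)
    geometry = 1.0 / (4.0 * math.pi * r_cm**2)
    return s0 * geometry * attenuation * buildup

# Illustrative case: 10 cm of a material with mu = 0.2 /cm, detector at 100 cm.
unshielded = dose_rate_point_source(1e9, 100.0, 0.2, 0.0)
shielded = dose_rate_point_source(1e9, 100.0, 0.2, 10.0, buildup=2.5)
```

The shielded/unshielded ratio here is B·exp(-mu·t) = 2.5·e^-2 ≈ 0.34; a Monte Carlo calculation replaces the tabulated B with explicit transport of scattered particles.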

61

Variance Reduction Methods Applied to Deep-Penetration Monte Carlo Problems.

National Technical Information Service (NTIS)

A review of standard variance reduction methods for deep-penetration Monte Carlo calculations is presented. Comparisons and contrasts are made with methods for nonpenetration and reactor core problems. Difficulties and limitations of the Monte Carlo metho...

S. N. Cramer; J. S. Tang

1986-01-01

62

Solving the many body pairing problem through Monte Carlo methods

NASA Astrophysics Data System (ADS)

Nuclear superconductivity is a central part of quantum many-body dynamics. In mesoscopic systems such as atomic nuclei, this phenomenon is influenced by shell effects, mean-field deformation, particle decay, and by other collective and chaotic components of nucleon motion. The ability to find an exact solution to these pairing correlations is of particular importance. In this presentation we develop and investigate the effectiveness of different methods of attacking the nucleon pairing problem in nuclei. In particular, we concentrate on the Monte Carlo approach. We review the configuration space Monte Carlo techniques, the Suzuki-Trotter breakup of the time evolution operator, and treatment of the pairing problem with non-constant matrix elements. The quasi-spin symmetry allows for a mapping of the pairing problem onto a problem of interacting spins which in turn can be solved using a Monte Carlo approach. The algorithms are investigated for convergence to the true ground state of model systems and calculated ground state energies are compared to those found by an exact diagonalization method. The possibility to include other non-pairing interaction components of the Hamiltonian is also investigated.

Lingle, Mark; Volya, Alexander

2012-03-01

63

Stabilized multilevel Monte Carlo method for stiff stochastic differential equations

A multilevel Monte Carlo (MLMC) method for mean square stable stochastic differential equations with multiple scales is proposed. For such problems, which we call stiff, the performance of MLMC methods based on classical explicit integrators deteriorates because the time step restriction needed to resolve the fastest scales prevents exploiting all the levels of the MLMC approach. We show that by switching to explicit stabilized stochastic methods and balancing the stabilization procedure simultaneously with the hierarchical sampling strategy of MLMC methods, the computational cost for stiff systems is significantly reduced, while keeping the computational algorithm fully explicit and easy to implement. Numerical experiments on linear and nonlinear stochastic differential equations and on a stochastic partial differential equation illustrate the performance of the stabilized MLMC method and corroborate our theoretical findings.

Abdulle, Assyr, E-mail: assyr.abdulle@epfl.ch; Blumenthal, Adrian, E-mail: adrian.blumenthal@epfl.ch

2013-10-15
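To make the MLMC idea concrete, here is a didactic estimator for geometric Brownian motion using the standard (unstabilized) Euler-Maruyama scheme; the parameters are arbitrary, and the stiff-stabilized integrators of the paper are not implemented:

```python
import math
import random

def mlmc_gbm(levels, n_per_level, t=1.0, x0=1.0, mu=0.05, sigma=0.2, rng=random):
    """Multilevel Monte Carlo estimate of E[X_T] for dX = mu*X dt + sigma*X dW.

    Level l uses 2**l Euler-Maruyama steps; coarse and fine paths share the
    same Brownian increments, so the level-correction variance decays with l
    and the telescoping sum leaves only the finest level's bias.
    """
    estimate = 0.0
    for l in range(levels + 1):
        nf = 2 ** l
        dt = t / nf
        total = 0.0
        for _ in range(n_per_level):
            dws = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(nf)]
            xf = x0
            for dw in dws:                  # fine path
                xf += mu * xf * dt + sigma * xf * dw
            if l == 0:
                total += xf
            else:
                xc = x0
                for k in range(nf // 2):    # coarse path: paired increments
                    dw = dws[2 * k] + dws[2 * k + 1]
                    xc += mu * xc * (2 * dt) + sigma * xc * dw
                total += xf - xc
        estimate += total / n_per_level
    return estimate

random.seed(3)
est = mlmc_gbm(levels=5, n_per_level=4000)
# Exact answer for comparison: E[X_1] = exp(mu) ≈ 1.0513
```

Coupling fine and coarse paths through shared increments makes the corrections cheap to estimate; in a full MLMC implementation the sample counts per level are chosen from the observed variances rather than fixed.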

64

Variational Monte Carlo method for electron-phonon coupled systems

NASA Astrophysics Data System (ADS)

We develop a variational Monte Carlo (VMC) method for electron-phonon coupled systems. The VMC method has been extensively used for investigating strongly correlated electrons over the last decades. However, its applications to electron-phonon coupled systems have been severely restricted because of its large Hilbert space. Here, we propose a variational wave function with a large number of variational parameters, which is suitable and tractable for systems with electron-phonon coupling. In the proposed wave function, we implement an unexplored electron-phonon correlation factor, which takes into account the effect of the entanglement between electrons and phonons. The method is applied to systems with diagonal electron-phonon interactions, i.e., interactions between charge densities and lattice displacements (phonons). As benchmarks, we compare VMC results with previous results obtained by the exact diagonalization, the Green function Monte Carlo method and the density matrix renormalization group for the Holstein and Holstein-Hubbard model. From these benchmarks, we show that the present method offers an efficient way to treat strongly coupled electron-phonon systems.

Ohgoe, Takahiro; Imada, Masatoshi

2014-05-01

65

Improved criticality convergence via a modified Monte Carlo iteration method

Nuclear criticality calculations with Monte Carlo codes are normally done using a power iteration method to obtain the dominant eigenfunction and eigenvalue. In the last few years it has been shown that the power iteration method can be modified to obtain the first two eigenfunctions. This modified power iteration method directly subtracts out the second eigenfunction and thus only powers out the third and higher eigenfunctions. The result is a convergence rate to the dominant eigenfunction of |k3|/k1 instead of |k2|/k1. One difficulty is that the second eigenfunction contains particles of both positive and negative weights that must sum somehow to maintain the second eigenfunction. Summing negative and positive weights can be done using point detector mechanics, but this sometimes can be quite slow. We show that an approximate cancellation scheme is sufficient to accelerate the convergence to the dominant eigenfunction. A second difficulty is that for some problems the Monte Carlo implementation of the modified power method has some stability problems. We also show that a simple method deals with this in an effective, but ad hoc manner.

Booth, Thomas E [Los Alamos National Laboratory; Gubernatis, James E [Los Alamos National Laboratory

2009-01-01
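The unmodified power iteration the abstract starts from is simple to state. A generic dense-matrix sketch follows (in the criticality setting the matrix-vector product is replaced by a transport sweep over the fission source; this toy matrix is an assumption for illustration):

```python
def power_iteration(a, x0, iters=200):
    """Standard power iteration for the dominant eigenpair of a matrix A.

    The error decays like (|k2|/k1)**n, which is the rate the modified
    method in the abstract improves to |k3|/k1.
    """
    x = x0[:]
    k = 0.0
    for _ in range(iters):
        y = [sum(a[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
        k = max(abs(v) for v in y)          # normalize; k -> dominant eigenvalue
        x = [v / k for v in y]
    return k, x

a = [[2.0, 1.0], [1.0, 2.0]]                # eigenvalues 3 and 1
k, vec = power_iteration(a, [1.0, 0.0])
# k converges to 3; vec becomes proportional to (1, 1)
```

With |k2|/k1 = 1/3 here, convergence is fast; the abstract's concern is reactor problems where |k2|/k1 is close to 1 and each extra iteration is a full Monte Carlo generation.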

66

Condensed history Monte Carlo methods for photon transport problems

We study methods for accelerating Monte Carlo simulations that retain most of the accuracy of conventional Monte Carlo algorithms. These methods – called Condensed History (CH) methods – have been very successfully used to model the transport of ionizing radiation in turbid systems. Our primary objective is to determine whether or not such methods might apply equally well to the transport of photons in biological tissue. In an attempt to unify the derivations, we invoke results obtained first by Lewis, Goudsmit and Saunderson and later improved by Larsen and Tolar. We outline how two of the most promising of the CH models – one based on satisfying certain similarity relations and the second making use of a scattering phase function that permits only discrete directional changes – can be developed using these approaches. The main idea is to exploit the connection between the space-angle moments of the radiance and the angular moments of the scattering phase function. We compare the results obtained when the two CH models studied are used to simulate an idealized tissue transport problem. The numerical results support our findings based on the theoretical derivations and suggest that CH models should play a useful role in modeling light-tissue interactions.

Bhan, Katherine; Spanier, Jerome

2007-01-01

67

Analysis of real-time networks with monte carlo methods

NASA Astrophysics Data System (ADS)

Communication networks in embedded systems are ever larger and more complex. A better understanding of the dynamics of these networks is necessary to make the best use of them and to lower costs. Today's tools are able to compute upper bounds on the end-to-end delays that a packet sent through the network could suffer. However, in the case of asynchronous networks, those worst end-to-end delay (WEED) cases are rarely observed in practice or through simulations, owing to the scarcity of the situations that lead to worst-case scenarios. A novel approach based on Monte Carlo methods is suggested to study the effects of asynchrony on performance.

Mauclair, C.; Durrieu, G.

2013-12-01

68

ITER Neutronics Modeling Using Hybrid Monte Carlo/Deterministic and CAD-Based Monte Carlo Methods

The immense size and complex geometry of the ITER experimental fusion reactor require the development of special techniques that can accurately and efficiently perform neutronics simulations with minimal human effort. This paper shows the effect of the hybrid Monte Carlo (MC)/deterministic techniques - Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) - in enhancing the efficiency of the neutronics modeling of ITER and demonstrates the applicability of coupling these methods with computer-aided-design-based MC. Three quantities were calculated in this analysis: the total nuclear heating in the inboard leg of the toroidal field coils (TFCs), the prompt dose outside the biological shield, and the total neutron and gamma fluxes over a mesh tally covering the entire reactor. The use of FW-CADIS in estimating the nuclear heating in the inboard TFCs resulted in a factor of ~ 275 increase in the MC figure of merit (FOM) compared with analog MC and a factor of ~ 9 compared with the traditional methods of variance reduction. By providing a factor of ~ 21 000 increase in the MC FOM, the radiation dose calculation showed how the CADIS method can be effectively used in the simulation of problems that are practically impossible using analog MC. The total flux calculation demonstrated the ability of FW-CADIS to simultaneously enhance the MC statistical precision throughout the entire ITER geometry. Collectively, these calculations demonstrate the ability of the hybrid techniques to accurately model very challenging shielding problems in reasonable execution times.

Ibrahim, A. [University of Wisconsin]; Mosher, Scott W. [ORNL]; Evans, Thomas M. [ORNL]; Peplow, Douglas E. [ORNL]; Sawan, M. [University of Wisconsin]; Wilson, P. [University of Wisconsin]; Wagner, John C. [ORNL]; Heltemes, Thad [University of Wisconsin, Madison]

2011-01-01

69

Application of Exchange Monte Carlo Method to Ordering Dynamics

NASA Astrophysics Data System (ADS)

The ordering dynamics in spinodal decomposition is an interesting problem. Especially for the case of a conserved order parameter, it is difficult to determine the late-stage growth law because of the slow dynamics. We apply the exchange Monte Carlo method [1] to the ordering dynamics of the three-state Potts model with a conserved order parameter. Even for the case of a deep quench to low temperatures, we have observed rapid domain growth; we have thus demonstrated the efficiency of the exchange Monte Carlo method for the ordering process. Although the exchange dynamics is not considered to be related to a real one, we have found that domain growth is controlled by a simple algebraic growth law, R(t) ~ t^1/3. The value is consistent with a direct simulation [2] for the same model. [1] K. Hukushima and K. Nemoto, J. Phys. Soc. Jpn. 65, 1604 (1996). [2] C. Jeppesen and O. G. Mouritsen, Phys. Rev. B 47, 14724 (1993).
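As a hedged sketch of the exchange (replica-exchange) idea, and not of the paper's conserved-order-parameter Potts simulation, the following runs Metropolis chains of a small 1D Ising model at several temperatures and periodically swaps configurations between neighbouring temperatures with the standard acceptance probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]); the system size and temperature ladder are arbitrary choices:

```python
import math
import random

def ising_energy(s):
    # 1D Ising chain with periodic boundary, J = 1
    n = len(s)
    return -sum(s[i] * s[(i + 1) % n] for i in range(n))

def metropolis_sweep(s, beta, rng):
    n = len(s)
    for _ in range(n):
        i = rng.randrange(n)
        # energy change of flipping spin i (periodic via Python's s[-1])
        dE = 2 * s[i] * (s[i - 1] + s[(i + 1) % n])
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[i] = -s[i]

def exchange_mc(n=32, betas=(0.2, 0.5, 1.0, 2.0), sweeps=2000, seed=0):
    rng = random.Random(seed)
    reps = [[rng.choice((-1, 1)) for _ in range(n)] for _ in betas]
    swaps = 0
    for _ in range(sweeps):
        for s, b in zip(reps, betas):
            metropolis_sweep(s, b, rng)
        k = rng.randrange(len(betas) - 1)      # attempt one replica exchange
        dE = ising_energy(reps[k]) - ising_energy(reps[k + 1])
        if rng.random() < math.exp(min(0.0, (betas[k] - betas[k + 1]) * dE)):
            reps[k], reps[k + 1] = reps[k + 1], reps[k]
            swaps += 1
    return reps, swaps
```

The exchanges let the cold replica escape metastable states by borrowing decorrelated configurations from the hot ones, which is the mechanism the abstract exploits for deep quenches.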

Okabe, Yutaka

1997-08-01

70

Heavy Deformed Nuclei in the Shell Model Monte Carlo Method

We extend the shell model Monte Carlo approach to heavy deformed nuclei using a new proton-neutron formalism. The low excitation energies of such nuclei necessitate low-temperature calculations, for which a stabilization method is implemented in the canonical ensemble. We apply the method to study a well-deformed rare-earth nucleus, {sup 162}Dy. The single-particle model space includes the 50-82 shell plus 1f{sub 7/2} orbital for protons and the 82-126 shell plus 0h{sub 11/2}, 1g{sub 9/2} orbitals for neutrons. We show that the spherical shell model reproduces well the rotational character of {sup 162}Dy within this model space. We also calculate the level density of {sup 162}Dy and find it to be in excellent agreement with the experimental level density, which we extract from several experiments.

Alhassid, Y.; Fang, L. [Center for Theoretical Physics, Sloane Physics Laboratory, Yale University, New Haven, Connecticut 06520 (United States); Nakada, H. [Department of Physics, Graduate School of Science, Chiba University, Inage, Chiba 263-8522 (Japan)

2008-08-22

71

Monte Carlo N-particle simulation of neutron-based sterilisation of anthrax contamination

Objective To simulate the neutron-based sterilisation of anthrax contamination by Monte Carlo N-particle (MCNP) 4C code. Methods Neutrons are elementary particles that have no charge. They are 20 times more effective than electrons or γ-rays in killing anthrax spores on surfaces and inside closed containers. Neutrons emitted from a 252Cf neutron source are in the 100 keV to 2 MeV energy range. A 2.5 MeV D–D neutron generator can create neutrons at up to 10^13 n s^-1 with current technology. All these enable an effective and low-cost method of killing anthrax spores. Results There is no effect on neutron energy deposition on the anthrax sample when using a reflector that is thicker than its saturation thickness. Among all three reflecting materials tested in the MCNP simulation, paraffin is the best because it has the thinnest saturation thickness and is easy to machine. The MCNP radiation dose and fluence simulation calculation also showed that the MCNP-simulated neutron fluence that is needed to kill the anthrax spores agrees with previous analytical estimations very well. Conclusion The MCNP simulation indicates that a 10 min neutron irradiation from a 0.5 g 252Cf neutron source or a 1 min neutron irradiation from a 2.5 MeV D–D neutron generator may kill all anthrax spores in a sample. This is a promising result because a 2.5 MeV D–D neutron generator output >10^13 n s^-1 should be attainable in the near future. This indicates that we could use a D–D neutron generator to sterilise anthrax contamination within several seconds.

Liu, B; Xu, J; Liu, T; Ouyang, X

2012-01-01

72

Explicitly restarted Arnoldi's method for Monte Carlo nuclear criticality calculations

NASA Astrophysics Data System (ADS)

A Monte Carlo implementation of explicitly restarted Arnoldi's method is developed for estimating eigenvalues and eigenvectors of the transport-fission operator in the Boltzmann transport equation. Arnoldi's method is an improvement over the power method which has been used for decades. Arnoldi's method can estimate multiple eigenvalues by orthogonalising the resulting fission sources from the application of the transport-fission operator. As part of implementing Arnoldi's method, a solution to the physically impossible---but mathematically real---negative fission sources is developed. The fission source is discretized using a first order accurate spatial approximation to allow for orthogonalization and normalization of the fission source required for Arnoldi's method. The eigenvalue estimates from Arnoldi's method are compared with published results for homogeneous, one-dimensional geometries, and it is found that the eigenvalue and eigenvector estimates are accurate within statistical uncertainty. The discretization of the fission sources creates an error in the eigenvalue estimates. A second order accurate spatial approximation is created to reduce the error in eigenvalue estimates. An inexact application of the transport-fission operator is also investigated to reduce the computational expense of estimating the eigenvalues and eigenvectors. The convergence of the fission source and eigenvalue in Arnoldi's method is analysed and compared with the power method. Arnoldi's method is superior to the power method for convergence of the fission source and eigenvalue because both converge nearly instantly for Arnoldi's method while the power method may require hundreds of iterations to converge. This is shown using both homogeneous and heterogeneous one-dimensional geometries with dominance ratios close to 1.
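A dense-matrix sketch of explicitly restarted Arnoldi iteration may clarify the linear-algebra skeleton; the thesis embeds this inside a Monte Carlo transport-fission operator, which is not reproduced here, and the restart strategy below (restart with the dominant Ritz vector) is one simple choice among several:

```python
import numpy as np

def arnoldi_iteration(A, v0, m):
    """Build an m-step Arnoldi factorization: A V_m ~= V_{m+1} H_m."""
    n = A.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # orthogonalise against the basis so far
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        h = np.linalg.norm(w)
        H[j + 1, j] = h
        if h < 1e-12:                   # invariant subspace found; stop early
            return V[:, : j + 1], H[: j + 2, : j + 1]
        V[:, j + 1] = w / h
    return V, H

def restarted_arnoldi(A, m=10, restarts=20, seed=0):
    """Estimate the dominant eigenpair by repeatedly restarting Arnoldi
    from the current dominant Ritz vector."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    theta = None
    for _ in range(restarts):
        V, H = arnoldi_iteration(A, v, m)
        k = H.shape[1]
        evals, evecs = np.linalg.eig(H[:k, :k])
        j = int(np.argmax(np.abs(evals)))
        theta = evals[j]
        v = np.real(V[:, :k] @ evecs[:, j])   # restart vector
        if np.linalg.norm(A @ v - np.real(theta) * v) < 1e-10 * np.abs(theta):
            break
    return theta, v
```

Because each restart works with an m-dimensional Krylov subspace rather than a single iterate, convergence is typically far faster than the power method, which is the comparison the abstract draws.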

Conlin, Jeremy Lloyd

73

Grand-canonical Monte Carlo method for Donnan equilibria

NASA Astrophysics Data System (ADS)

We present a method that enables the direct simulation of Donnan equilibria. The method is based on a grand-canonical Monte Carlo scheme that properly accounts for the unequal partitioning of small ions on the two sides of a semipermeable membrane, and can be used to determine the Donnan electrochemical potential, osmotic pressure, and other system properties. Positive and negative ions are considered separately in the grand-canonical moves. This violates instantaneous charge neutrality, which is usually considered a prerequisite for simulations using the Ewald sum to compute the long-range charge-charge interactions. In this work, we show that if the system is neutral only in an average sense, it is still possible to get reliable results in grand-canonical simulations of electrolytes performed with Ewald summation of electrostatic interactions. We compare our Donnan method with a theory that accounts for differential partitioning of the salt, and find excellent agreement for the electrochemical potential, the osmotic pressure, and the salt concentrations on the two sides. We also compare our method with experimental results for a system of charged colloids confined by a semipermeable membrane and to a constant-NVT simulation method, which does not account for salt partitioning. Our results for the Donnan potential are much closer to the experimental results than the constant-NVT method, highlighting the important effect of salt partitioning on the Donnan potential.
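The grand-canonical insertion/deletion machinery can be illustrated on the simplest possible system, an ideal gas, where the textbook acceptance probabilities min(1, zV/(N+1)) for insertion and min(1, N/(zV)) for deletion yield a Poisson distribution of particle number with mean zV. This sketch deliberately omits the electrostatics, Ewald summation, and semipermeable membrane of the paper:

```python
import random

def gcmc_ideal_gas(z=2.0, volume=10.0, steps=200_000, seed=3):
    """Grand-canonical MC for an ideal gas at activity z: the stationary
    distribution of the particle count N is Poisson with mean z*volume."""
    rng = random.Random(seed)
    n = 0
    total, samples = 0, 0
    for step in range(steps):
        if rng.random() < 0.5:                            # attempt insertion
            if rng.random() < min(1.0, z * volume / (n + 1)):
                n += 1
        else:                                             # attempt deletion
            if n > 0 and rng.random() < min(1.0, n / (z * volume)):
                n -= 1
        if step >= steps // 10:                           # discard burn-in
            total += n
            samples += 1
    return total / samples
```

With z = 2 and V = 10 the average occupancy converges to zV = 20; interacting systems add a Boltzmann factor for the energy change to the same two moves.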

Barr, S. A.; Panagiotopoulos, A. Z.

2012-07-01

74

Modeling granular phosphor screens by Monte Carlo methods

The intrinsic phosphor properties are of significant importance for the performance of phosphor screens used in medical imaging systems. In previous analytical-theoretical and Monte Carlo studies on granular phosphor materials, values of optical properties, and light interaction cross sections were found by fitting to experimental data. These values were then employed for the assessment of phosphor screen imaging performance. However, it was found that, depending on the experimental technique and fitting methodology, the optical parameters of a specific phosphor material varied within a wide range of values, i.e., variations of light scattering with respect to light absorption coefficients were often observed for the same phosphor material. In this study, x-ray and light transport within granular phosphor materials was studied by developing a computational model using Monte Carlo methods. The model was based on the intrinsic physical characteristics of the phosphor. Input values required to feed the model can be easily obtained from tabulated data. The complex refractive index was introduced and microscopic probabilities for light interactions were produced, using Mie scattering theory. Model validation was carried out by comparing model results on x-ray and light parameters (x-ray absorption, statistical fluctuations in the x-ray to light conversion process, number of emitted light photons, output light spatial distribution) with previous published experimental data on Gd{sub 2}O{sub 2}S:Tb phosphor material (Kodak Min-R screen). Results showed the dependence of the modulation transfer function (MTF) on phosphor grain size and material packing density. It was predicted that granular Gd{sub 2}O{sub 2}S:Tb screens of high packing density and small grain size may exhibit considerably better resolution and light emission properties than the conventional Gd{sub 2}O{sub 2}S:Tb screens, under similar conditions (x-ray incident energy, screen thickness)

Liaparinos, Panagiotis F.; Kandarakis, Ioannis S.; Cavouras, Dionisis A.; Delis, Harry B.; Panayiotakis, George S. [Department of Medical Physics, Faculty of Medicine, University of Patras, 265 00 Patras (Greece); Department of Medical Instruments Technology, Technological Educational Institute, 122 10 Athens (Greece); Department of Medical Physics, Faculty of Medicine, University of Patras, 265 00 Patras (Greece)]

2006-12-15

75

Monte Carlo simulation of photon coherent behavior in half-infinite turbid medium by scaling method

NASA Astrophysics Data System (ADS)

The Monte Carlo simulation procedure is accelerated by a scaling method based on baseline data from a standard Monte Carlo calculation in a turbid medium. The Gaussian beam is modeled as a hyperboloid of one sheet under actual conditions to obtain the distribution of photons on the sample surface. The depth-dependent coherent signal and photon distribution are calculated in this way, which is important for the reconstruction of optical parameters by inverse Monte Carlo. Numerical results have verified this method in turbid media of different optical parameters with acceptable relative errors.

Lin, Lin; Zhang, Mei; Liu, Huazhu

2012-12-01

76

LISA data analysis using Markov chain Monte Carlo methods

The Laser Interferometer Space Antenna (LISA) is expected to simultaneously detect many thousands of low-frequency gravitational wave signals. This presents a data analysis challenge that is very different to the one encountered in ground based gravitational wave astronomy. LISA data analysis requires the identification of individual signals from a data stream containing an unknown number of overlapping signals. Because of the signal overlaps, a global fit to all the signals has to be performed in order to avoid biasing the solution. However, performing such a global fit requires the exploration of an enormous parameter space with a dimension upwards of 50 000. Markov Chain Monte Carlo (MCMC) methods offer a very promising solution to the LISA data analysis problem. MCMC algorithms are able to efficiently explore large parameter spaces, simultaneously providing parameter estimates, error analysis, and even model selection. Here we present the first application of MCMC methods to simulated LISA data and demonstrate the great potential of the MCMC approach. Our implementation uses a generalized F-statistic to evaluate the likelihoods, and simulated annealing to speed convergence of the Markov chains. As a final step we supercool the chains to extract maximum likelihood estimates, and estimates of the Bayes factors for competing models. We find that the MCMC approach is able to correctly identify the number of signals present, extract the source parameters, and return error estimates consistent with Fisher information matrix predictions.
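A minimal random-walk Metropolis sampler illustrates the core MCMC machinery; the F-statistic likelihood, simulated annealing, and supercooling of the paper are omitted, and the Gaussian "posterior" below is purely illustrative:

```python
import math
import random

def metropolis(logpdf, x0, steps, prop_std, rng):
    """Random-walk Metropolis: returns a list of samples from logpdf."""
    x, lp = x0, logpdf(x0)
    out = []
    for _ in range(steps):
        y = x + rng.gauss(0.0, prop_std)
        lq = logpdf(y)
        if rng.random() < math.exp(min(0.0, lq - lp)):
            x, lp = y, lq
        out.append(x)
    return out

# Target: a Gaussian "posterior" with mean 3 and standard deviation 2.
rng = random.Random(7)
samples = metropolis(lambda x: -0.5 * ((x - 3.0) / 2.0) ** 2, 0.0, 50_000, 2.0, rng)
samples = samples[5_000:]                 # discard burn-in
mean = sum(samples) / len(samples)
```

The same accept/reject kernel generalises to the 50 000-dimensional LISA parameter space; what changes is the likelihood evaluation and the proposal design, not the algorithm's skeleton.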

Cornish, Neil J.; Crowder, Jeff [Department of Physics, Montana State University, Bozeman, Montana 59717 (United States)

2005-08-15

77

LISA data analysis using Markov chain Monte Carlo methods

NASA Astrophysics Data System (ADS)

The Laser Interferometer Space Antenna (LISA) is expected to simultaneously detect many thousands of low-frequency gravitational wave signals. This presents a data analysis challenge that is very different to the one encountered in ground based gravitational wave astronomy. LISA data analysis requires the identification of individual signals from a data stream containing an unknown number of overlapping signals. Because of the signal overlaps, a global fit to all the signals has to be performed in order to avoid biasing the solution. However, performing such a global fit requires the exploration of an enormous parameter space with a dimension upwards of 50 000. Markov Chain Monte Carlo (MCMC) methods offer a very promising solution to the LISA data analysis problem. MCMC algorithms are able to efficiently explore large parameter spaces, simultaneously providing parameter estimates, error analysis, and even model selection. Here we present the first application of MCMC methods to simulated LISA data and demonstrate the great potential of the MCMC approach. Our implementation uses a generalized F-statistic to evaluate the likelihoods, and simulated annealing to speed convergence of the Markov chains. As a final step we supercool the chains to extract maximum likelihood estimates, and estimates of the Bayes factors for competing models. We find that the MCMC approach is able to correctly identify the number of signals present, extract the source parameters, and return error estimates consistent with Fisher information matrix predictions.

Cornish, Neil J.; Crowder, Jeff

2005-08-01

78

Seriation in paleontological data using markov chain Monte Carlo methods.

Given a collection of fossil sites with data about the taxa that occur in each site, the task in biochronology is to find good estimates for the ages or ordering of sites. We describe a full probabilistic model for fossil data. The parameters of the model are natural: the ordering of the sites, the origination and extinction times for each taxon, and the probabilities of different types of errors. We show that the posterior distributions of these parameters can be estimated reliably by using Markov chain Monte Carlo techniques. The posterior distributions of the model parameters can be used to answer many different questions about the data, including seriation (finding the best ordering of the sites) and outlier detection. We demonstrate the usefulness of the model and estimation method on synthetic data and on real data on large late Cenozoic mammals. As an example, for the sites with a large number of occurrences of common genera, our methods give orderings whose correlation with geochronologic ages is 0.95. PMID:16477311

Puolamäki, Kai; Fortelius, Mikael; Mannila, Heikki

2006-02-01

79

Seriation in Paleontological Data Using Markov Chain Monte Carlo Methods

Given a collection of fossil sites with data about the taxa that occur in each site, the task in biochronology is to find good estimates for the ages or ordering of sites. We describe a full probabilistic model for fossil data. The parameters of the model are natural: the ordering of the sites, the origination and extinction times for each taxon, and the probabilities of different types of errors. We show that the posterior distributions of these parameters can be estimated reliably by using Markov chain Monte Carlo techniques. The posterior distributions of the model parameters can be used to answer many different questions about the data, including seriation (finding the best ordering of the sites) and outlier detection. We demonstrate the usefulness of the model and estimation method on synthetic data and on real data on large late Cenozoic mammals. As an example, for the sites with a large number of occurrences of common genera, our methods give orderings whose correlation with geochronologic ages is 0.95.

Puolamaki, Kai; Fortelius, Mikael; Mannila, Heikki

2006-01-01

80

Convolution/superposition using the Monte Carlo method

NASA Astrophysics Data System (ADS)

The convolution/superposition calculations for radiotherapy dose distributions are traditionally performed by convolving polyenergetic energy deposition kernels with TERMA (total energy released per unit mass) precomputed in each voxel of the irradiated phantom. We propose an alternative method in which the TERMA calculation is replaced by random sampling of photon energy, direction and interaction point. Then, a direction is randomly sampled from the angular distribution of the monoenergetic kernel corresponding to the photon energy. The kernel ray is propagated across the phantom, and energy is deposited in each voxel traversed. An important advantage of the explicit sampling of energy is that spectral changes with depth are automatically accounted for. No spectral or kernel hardening corrections are needed. Furthermore, the continuous sampling of photon direction allows us to model sharp changes in fluence, such as those due to collimator tongue-and-groove. The use of explicit photon direction also facilitates modelling of situations where a given voxel is traversed by photons from many directions. Extra-focal radiation, for instance, can therefore be modelled accurately. Our method also allows efficient calculation of a multi-segment/multi-beam IMRT plan by sampling of beam angles and field segments according to their relative weights. For instance, an IMRT plan consisting of seven 14 × 12 cm2 beams with a total of 300 field segments can be computed in 15 min on a single CPU, with 2% statistical fluctuations at the isocentre of the patient's CT phantom divided into 4 × 4 × 4 mm3 voxels. The calculation contains all aperture-specific effects, such as tongue and groove, leaf curvature and head scatter. This contrasts with deterministic methods in which each segment is given equal importance, and the time taken scales with the number of segments. 
Thus, the Monte Carlo superposition provides a simple, accurate and efficient method for complex radiotherapy dose calculations.
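The sampling scheme described above can be caricatured in one dimension. The spectrum, attenuation coefficient, and exponential kernel below are invented stand-ins for the polyenergetic spectra and point-spread kernels of the paper, chosen only to show the "sample an energy, sample an interaction point, spread the released energy downstream" loop:

```python
import math
import random

def mc_superposition(depths=50, voxel=0.4, mu=0.2, photons=20_000, seed=5):
    """Toy 1-D analogue: sample a photon energy and interaction depth, then
    superpose the released energy over downstream voxels with an
    exponentially decaying kernel."""
    rng = random.Random(seed)
    spectrum = [(2.0, 0.7), (6.0, 0.3)]        # (energy in MeV, probability)
    kernel_mu = 1.0                            # kernel attenuation per cm
    dose = [0.0] * depths
    for _ in range(photons):
        r, e = rng.random(), spectrum[-1][0]
        acc = 0.0
        for energy, p in spectrum:             # sample energy from the spectrum
            acc += p
            if r < acc:
                e = energy
                break
        d = -math.log(1.0 - rng.random()) / mu # free path to interaction point
        i0 = int(d / voxel)
        if i0 >= depths:
            continue                           # photon leaves the phantom
        # deposit kernel energy in voxels downstream of the interaction
        for i in range(i0, depths):
            t = (i - i0) * voxel
            frac = math.exp(-kernel_mu * t) - math.exp(-kernel_mu * (t + voxel))
            dose[i] += e * frac
    return dose
```

Because the energy is sampled explicitly per photon, spectral changes with depth are accounted for automatically, with no hardening correction, which is the advantage the abstract emphasises.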

Naqvi, Shahid A.; Earl, Matthew A.; Shepard, David M.

2003-07-01

81

Summary form only given. We propose a combination of the path-integral Monte Carlo method and the maximum entropy method as a comprehensive solution to the problem of pricing derivative securities. The path-integral Monte Carlo approach relies on the probability distribution of the complete histories of the underlying security, from the present time to the contract expiration date. In our

M. S. Makivic

1996-01-01

82

Monte Carlo method for simulating optical coherence tomography signal in homogeneous turbid media

A Monte Carlo method for holistically simulating optical coherence tomography (OCT) has been developed. The geometrical-optics implementation of the OCT probe optical system was combined with Monte Carlo simulation of photon propagation in homogeneous turbid media to simulate the OCT signal. A hyperboloid model describing the Gaussian beam's photon propagation made the simulation more accurate, and the importance sampling method has

Fengsheng Zhang; Matti Kinnunen; Alexey Popov; Risto Myllylä

2008-01-01

83

A Dedicated Circuit for Charged Particles Simulation Using the Monte Carlo Method

We present a dedicated integrated circuit for the simulation of charged particles based on Monte Carlo method. The Monte Carlo method leads to the solution of a particular form of the integro-differential Boltzmann equation (non-linear charge transport in semiconductors) permitting a direct statistical computation of the simulated particles distribution function in the phase space. This circuit should be the building

Andy Negoi; Stefan Bara; Jacques Zimmermann; Alain Guyot

1999-01-01

84

Simulation is often used to predict the response of gamma-ray spectrometers in technology viability and comparative studies for homeland and national security scenarios. Candidate radiation transport methods generally fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are the most heavily used in the detection community and are particularly effective for calculating pulse-height spectra

Leon E. Smith; Christopher J. Gesh; Richard T. Pagh; Erin A. Miller; Mark W. Shaver; Eric D. Ashbaker; Michael T. Batdorf; J. Edward Ellis; William R. Kaye; Ronald J. McConn; George H. Meriwether; Jennifer J. Ressler; Andrei B. Valsan; Todd A. Wareing

2008-01-01

85

Quantum Monte Carlo methods and lithium cluster properties

Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively, in good agreement with the experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.
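The variational QMC step of the method can be sketched for the textbook 1-D harmonic oscillator rather than for lithium: with hbar = m = omega = 1 the exact ground-state energy is 1/2, and the trial function exp(-alpha x^2 / 2) is exact at alpha = 1. Everything below is this standard pedagogical example, not the paper's CASSCF-guided wavefunctions:

```python
import math
import random

def vmc_energy(alpha, steps=20_000, seed=11):
    """Variational MC for the 1-D harmonic oscillator (hbar = m = omega = 1)
    with trial wavefunction psi(x) = exp(-alpha * x**2 / 2)."""
    rng = random.Random(seed)
    x, e_sum, n = 0.0, 0.0, 0
    for step in range(steps):
        y = x + rng.uniform(-1.0, 1.0)
        # Metropolis step on |psi|^2 = exp(-alpha * x**2)
        if rng.random() < math.exp(min(0.0, -alpha * (y * y - x * x))):
            x = y
        if step >= steps // 10:          # discard burn-in
            # local energy E_L(x) = alpha/2 + x^2 (1 - alpha^2) / 2
            e_sum += 0.5 * alpha + 0.5 * x * x * (1.0 - alpha * alpha)
            n += 1
    return e_sum / n
```

At alpha = 1 the local energy is constant (the zero-variance property of an exact trial function), while any other alpha gives an average above 1/2, in line with the variational principle.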

Owen, R.K.

1990-12-01

86

Quantum Monte Carlo methods and lithium cluster properties. [Atomic clusters

Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) (0.1981), 0.1895(9) (0.1874(4)), 0.1530(34) (0.1599(73)), 0.1664(37) (0.1724(110)), 0.1613(43) (0.1675(110)) Hartrees for lithium clusters n = 1 through 5, respectively, in good agreement with the experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) (0.0203(12)), 0.0188(10) (0.0220(21)), 0.0247(8) (0.0310(12)), 0.0253(8) (0.0351(8)) Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.

Owen, R.K.

1990-12-01

87

Studies in Reliability Theory and Survival Analysis and in Markov Chain Monte Carlo Methods.

National Technical Information Service (NTIS)

The focus of the work has been the development of Markov chain 'Monte Carlo' methods in Bayesian analysis, with emphasis on applications to survival or reliability data. We have emphasized the development of methods of dealing with analysis of sensitivity...

H. Doss

1998-01-01

88

A General Formulation of the Monte Carlo Method and 'Strong Laws' for Certain Sequential Schemes.

National Technical Information Service (NTIS)

The paper gives a new, general definition of the Monte Carlo method, couched in rigorous mathematical terms, which covers all known procedures which are described as 'Monte Carlo', and may suggest possible lines for future exploration of this field as a b...

J. J. Halton

1966-01-01

89

A Path Integral Monte Carlo Method for the Quasielastic Response

We formulate the quasielastic response of a non-relativistic many-body system at zero temperature in terms of ground state density matrix elements and real time path integrals that embody the final state interactions. While the former provide the weight for a conventional Monte Carlo calculation, the latter require a more sophisticated treatment. We argue that the recently developed Stationary Phase

Carlo Carraro

1990-01-01

90

Study of Certain Monte Carlo Search and Optimisation Methods.

National Technical Information Service (NTIS)

Studies are described which might lead to the development of a search and optimisation facility for the Monte Carlo criticality code MONK. The facility envisaged could be used to maximise a function of k-effective with respect to certain parameters of the...

C. Budd

1984-01-01

91

Capacity evaluation of MIMO systems by Monte-Carlo methods

In order to establish the theoretical information foundation of multiple-input multiple-output (MIMO) channel techniques and their applications, analysis and simulations of the capacity of MIMO channels based on Monte-Carlo methods are presented in this paper. We conclude that with a circularly symmetric Gaussian transmit vector, the extremely high capacity of the Rayleigh fading MIMO channel can
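The Monte-Carlo capacity estimate referred to above amounts to averaging log2 det(I + (SNR/nt) H H^H) over random channel draws. The sketch below assumes i.i.d. circularly symmetric complex Gaussian channel entries and channel knowledge at the receiver only; the antenna counts and SNR are arbitrary:

```python
import numpy as np

def mimo_ergodic_capacity(nt, nr, snr_db, trials=2000, seed=0):
    """Monte-Carlo estimate of the ergodic capacity (bit/s/Hz) of an i.i.d.
    Rayleigh-fading MIMO channel: C = E[log2 det(I + (snr/nt) H H^H)]."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    total = 0.0
    for _ in range(trials):
        # nr x nt channel matrix with unit-variance complex Gaussian entries
        h = (rng.standard_normal((nr, nt))
             + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2.0)
        _, logdet = np.linalg.slogdet(np.eye(nr) + (snr / nt) * h @ h.conj().T)
        total += logdet / np.log(2.0)
    return total / trials
```

Comparing, say, a 1x1 with a 4x4 configuration at the same SNR exhibits the near-linear capacity growth with the number of antennas that motivates MIMO.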

Wang Chao; Wu Shunjun; Zhang Linrang; Tao Xiaoyan

2003-01-01

92

Criticality accident detector coverage analysis using the Monte Carlo Method.

National Technical Information Service (NTIS)

As a result of the need for a more accurate computational methodology, the Los Alamos-developed Monte Carlo code MCNP is used to show the implementation of a more advanced and accurate methodology in criticality accident detector analysis. This paper will...

J. F. Zino; K. C. Okafor

1993-01-01

93

NASA Astrophysics Data System (ADS)

This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescent (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV 109Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, and a Modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that Modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in calcium content (by mass) of the anthropometric leg phantom's cortical bone was shown to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001). 
Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate lead content of a human leg up to 4%. The patellar bone structure in which the fluorescent photons originate was found to vary dramatically with measurement angle. The relative contribution of lead signal from the patella declined from 65% to 27% when rotated 30°. However, rotation of the source-detector about the patella from 0 to 45° demonstrated no significant effect on the net K XRF response at the knee.

Lodwick, Camille J.

94

This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform ''real'' commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling

John C Wagner; Scott W Mosher; Thomas M Evans; Douglas E. Peplow; John A Turner

2010-01-01

95

This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling

John C Wagner; Scott W Mosher; Thomas M Evans; Douglas E. Peplow; John A Turner

2011-01-01

96

An analysis method for evaluating gradient-index fibers based on Monte Carlo method

NASA Astrophysics Data System (ADS)

We propose a numerical analysis method for evaluating gradient-index (GRIN) optical fiber using the Monte Carlo method. GRIN optical fibers are widely used in optical information processing and communication applications, such as image scanners, fax machines, and optical sensors. An important factor which decides the performance of a GRIN optical fiber is the modulation transfer function (MTF). The MTF of a fiber is affected by manufacturing process conditions such as temperature. Actual measurements of the MTF of a GRIN optical fiber using this method closely match those made by conventional methods. Experimentally, the MTF is measured using a square wave chart, and is then calculated based on the distribution of output strength on the chart. In contrast, the general method using computers evaluates the MTF based on a spot diagram made by an incident point light source, but the results differ greatly from those obtained by experiment. In this paper, we explain the manufacturing process which affects the performance of GRIN optical fibers and a new evaluation method, similar to the experimental system, based on the Monte Carlo method. We verified that it more closely matches the experimental results than the conventional method.

Yoshida, S.; Horiuchi, S.; Ushiyama, Z.; Yamamoto, M.

2011-05-01

97

Efficient, automated Monte Carlo methods for radiation transport

Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k+1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed.

Kong, Rong; Ambrose, Martin [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States)]; Spanier, Jerome [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States); Beckman Laser Institute and Medical Clinic, University of California, 1002 Health Science Road E., Irvine, CA 92612 (United States)], E-mail: jspanier@uci.edu

2008-11-20

98

Efficient, Automated Monte Carlo Methods for Radiation Transport

Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k + 1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed.

Kong, Rong; Ambrose, Martin; Spanier, Jerome

2012-01-01

99

Modelling of Indicatrix with Ellipsoidal Anisotropy by the Monte Carlo Method.

National Technical Information Service (NTIS)

A simple modelling method is proposed for the anisotropic scattering indicatrix of particles, which occurs in Monte-Carlo calculations for radiation interactions with matter. The degree of anisotropy can be chosen arbitrarily by means of a corresponding c...

D. A. Usikov

1975-01-01

100

National Technical Information Service (NTIS)

Kinetic Monte Carlo (KMC) simulation methods were utilized to study the grain growth and sintering of nanocrystalline metal compacts. Sintering is the process used to fabricate materials from powders by densifying the powder compact at elevated temperatur...

A. M. Hay

2011-01-01

101

Condition for relaxed Monte Carlo method of solving systems of linear equations

In this paper, we point out the limitation of the paper entitled “Solving Systems of Linear Equations with Relaxed Monte Carlo Method” published in this journal (Tan in J. Supercomput. 22:113–123, 2002). We argue that the relaxed Monte Carlo method presented in Sect. 7 of the paper is only correct under the condition that the coefficient matrix A must be diagonal

Guoming Lai; Xiaola Lin

2011-01-01

102

NASA Astrophysics Data System (ADS)

In particle transport computations, the Monte Carlo simulation method is a widely used algorithm. Several Monte Carlo codes are available that perform particle transport simulations. However, the geometry packages and geometric modeling capabilities of Monte Carlo codes are limited: they cannot handle complicated geometries made up of complex surfaces. Previous research exists that takes advantage of the modeling capabilities of CAD software. The two major approaches are the converter approach and the CAD-engine-based approach. By carefully analyzing the strategies and algorithms of these two approaches, the CAD-engine-based approach has been identified as the more promising one. Though currently the performance of this approach is not satisfactory, there is room for improvement, and the development and implementation of an improved CAD-based approach is the focus of this thesis. Algorithms to accelerate the CAD-engine-based approach are studied. The major acceleration algorithm is the oriented bounding box algorithm, which is used in computer graphics; the differences between computer graphics and particle transport applications were considered and the algorithm was modified for particle transport. The major work of this thesis has been the development of the MCNPX/CGM code and the testing, benchmarking, and implementation of the acceleration algorithms. MCNPX is a Monte Carlo code and CGM is a CAD geometry engine. A facet representation of the geometry, generated from the CAD model, produced the least slowdown of the Monte Carlo code, and the oriented bounding box algorithm was the fastest acceleration technique adopted in this work. With the facet model, the slowdown of MCNPX/CGM relative to MCNPX was reduced to a factor of 3. MCNPX/CGM has been successfully validated against test problems in medical physics and a fusion energy device. MCNPX/CGM gives exactly the same results as standard MCNPX when an MCNPX geometry model is available. For the complicated fusion device, a stellarator, the MCNPX/CGM results closely match a one-dimensional model calculation performed by the ARIES team.

Wang, Mengkuo

103

NASA Astrophysics Data System (ADS)

We present a multiple-set overlapping-domain decomposed strategy for parallelizing the Monte Carlo Synthetic Acceleration method. Monte Carlo Synthetic Acceleration methods use the Neumann-Ulam class of Monte Carlo solvers for linear systems to accelerate a fixed-point iteration sequence. Effective parallel algorithms for these methods require the parallelization of the underlying Neumann-Ulam solvers. To do this in a domain decomposed environment, we borrow strategies traditionally implemented in Monte Carlo particle transport to parallelize the problem. The parallel Neumann-Ulam and multiple-set overlapping-domain decomposition algorithms are presented along with parallel scaling data for the resulting implementation using the Titan Cray XK7 machine at Oak Ridge National Laboratory.
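The Neumann-Ulam class of solvers referenced above estimates the solution of a linear system x = Hx + b by scoring random walks. The following serial sketch of the collision estimator is illustrative only (the function name and termination rule are our own, not the parallel, domain-decomposed implementation this entry describes); it assumes each row of H has absolute sum below 1 so walks terminate:

```python
import random

def neumann_ulam(H, b, n_walks=20000, seed=7):
    """Estimate the solution of x = H x + b by forward random walks
    (collision estimator).  Assumes sum_j |H[i][j]| < 1 for every row i,
    so each walk terminates with positive probability."""
    rng = random.Random(seed)
    x = []
    for i in range(len(b)):
        total = 0.0
        for _ in range(n_walks):
            state, weight, score = i, 1.0, b[i]
            while True:
                row = H[state]
                r, acc, nxt = rng.random(), 0.0, -1
                # move to j with probability |H[state][j]|; otherwise absorb
                for j, h in enumerate(row):
                    acc += abs(h)
                    if r < acc:
                        nxt = j
                        break
                if nxt < 0:
                    break
                weight *= 1.0 if row[nxt] > 0 else -1.0
                state = nxt
                score += weight * b[state]
            total += score
        x.append(total / n_walks)
    return x
```

For H = [[0.2, 0.3], [0.1, 0.4]] and b = [1, 1], the exact solution of (I - H)x = b is x = [2, 2], and the estimate converges toward it at the usual 1/sqrt(n_walks) Monte Carlo rate.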

Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.

2014-06-01

104

Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement

Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of irregularly shaped food products based on 3D reconstruction, but 3D reconstruction carries a high computational cost, and some 3D-reconstruction-based volume measurement methods have low accuracy. Another approach measures the volume of an object with the Monte Carlo method, which performs volume measurement using random points: it only requires information on whether each random point falls inside or outside the object and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of each food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was then performed to measure the volume from the information extracted from the binary images. The experimental results show that the proposed method provides high accuracy and precision compared with the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
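The core idea, sampling random points and testing only whether each lies inside the object, can be sketched as follows. This is a generic illustration with a hypothetical function name, not the authors' heuristic adjustment or their vision pipeline:

```python
import random

def monte_carlo_volume(inside, bounds, n=200_000, seed=42):
    """Estimate the volume of an object from membership tests alone.
    `inside(x, y, z)` returns True when the point lies in the object;
    `bounds` is ((xmin, xmax), (ymin, ymax), (zmin, zmax))."""
    rng = random.Random(seed)
    (x0, x1), (y0, y1), (z0, z1) = bounds
    box = (x1 - x0) * (y1 - y0) * (z1 - z0)
    hits = sum(
        inside(rng.uniform(x0, x1), rng.uniform(y0, y1), rng.uniform(z0, z1))
        for _ in range(n)
    )
    # fraction of hits times the bounding-box volume estimates the volume
    return box * hits / n

# Sanity check on a unit sphere, whose true volume is 4*pi/3 ~= 4.18879.
vol = monte_carlo_volume(lambda x, y, z: x * x + y * y + z * z <= 1.0,
                         ((-1, 1), (-1, 1), (-1, 1)))
```

The relative error shrinks as 1/sqrt(n), which is why the paper's heuristic adjustment of the sampling matters for speed.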

Siswantoro, Joko; Idrus, Bahari

2014-01-01

105

Monte Carlo method with heuristic adjustment for irregularly shaped food product volume measurement.

Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of irregularly shaped food products based on 3D reconstruction, but 3D reconstruction carries a high computational cost, and some 3D-reconstruction-based volume measurement methods have low accuracy. Another approach measures the volume of an object with the Monte Carlo method, which performs volume measurement using random points: it only requires information on whether each random point falls inside or outside the object and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of each food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was then performed to measure the volume from the information extracted from the binary images. The experimental results show that the proposed method provides high accuracy and precision compared with the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.

Siswantoro, Joko; Prabuwono, Anton Satria; Abdullah, Azizi; Idrus, Bahari

2014-01-01

106

Monte Carlo method for multiparameter estimation in coupled chaotic systems.

We address the problem of estimating multiple parameters of a chaotic dynamical model from the observation of a scalar time series. We assume that the series is produced by a chaotic system with the same functional form as the model, so that synchronization between the two systems can be achieved by an adequate coupling. In this scenario, we propose an efficient Monte Carlo optimization algorithm that iteratively updates the model parameters in order to minimize the synchronization error. As an example, we apply it to jointly estimate the three static parameters of a chaotic Lorenz system.

Mariño, Inés P; Míguez, Joaquín

2007-11-01

107

Improved directed loop method for Quantum Monte Carlo simulations

NASA Astrophysics Data System (ADS)

Efficient schemes called directed loops for cluster updates in Quantum Monte Carlo simulations have recently been proposed, which improve both the efficiency and precision for simulations of quantum models. In this work we show that local detailed balance is not necessary in order to fulfill global detailed balance during the construction of these loops. We therefore propose to insert additional degrees of freedom into the directed loop equations, resulting in even more efficient algorithms. Our approach works directly in the natural representation of an extended space where the matrix elements of the operators defining the worm (broken worldline segments) are taken into account.

Alet, Fabien; Wessel, Stefan; Troyer, Matthias

2003-11-01

108

Monte Carlo and Quasi-Monte Carlo Methods in Derivative Financial Pricing

In this dissertation, we discuss the generation of low-discrepancy sequences, the randomization of these sequences, and transformation methods for generating normally distributed random variables. Two well-known methods for generating normally distributed numbers are considered, namely the Box-Muller and inverse transformation methods. Some researchers and financial engineers have claimed that it is incorrect to use the Box-Muller method with low-discrepancy
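The two transformations named in this entry can be sketched as follows. This is a generic illustration, not the dissertation's implementation; `inverse_transform` leans on the standard library's `statistics.NormalDist` for the inverse normal CDF:

```python
import math
import random
import statistics

def box_muller(u1, u2):
    """Two independent U(0,1) draws -> two independent N(0,1) draws."""
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

def inverse_transform(u):
    """One U(0,1) draw -> one N(0,1) draw via the inverse normal CDF.
    The inverse method maps each input point to exactly one output point,
    which is why it preserves the structure of a low-discrepancy input."""
    return statistics.NormalDist().inv_cdf(u)

rng = random.Random(0)
samples = []
for _ in range(50_000):
    # 1 - random() keeps u1 in (0, 1], avoiding log(0)
    z1, z2 = box_muller(1.0 - rng.random(), rng.random())
    samples.append(z1)
    samples.append(z2)
mean = sum(samples) / len(samples)
var = sum(z * z for z in samples) / len(samples)
```

With pseudorandom input, both methods produce samples whose mean and variance approach 0 and 1; the claimed problems with Box-Muller arise only when it is fed quasi-random pairs.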

Ahmet Goncu

2009-01-01

109

Using a semi-guided Monte Carlo method for faster simulation of forced outages of generating units

The authors describe a semi-guided Monte Carlo technique to schedule forced outage periods of generating units. The key advantage of this method lies in its ability to create forced-outage schedules that are statistically balanced. When tested against the pure Monte Carlo method, the semi-guided Monte Carlo technique converged much faster and greatly reduced the number of Monte Carlo iterations required

A. Scully; A. Harpur; K. D. Le; J. T. Day; M. J. Malone; T. E. Mousseau

1992-01-01

110

Digitally reconstructed radiograph generation by an adaptive Monte Carlo method

NASA Astrophysics Data System (ADS)

Digitally reconstructed radiograph (DRR) generation is an important step in several medical imaging applications such as 2D-3D image registration, where it is a rate-limiting step. We present a novel DRR generation technique, called the adaptive Monte Carlo volume rendering (AMCVR) algorithm. It is based on the conventional Monte Carlo volume rendering (MCVR) technique, which is very efficient for rendering large medical datasets. In contrast to the MCVR, the AMCVR does not produce sample points by sampling directly in the entire volume domain. Instead, it adaptively divides the entire volume domain into sub-domains using importance separation and then performs sampling in these sub-domains. As a result, the AMCVR produces almost the same image quality as the MCVR while using only half as many samples, and increases projection speed by a factor of 2. Moreover, the AMCVR is suitable for fast memory addressing, which further improves processing speed. Independent of the size of the medical dataset, the AMCVR achieves a frame rate of about 15 Hz on a 2.8 GHz Pentium 4 PC while generating DRRs of reasonably good quality.

Li, Xiaoliang; Yang, Jie; Zhu, Yuemin

2006-06-01

111

Advanced computational methods for nodal diffusion, Monte Carlo, and SN problems

This document describes progress on five efforts for improving the effectiveness of computational methods for particle diffusion and transport problems in nuclear engineering: (1) Multigrid methods for obtaining rapidly converging solutions of nodal diffusion problems. An alternative line relaxation scheme is being implemented into a nodal diffusion code, and simplified P2 has been implemented into this code. (2) Local Exponential Transform method for variance reduction in Monte Carlo neutron transport calculations. This work yielded predictions for both 1-D and 2-D x-y geometry that were better than conventional Monte Carlo with splitting and Russian roulette. (3) Asymptotic Diffusion Synthetic Acceleration methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems. New transport differencing schemes have been obtained that allow solution by the conjugate gradient method, and the convergence of this approach is rapid. (4) Quasidiffusion (QD) methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems on irregular spatial grids. A symmetrized QD method has been developed in a form that results in a system of two self-adjoint equations that are readily discretized and efficiently solved. (5) Response history method for speeding up the Monte Carlo calculation of electron transport problems. This method was implemented into the MCNP Monte Carlo code. In addition, we have developed and implemented a parallel time-dependent Monte Carlo code on two massively parallel processors.

Martin, W.R.

1993-01-01

112

In the present study, the energy dependence of the response of some popular thermoluminescent dosemeters (TLDs), such as LiF:Mg,Ti, LiF:Mg,Cu,P and CaSO4:Dy, has been investigated for synchrotron radiation in the energy range of 10-34 keV. The study utilised experimental, Monte Carlo and analytical methods. The Monte Carlo calculations were based on the EGSnrc and FLUKA codes. The calculated energy responses of all the TLDs using the EGSnrc and FLUKA codes show excellent agreement with each other. The analytically calculated response shows good agreement with the Monte Carlo calculated response in the low-energy region. In the case of CaSO4:Dy, the Monte Carlo-calculated energy response is smaller by a factor of 3 at all energies in comparison with the experimental response when polytetrafluoroethylene (PTFE) (75% by wt) is included in the Monte Carlo calculations. When PTFE is ignored in the Monte Carlo calculations, the difference between the calculated and experimental response decreases (both responses are comparable above 25 keV). For the LiF-based TLDs, the Monte Carlo-based response shows reasonable agreement with the experimental response.

Bakshi, A K; Chatterjee, S; Palani Selvam, T; Dhabekar, B S

2010-07-01

113

Equivalence of four Monte Carlo methods for photon migration in turbid media.

In the field of photon migration in turbid media, different Monte Carlo methods are usually employed to solve the radiative transfer equation. We consider four different Monte Carlo methods, widely used in the field of tissue optics, that are based on four different ways to build photons' trajectories. We provide both theoretical arguments and numerical results showing the statistical equivalence of the four methods. In the numerical results we compare the temporal point spread functions calculated by the four methods for a wide range of the optical properties in the slab and semi-infinite medium geometry. The convergence of the methods is also briefly discussed.

Sassaroli, Angelo; Martelli, Fabrizio

2012-10-01

114

Order N cluster Monte Carlo method for spin systems with long-range interactions

An efficient O(N) cluster Monte Carlo method for Ising models with long-range interactions is presented. Our novel algorithm does not introduce any cutoff for the interaction range and thus strictly fulfills detailed balance. The realized stochastic dynamics is equivalent to that of the conventional Swendsen–Wang algorithm, which requires O(N^2) operations per Monte Carlo sweep if applied to long-range interacting

Kouki Fukui; Synge Todo

2009-01-01

115

Time-step limits for a Monte Carlo Compton-scattering method

We perform a stability analysis of a Monte Carlo method for simulating the Compton scattering of photons by free electrons in high-energy-density applications and develop time-step limits that avoid unstable and oscillatory solutions. Implementing this Monte Carlo technique in multiphysics problems typically requires evaluating the material temperature at its beginning-of-time-step value, which can lead to this undesirable behavior. With a set of numerical examples, we demonstrate the efficacy of our time-step limits.

Densmore, Jeffery D [Los Alamos National Laboratory; Warsa, James S [Los Alamos National Laboratory; Lowrie, Robert B [Los Alamos National Laboratory

2009-01-01

116

Bold Diagrammatic Monte Carlo Method Applied to Fermionized Frustrated Spins

NASA Astrophysics Data System (ADS)

We demonstrate, by considering the triangular lattice spin-1/2 Heisenberg model, that Monte Carlo sampling of skeleton Feynman diagrams within the fermionization framework offers a universal first-principles tool for strongly correlated lattice quantum systems. We observe the fermionic sign blessing: cancellation of higher-order diagrams leading to a finite convergence radius of the series. We calculate the magnetic susceptibility of the triangular-lattice quantum antiferromagnet in the correlated paramagnet regime and reveal a surprisingly accurate microscopic correspondence with its classical counterpart at all accessible temperatures. The extrapolation of the observed relation to zero temperature suggests the absence of magnetic order in the ground state. We critically examine the implications of this unusual scenario.

Kulagin, S. A.; Prokof'ev, N.; Starykh, O. A.; Svistunov, B.; Varney, C. N.

2013-02-01

117

Monte Carlo Methods to Model Radiation Interactions and Induced Damage

NASA Astrophysics Data System (ADS)

This review is devoted to the analysis of some Monte Carlo (MC) simulation programmes which have been developed to describe radiation interaction with biologically relevant materials. Current versions of the MC codes Geant4 (GEometry ANd Tracking 4), PENELOPE (PENetration and Energy Loss of Positrons and Electrons), EPOTRAN (Electron and POsitron TRANsport), and LEPTS (Low-Energy Particle Track Simulation) are described. Main features of each model, such as the type of radiation considered, the energy range covered by primary and secondary particles, the types of interactions included in the simulation, and the target geometries handled, are discussed. Special emphasis is placed on recent developments that, together with (still emerging) new databases that include adequate data for biologically relevant materials, bring us continuously closer to a realistic, physically meaningful description of radiation damage in biological tissues.

Muñoz, Antonio; Fuss, Martina C.; Cortés-Giraldo, M. A.; Incerti, Sébastien; Ivanchenko, Vladimir; Ivanchenko, Anton; Quesada, J. M.; Salvat, Francesc; Champion, Christophe; Gómez-Tejedor, Gustavo García

118

Heterogeneity in ultrathin films simulated by Monte Carlo method

NASA Astrophysics Data System (ADS)

The 3D composition profile of ultra-thin Pd films on Cu(001) has been experimentally determined using low energy electron microscopy (LEEM) [1]. Quantitative measurements of the alloy concentration profile near steps show that the Pd distribution in the 3rd layer is heterogeneous due to step overgrowth during Pd deposition. Interestingly, the Pd distribution in the 2nd layer is also heterogeneous, and appears to be correlated with the distribution in the 1st layer. We describe Monte Carlo simulations that show that the correlation is due to Cu-Pd attraction, and that the 2nd layer Pd is, in fact, laterally equilibrated. By comparing measured and simulated concentration profiles, we can estimate this attraction within a simple bond counting model. [1] J. B. Hannon, J. Sun, K. Pohl, G. L. Kellogg, Phys. Rev. Lett. 96, 246103 (2006)

Sun, Jiebing; Hannon, James B.; Kellogg, Gary L.; Pohl, Karsten

2007-03-01

119

Estimation of the measurement uncertainty based on quasi Monte-Carlo method in optical measurement

NASA Astrophysics Data System (ADS)

Because measurement uncertainty is an important parameter for evaluating the reliability of measurement results, reliable methods to evaluate it are essential, especially in precise optical measurement. Though the Monte Carlo (MC) method has been applied to estimate measurement uncertainty in recent years, it has shortcomings such as slow convergence and unstable results, which limit its application. To evaluate measurement uncertainty in a fast and robust way, the quasi-Monte Carlo (QMC) method is adopted in this paper. In the estimation process, more homogeneous (quasi-random) numbers are generated from the Halton sequence and then transformed into random numbers with the desired distribution. An experiment on cylinder measurement is presented. The results show that the quasi-Monte Carlo method has a higher convergence rate and more stable evaluation results than the Monte Carlo method, and can therefore be applied efficiently to evaluate measurement uncertainty.
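The Halton construction used above is easy to sketch: each coordinate is the radical inverse of the point index in a distinct prime base. The following generic illustration (not the paper's uncertainty-evaluation procedure) uses 2-D Halton points to estimate a simple area:

```python
def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in `base`, index >= 1."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

# 2-D Halton points (bases 2 and 3) estimating the area of the quarter
# unit disk; the true value is pi/4 ~= 0.785398.
n = 4096
hits = sum(1 for i in range(1, n + 1)
           if halton(i, 2) ** 2 + halton(i, 3) ** 2 <= 1.0)
estimate = hits / n
```

Because consecutive Halton points fill the unit square far more evenly than pseudorandom draws, such estimates typically converge faster and more stably, which is the behavior the paper reports for uncertainty evaluation.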

Jing, Hui; Huang, Mei-fa; Zhong, Yan-ru; Kuang, Bing; Jiang, Xiang-qian

2008-03-01

120

A Monte Carlo Synthetic-Acceleration Method for Solving the Thermal Radiation Diffusion Equation

We present a novel synthetic-acceleration based Monte Carlo method for solving the equilibrium thermal radiation diffusion equation in three dimensions. The algorithm performance is compared against traditional solution techniques using a Marshak benchmark problem and a more complex multiple material problem. Our results show that not only can our Monte Carlo method be an effective solver for sparse matrix systems, but also that it performs competitively with deterministic methods including preconditioned Conjugate Gradient while producing numerically identical results. We also discuss various aspects of preconditioning the method and its general applicability to broader classes of problems.

Evans, Thomas M [ORNL]; Mosher, Scott W [ORNL]; Slattery, Stuart [University of Wisconsin, Madison]

2014-01-01

121

MONTE CARLO RESEARCH SERIES: PRE-SAMPLED ENERGY DEPOSITION METHOD FOR GAMMA HEATING CALCULATIONS

A simplified method for Monte Carlo gamma heating computation is described. This method uses pre-sampled energy deposition data rather than energy deposition values which are generated during computation. The method is restricted to systems with a maximum of twenty regions. (auth)

J. R. Beeler; M. D. McDonald; J. F. Quinlan

1959-01-01

122

MONTE CARLO RESEARCH SERIES: PRE-SAMPLED ENERGY DEPOSITION METHOD FOR GAMMA HEATING CALCULATIONS

A method for Monte Carlo gamma heating computation is described. This method uses pre-sampled energy deposition data rather than energy deposition values which are generated during computation. The method is restricted to systems with a maximum of twenty space regions. (auth)

J. R. Beeler; M. D. McDonald; J. F. Quinlan

1961-01-01

123

Multivariate Monte Carlo Methods for the Reflection Grating Spectrometers on XMM-Newton

We propose a novel multivariate Monte Carlo method as an efficient and flexible approach to analyzing extended X-ray sources with the Reflection Grating Spectrometer (RGS) on XMM-Newton. A multi-dimensional interpolation method is used to efficiently calculate the response function for the RGS in conjunction with an arbitrary spatially varying spectral model. Several methods of event comparison that effectively compare the multivariate RGS data are discussed. The use of a multi-dimensional instrument Monte Carlo also creates many opportunities for the use of complex astrophysical Monte Carlo calculations in diffuse X-ray spectroscopy. The methods presented here could be generalized to other X-ray instruments as well.

Peterson, J.

2004-11-10

124

Prediction of Protein-DNA binding by Monte Carlo method

NASA Astrophysics Data System (ADS)

We present an analysis and prediction of protein-DNA binding specificity based on the hydrogen bonding between DNA, protein, and auxiliary clusters of water molecules. Zif268, glucocorticoid receptor, λ-repressor mutant, HIN-recombinase, and tramtrack protein-DNA complexes are studied. Hydrogen bonds are approximated by the Lennard-Jones potential with a cutoff distance of 3.2 Å between the hydrogen and the acceptor atoms and an angular component based on a dipole-dipole interaction. We use a three-stage docking algorithm: (1) geometric hashing that matches pairs of hydrogen bonding sites; (2) least-squares minimization of pairwise distances to filter out insignificant matches; and (3) Monte Carlo stochastic search to minimize the energy of the system. More information can be found in our first paper on this subject [Y. Deng et al., J. Computational Chemistry (1995)]. Results show that the biologically correct base pair is selected preferentially when there are two or more strong hydrogen bonds (with a Lennard-Jones potential lower than -0.20) that bind it to the protein. Predicted sequences are less stable in the case of weaker bonding sites. In general, the inclusion of water bridges does increase the number of base pairs for which correct specificity is predicted.

Deng, Yuefan; Eisenberg, Moises; Korobka, Alex

1997-08-01

125

Methods of Monte Carlo electron transport in particle-in-cell codes

An algorithm has been implemented in CCUBE and ISIS to treat electron transport in materials using a Monte Carlo method in addition to the electron dynamics determined by the self-consistent electromagnetic, relativistic, particle-in-cell simulation codes that have been used extensively to model generation of electron beams and intense microwave production. Incorporation of a Monte Carlo method to model the transport of electrons in materials (conductors and dielectrics) in a particle-in-cell code represents a giant step toward realistic simulation of the physics of charged-particle beams. The basic Monte Carlo method used in the implementation includes both scattering of electrons by background atoms and energy degradation.

Kwan, T.J.T.; Snell, C.M.

1985-01-01

126

A comparative study of Monte-Carlo methods for multitarget tracking

In this paper, we address the problem of tracking an unknown and time-varying number of targets and their states from noisy observations available at discrete intervals of time. Attention has recently focused on the role of simulation-based approaches, including Monte Carlo methods, in solving the multitarget tracking problem, as these methods are able to perform well for nonlinear and non-Gaussian data models. In

Francois Septier; Julien Cornebise; Simon Godsill; Yves Delignon

2011-01-01

127

Auxiliary-Field Quantum Monte Carlo Method for Strongly Paired Fermions.

National Technical Information Service (NTIS)

We solve the zero-temperature unitary Fermi gas problem by incorporating a BCS importance function into the auxiliary-field quantum Monte Carlo method. We demonstrate that this method does not suffer from a sign problem and that it increases the efficienc...

J. Carlson; K. E. Schmidt; S. Gandolfi; S. Zhang

2011-01-01

128

A Quasi-Monte Carlo Method for Elliptic Boundary Value Problems

In this paper we present and analyze a quasi-Monte Carlo method for solving elliptic boundary value problems. Our method transforms the given partial differential equation into an integral equation by employing a well-known local integral representation. The kernel in this integral equation representation can be used as a transition density function to define a Markov process used in
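A well-known Markov-process construction for elliptic problems of this kind is the walk-on-spheres method. As a hedged illustration of the general idea only (pseudorandom rather than quasi-random, and not this paper's specific kernel construction), the following sketch solves Laplace's equation on the unit disk:

```python
import math
import random

def walk_on_spheres(x, y, boundary_value, n_walks=5000, eps=1e-3, seed=3):
    """Estimate u(x, y), where u solves Laplace's equation on the unit
    disk with u = boundary_value on the circle.  Each step jumps to a
    uniformly random point on the largest circle centred at the current
    position that stays inside the domain; the walk stops within `eps`
    of the boundary and scores the boundary value there."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        px, py = x, y
        while True:
            d = 1.0 - math.hypot(px, py)  # distance to the boundary
            if d < eps:
                break
            theta = rng.uniform(0.0, 2.0 * math.pi)
            px += d * math.cos(theta)
            py += d * math.sin(theta)
        total += boundary_value(px, py)
    return total / n_walks
```

For boundary data g(x, y) = x, the harmonic extension is u(x, y) = x, so `walk_on_spheres(0.3, 0.2, lambda x, y: x)` should return roughly 0.3. Replacing the pseudorandom draws with a quasi-random sequence is the essence of the quasi-Monte Carlo variant discussed in the entry.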

Michael Mascagni; Aneta Karaivanova; Yaohang Li

2001-01-01

129

National Technical Information Service (NTIS)

A method based on Schwinger-Dyson equations for calculating block renormalized couplings by Monte Carlo renormalization group is presented. A numerical study for the 2-dimensional 0(3) nonlinear sigma-model is done. Results show that the method can be saf...

M. Falcioni; G. Martinelli; M. L. Paciello; G. Parisi; B. Taglienti

1985-01-01

130

The Monte Carlo method was used to calculate the distribution of a finite-diameter laser beam in tissue through a convolution operation. A photo-thermal ablation model was set up on the basis of the Pennes bioheat equation, and the tissue temperature distribution was simulated with the finite element method in ANSYS using this model. The simulation results are helpful for the clinical application of lasers.

Wang, Yafen; Bai, Jingfeng

2013-07-01

131

A hybrid multiscale kinetic Monte Carlo method for simulation of copper electrodeposition

A hybrid multiscale kinetic Monte Carlo (HMKMC) method for speeding up the simulation of copper electrodeposition is presented. The fast diffusion events are simulated deterministically with a heterogeneous diffusion model which considers site-blocking effects of additives. Chemical reactions are simulated by an accelerated (tau-leaping) method for discrete stochastic simulation which adaptively selects exact discrete stochastic simulation for the appropriate reaction

Zheming Zheng; Ryan M. Stephens; Richard D. Braatz; Richard C. Alkire; Linda R. Petzold

2008-01-01

132

Comparison of the Monte Carlo adjoint-weighted and differential operator perturbation methods

Two perturbation theory methodologies are implemented for k-eigenvalue calculations in the continuous-energy Monte Carlo code, MCNP6. A comparison of the accuracy of these techniques, the differential operator and adjoint-weighted methods, is performed numerically and analytically. Typically, the adjoint-weighted method shows better performance over a larger range; however, there are exceptions.

Kiedrowski, Brian C [Los Alamos National Laboratory]; Brown, Forrest B [Los Alamos National Laboratory]

2010-01-01

133

A Monte Carlo method for uncertainty evaluation implemented on a distributed computing system

NASA Astrophysics Data System (ADS)

This paper is concerned with bringing together the topics of uncertainty evaluation using a Monte Carlo method, distributed computing for data parallel applications and pseudo-random number generation. A study of a measurement system to estimate the absolute thermodynamic temperatures of two high-temperature blackbodies by measuring the ratios of their spectral radiances is used to illustrate the application of these topics. The uncertainties associated with the estimates of the temperatures are evaluated and used to inform the experimental realization of the system. The difficulties associated with determining model sensitivity coefficients, and demonstrating whether a linearization of the model is adequate, are avoided by using a Monte Carlo method as an approach to uncertainty evaluation. A distributed computing system is used to undertake the Monte Carlo calculation because the computational effort required to evaluate the measurement model can be significant. In order to ensure that the results provided by a Monte Carlo method implemented on a distributed computing system are reliable, consideration is given to the approach to generating pseudo-random numbers, which constitutes a key component of the Monte Carlo procedure.
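As a toy illustration of the Monte Carlo approach to uncertainty evaluation described above, the sketch below propagates two uncertain inputs through a hypothetical radiance-ratio model; the model form and the input uncertainties are invented for illustration and are not taken from the paper:

```python
import random
import statistics

def model(r1, r2):
    # Hypothetical measurement model mapping two spectral-radiance
    # ratios to a temperature estimate (invented for illustration).
    return 1273.0 * (r1 / r2) ** 0.25

random.seed(1)
N = 100_000
samples = sorted(model(random.gauss(1.00, 0.01), random.gauss(0.80, 0.01))
                 for _ in range(N))
mean = statistics.fmean(samples)
stdev = statistics.stdev(samples)
# A 95% coverage interval read directly off the empirical distribution:
# no sensitivity coefficients and no linearization of the model needed.
low, high = samples[int(0.025 * N)], samples[int(0.975 * N)]
```

The coverage interval comes straight from the sorted sample, which is exactly how the Monte Carlo approach avoids the linearization difficulties the abstract mentions.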

Esward, T. J.; de Ginestous, A.; Harris, P. M.; Hill, I. D.; Salim, S. G. R.; Smith, I. M.; Wichmann, B. A.; Winkler, R.; Woolliams, E. R.

2007-10-01

134

An Efficient Monte-Carlo Method for Calculating Free Energy in Long-Range Interacting Systems

We present an efficient Monte-Carlo method for long-range interacting systems to calculate free energy as a function of an order parameter. In this method, a variant of the Wang-Landau method regarding the order parameter is combined with the stochastic cutoff method, which has recently been developed for long-range interacting systems. This method enables us to calculate free energy in long-range

Kazuya Watanabe; Munetaka Sasaki

2011-01-01

135

Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.

Perfetti, Christopher M [ORNL]; Martin, William R [University of Michigan]; Rearden, Bradley T [ORNL]; Williams, Mark L [ORNL]

2012-01-01

136

Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport

Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.

McKinley, M S; Brooks III, E D; Daffin, F

2004-12-13

137

A new method to assess the statistical convergence of Monte Carlo solutions

Accurate Monte Carlo confidence intervals (CIs), which are formed with an estimated mean and an estimated standard deviation, can only be created when the number of particle histories N becomes large enough so that the central limit theorem can be applied. The Monte Carlo user has a limited number of marginal methods to assess the fulfillment of this condition, such as statistical error reduction proportional to 1/√N with error magnitude guidelines and third and fourth moment estimators. A new method is presented here to assess the statistical convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores. Related work in this area includes the derivation of analytic score distributions for a two-state Monte Carlo problem. Score distribution histograms have been generated to determine when a small number of histories accounts for a large fraction of the result. This summary describes initial studies of empirical Monte Carlo history score PDFs created from score histograms of particle transport simulations. 7 refs., 1 fig.
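The 1/√N error reduction mentioned above can be illustrated with a toy tally; the deliberately skewed score model is invented, standing in for transport tallies where rare histories carry disproportionately large scores:

```python
import math
import random

random.seed(2)

def history_score():
    # A deliberately skewed toy "history score": the cube of an
    # exponential variate has a heavy right tail.
    return random.expovariate(1.0) ** 3

def mc_estimate(n):
    scores = [history_score() for _ in range(n)]
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    return mean, math.sqrt(var / n)   # estimated standard error of the mean

# The estimated standard error should shrink roughly as 1/sqrt(N):
# a 100-fold increase in histories gives roughly a 10-fold reduction.
_, se_1k = mc_estimate(1_000)
_, se_100k = mc_estimate(100_000)
```

For heavy-tailed score distributions like this one, the variance estimate itself converges slowly, which is precisely why the abstract argues for inspecting the empirical score PDF rather than trusting the error bar alone.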

Forster, R.A.

1991-01-01

138

Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.

Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.

2008-10-31

139

REVIEWS OF TOPICAL PROBLEMS: The Monte Carlo method in lattice gauge theories

NASA Astrophysics Data System (ADS)

Applications of the Monte Carlo method in lattice gauge theories, including applications in quantum chromodynamics, are reviewed. The lattice formulation of gauge theories, the corresponding concepts, and the corresponding methods are introduced. The Monte Carlo method as it is applied to lattice gauge theories is described. Some specific calculations by the Monte Carlo method and their results are examined. The phase structure of lattice gauge theories with Abelian groups ZN and U(1) (a lattice formulation of a compact electrodynamics) is discussed. The non-Abelian groups SU(2), SU(3) (a lattice formulation of quantum chromodynamics), and others are also discussed. The procedure for calculating quantities referring to the continuum limit by the Monte Carlo method is discussed for quantum chromodynamics. A detailed analysis is made of results calculated for the continuum theory: string tensions and interaction potentials, which show that quarks are confined; glueball mass spectra; and the temperature of the transition from the phase of hadronic matter to the phase of a quark-gluon plasma. Masses calculated for hadrons consisting of quarks are briefly discussed.

Makeenko, Yu M.

1984-06-01

140

NASA Astrophysics Data System (ADS)

A thermodynamically guided atomistic Monte Carlo methodology is presented for simulating systems beyond equilibrium by expanding the statistical ensemble to include a tensorial variable accounting for the overall structure of the system subjected to flow. For a given shear rate, the corresponding tensorial conjugate field is determined iteratively through independent nonequilibrium molecular dynamics simulations. Test simulations for the effect of flow on the conformation of a C50H102 polyethylene liquid show that the two methods (expanded Monte Carlo and nonequilibrium molecular dynamics) provide identical results.

Baig, C.; Mavrantzas, V. G.

2007-12-01

141

Bose condensation of two-dimensional dipolar excitons: Simulation by the quantum Monte Carlo method

The Bose condensation of two-dimensional dipolar excitons in quantum wells is numerically studied by the diffusion Monte Carlo simulation method. The correlation, microscopic, thermodynamic, and spectral characteristics are calculated. It is shown that, in structures of coupled quantum wells, in which low-temperature features of exciton luminescence have presently been observed, dipolar excitons form a strongly correlated system.

Lozovik, Yu. E.; Kurbakov, I. L. [Russian Academy of Sciences, Institute of Spectroscopy (Russian Federation)]; Astrakharchik, G. E. [Polytechnic University of Catalonia, E-08034 (Spain)]; Willander, M. [Linköping University, SE-581 83, Institute of Science and Technology (ITN) (Sweden)], E-mail: lozovik@isan.troitsk.ru

2008-02-15

142

FPGA-driven pseudorandom number generators aimed at accelerating Monte Carlo methods

Hardware acceleration in the high performance computing (HPC) context is of growing interest, particularly in the field of Monte Carlo methods, where field programmable gate array (FPGA) technology has proven an effective medium, capable of speeding up the execution of stochastic processes by several orders of magnitude. The widespread use of reconfigurable hardware for stochastic simulation has gathered a significant
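As a software sketch of the kind of lightweight generator that maps well onto FPGA logic, the following models a 32-bit xorshift generator (shifts and XORs only) driving a simple Monte Carlo computation; the shift triple (13, 17, 5) is Marsaglia's classic choice, and the example is illustrative rather than taken from the paper:

```python
def xorshift32(state):
    # One step of Marsaglia's 32-bit xorshift: shifts and XORs only,
    # which is why this generator family maps so naturally onto FPGAs.
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state & 0xFFFFFFFF

def estimate_pi(n, seed=12345):
    # Drive a simple Monte Carlo computation (area of a quarter circle)
    # directly from the raw generator output.
    s, inside = seed, 0
    for _ in range(n):
        s = xorshift32(s)
        x = s / 2**32
        s = xorshift32(s)
        y = s / 2**32
        if x * x + y * y < 1.0:
            inside += 1
    return 4.0 * inside / n

pi_hat = estimate_pi(200_000)
```

Each step costs a handful of bitwise operations and no multiplications, which is the property that makes such generators attractive for hardware pipelines.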

Tarek Ould Bachir; Jean-Jules Brault

2009-01-01

143

Bayesian Phylogenetic Inference Using DNA Sequences: A Markov Chain Monte Carlo Method

An improved Bayesian method is presented for estimating phylogenetic trees using DNA sequence data. The birth-death process with species sampling is used to specify the prior distribution of phylogenies and ancestral speciation times, and the posterior probabilities of phylogenies are used to estimate the maximum posterior probability (MAP) tree. Monte Carlo integration is used to integrate over the ancestral
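The Metropolis-Hastings machinery underlying such Bayesian MCMC methods can be sketched on a one-dimensional toy posterior; the data are invented, and the target (a proportion with a uniform prior) is deliberately far simpler than a posterior over phylogenies:

```python
import math
import random

random.seed(3)

# Toy unnormalized log-posterior: a proportion p after observing
# 7 successes in 10 trials with a uniform prior (invented data).
def log_post(p):
    if not 0.0 < p < 1.0:
        return -math.inf
    return 7 * math.log(p) + 3 * math.log(1.0 - p)

p, chain = 0.5, []
for _ in range(50_000):
    proposal = p + random.gauss(0.0, 0.1)          # random-walk proposal
    if math.log(random.random()) < log_post(proposal) - log_post(p):
        p = proposal                               # Metropolis accept
    chain.append(p)

# Discard burn-in; the exact posterior is Beta(8, 4) with mean 2/3.
posterior_mean = sum(chain[10_000:]) / len(chain[10_000:])
```

Phylogenetic samplers replace the scalar random-walk proposal with moves on tree topologies and branch lengths, but the accept/reject logic is the same.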

Ziheng Yang; Bruce Rannala

144

An Enskog based Monte Carlo method for high Knudsen number non-ideal gas flows

A Monte Carlo method based on the Enskog equation for dense gas is developed by considering high density effect on collision rates and both repulsive and attractive molecular interactions for a Lennard–Jones fluid. The appropriate internal energy exchange model is introduced with consistency with the collision model. The equation of state for a non-ideal gas is therefore derived involving the

Moran Wang; Zhixin Li

2007-01-01

145

Monte Carlo calculations of pressure vessel (PV) neutron fluence have been performed to benchmark discrete ordinates (S_N) transport methods. These calculations, along with measured data at the ex-vessel cavity dosimeter, provide a means to examine various uncertainties associated with the S_N transport calculations. For the purpose of the PV fluence calculations, synthesized 3-D deterministic models are shown to produce results

J. C. Wagner; A. Haghighat; B. G. Petrovic; H. L. Hanshaw

146

The objective of this paper is to assess the chronological variations in the available transfer capability (ATC) caused by uncertainties associated with hourly load fluctuations and equipment unavailabilities. The system states resulting from these uncertainties are generated using the Monte Carlo method with sequential simulation (MCMSS). The ATC for each generated state is evaluated through a linear dc optimal power

Anselmo B. Rodrigues; Maria G. Da Silva

2007-01-01

147

Calculations of the structural parameters of aqueous solutions of propanol by the Monte Carlo method

NASA Astrophysics Data System (ADS)

Aqueous solutions of propanol were systematically studied by the Monte Carlo method over a wide concentration range at 273 K. The radial distribution functions were calculated and analyzed. This allowed us to perform a detailed analysis of changes in the local structure in the propanol-water system as the content of the alcohol in water increased.

Atamas', A. A.; Atamas', N. A.; Bulavin, L. A.

2009-05-01

148

A new measure of irregularity of distribution and quasi-Monte Carlo methods for global optimization

Measures of irregularity of distribution, such as discrepancy and dispersion, play a major role in quasi-Monte Carlo methods for integration and optimization. In this paper, a new measure of irregularity of distribution, called volume-dispersion, is introduced. Its relation to the discrepancy and traditional dispersion, and its applications in global optimization problems are investigated. Optimization errors are bounded in terms of
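A minimal example of the kind of low-discrepancy point set such measures describe: a 2-D Halton sequence used for quasi-Monte Carlo integration. This is the standard Halton construction, not the volume-dispersion measure introduced in the paper:

```python
def halton(i, base):
    # Radical-inverse (van der Corput) digit reversal of i in the given base.
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

# 2-D low-discrepancy point set: Halton sequence in bases 2 and 3.
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 4097)]

# Quasi-Monte Carlo estimate of the integral of x*y over the unit square;
# the exact value is 1/4, and the evenly spread points converge to it
# faster than pseudo-random sampling would at the same sample size.
qmc_estimate = sum(x * y for x, y in points) / len(points)
```

Discrepancy and dispersion quantify exactly how evenly such point sets fill the domain, which is what controls the integration and optimization error bounds the abstract refers to.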

Xiaoqun Wang

2002-01-01

149

Blind Data Detection in the Presence of PLL Phase Noise by Sequential Monte Carlo Method

In this paper, based on a sequential Monte Carlo method, a computationally efficient algorithm is presented for blind data detection in the presence of residual phase noise generated at the output of the phase tracking loop employed in a digital receiver. The basic idea is to treat the transmitted symbols as

E. Panayirci; H. A. Cirpan; M. Moeneclaey; N. Noels

2006-01-01

150

Reaction-field and Ewald summation methods in Monte Carlo simulations of dipolar liquid crystals

The treatment of the long-range dipolar interactions in simulations of mesogens is examined. After a brief reformulation of the standard Ewald summation and reaction-field methods in the general context of electrostatics using Green functions, we report the results of Monte Carlo simulations of liquid crystalline phases for L/D = 5 hard spherocylinders (cylinder length L and diameter D)

Alejandro Gil-Villegas; Simon C. McGrother; George Jackson

1997-01-01

151

Quantum Monte Carlo Methods for First Principles Simulation of Liquid Water

ERIC Educational Resources Information Center

Obtaining an accurate microscopic description of water structure and dynamics is of great interest to molecular biology researchers and in the physics and quantum chemistry simulation communities. This dissertation describes efforts to apply quantum Monte Carlo methods to this problem with the goal of making progress toward a fully "ab initio"…

Gergely, John Robert

2009-01-01

152

Application of the Monte-Carlo method in design of laser pumping systems

A mathematical model of a laser pumping system is constructed based on principles of the Monte-Carlo method. The model is employed in a proposed universal technique for analyzing operating conditions in pumping system of arbitrary configuration, permitting choice of optimal dimensions and parameters.

V. G. Dorogov; A. A. Shcherbakov; A. V. Iakovlev

1979-01-01

153

Methods for Monte Carlo simulation of the exospheres of the moon and Mercury

A general form of the integral equation of exospheric transport on moon-like bodies is derived in a form that permits arbitrary specification of time varying physical processes affecting atom creation and annihilation, atom-regolith collisions, adsorption and desorption, and nonplanetocentric acceleration. Because these processes usually defy analytic representation, the Monte Carlo method of solution of the transport equation, the only viable

R. Richard Hodges, Jr.

1980-01-01

154

Construction project schedule risk analysis and assessment using Monte Carlo simulation method

Construction projects are initiated in complex and dynamic environments resulting in circumstances of high uncertainty and risk, which are compounded by demanding time constraints. This paper describes an application of Monte Carlo simulation method to consider and quantify uncertainty in construction scheduling. This paper points out the importance of the construction project scheduling risk analysis and assessment from the perspective
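A minimal sketch of this kind of schedule risk analysis: sample triangular activity durations and read contingency targets off the empirical percentiles. The three sequential activities and their duration estimates are invented for illustration:

```python
import random

random.seed(5)

# Three sequential activities with (optimistic, most likely, pessimistic)
# duration estimates in days -- all figures invented for illustration.
activities = [(4, 6, 10), (8, 10, 15), (3, 4, 7)]

def project_duration():
    return sum(random.triangular(low, high, mode)
               for low, mode, high in activities)

N = 20_000
durations = sorted(project_duration() for _ in range(N))
p50 = durations[N // 2]          # median completion time
p90 = durations[int(0.9 * N)]    # a common contingency target
```

Real applications sample over a full activity network with precedence constraints rather than a simple sum, but the percentile-based reading of schedule risk is the same.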

Dezhi Jin; Zhuofu Wang; Xun Liu; Jianming Yang; Meigui Han

2010-01-01

155

A kinetic Monte Carlo method for the simulation of massive phase transformations

A multi-lattice kinetic Monte Carlo method has been developed for the atomistic simulation of massive phase transformations. Beside sites on the crystal lattices of the parent and product phase, randomly placed sites are incorporated as possible positions. These random sites allow the atoms to take favourable intermediate positions, essential for a realistic description of transformation interfaces. The transformation from fcc

C. Bos; F. Sommer; E. J. Mittemeijer

2004-01-01

156

Self-learning kinetic Monte Carlo method: Application to Cu(111)

We present a method of performing kinetic Monte Carlo simulations that does not require an a priori list of diffusion processes and their associated energetics and reaction rates. Rather, at any time during the simulation, energetics for all possible (single- or multiatom) processes, within a specific interaction range, are either computed accurately using a saddle-point search procedure, or retrieved from
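The basic rejection-free kinetic Monte Carlo loop that such methods build on (rate catalogue, rate-proportional event selection, exponential time advance) can be sketched as follows; the two processes and their rates are invented constants, whereas the self-learning method above computes rates on the fly from saddle-point searches:

```python
import math
import random

random.seed(4)

# Invented rate catalogue for two competing processes; a real KMC builds
# this table from computed energetics (e.g. saddle-point searches).
rates = {"hop": 5.0, "detach": 0.5}

def kmc_step(t):
    total = sum(rates.values())
    # Rejection-free selection: pick an event with probability
    # proportional to its rate.
    r = random.random() * total
    acc = 0.0
    for name, k in rates.items():
        acc += k
        if r < acc:
            chosen = name
            break
    # Advance the clock by an exponentially distributed residence time.
    t += -math.log(random.random()) / total
    return chosen, t

t, counts = 0.0, {"hop": 0, "detach": 0}
for _ in range(10_000):
    event, t = kmc_step(t)
    counts[event] += 1
```

Because every step executes an event, the simulated time advances by the physical residence time rather than a fixed increment, which is what lets KMC reach long timescales.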

Oleg Trushin; Altaf Karim; Abdelkader Kara; Talat S. Rahman

2005-01-01

157

A study of orientational disorder in ND4Cl by the reverse Monte Carlo method

The total structure factor for deuterated ammonium chloride measured by neutron diffraction has been modeled using the reverse Monte Carlo method. The results show that the orientational disorder of the ammonium ions consists of a local librational motion with an average angular amplitude of 17° and reorientations of ammonium ions by 90° jumps around two-fold axes. Reorientations around three-fold axes have

A. V. Belushkin; D. P. Kozlenko; R. L. McGreevy; B. N. Savenko; P. Zetterström

1999-01-01

158

Probabilistic Short-Circuit Analysis by Monte Carlo Simulations and Analytical Methods

The probability distribution of short-circuit currents in a bus or a section of a line represents basic information in planning, reliability, and risk assessment studies. This paper presents two methods for the development of the probability distribution of fault currents. The first one is based upon a Monte Carlo simulation while the second utilizes analytical expressions derived from probability theory.

A. Balouktsis; D. Tsanakas; G. Vachtsevanos

1986-01-01

159

An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model.

ERIC Educational Resources Information Center

The accuracy of the Markov chain Monte Carlo procedure, Gibbs sampling, was considered for estimation of item and ability parameters of the one-parameter logistic model. Four data sets were analyzed to evaluate the Gibbs sampling procedure. Data sets were also analyzed using methods of conditional maximum likelihood, marginal maximum likelihood,…

Kim, Seock-Ho

160

A methodology to use the solution of a deterministic multigroup S_N transport code as the source distribution for a Monte Carlo criticality calculation is discussed. This methodology is referred to as the S_N Source Initialization method. The effectiveness of this methodology is measured by simulating a loosely coupled benchmark, where standard Monte Carlo codes have shown a bias. The Monte Carlo N-Particle transport code MCNP and PENTRAN (three-dimensional, parallel, multigroup S_N) transport code system were used. The PENTRAN code solution is used as a starting source distribution in the MCNP code, thereby decreasing the necessary active cycle length. The methodology was verified on the basis of a sample benchmark problem of a lattice of 5×5×1 highly enriched uranium metal spheres surrounded by air.

Wenner, Michael T.; Haghighat, Alireza; Gardner, Shane

2001-06-17

161

Object Tracking based on Snake and Sequential Monte Carlo Method

Snake has found a number of applications in recent years in computer vision. A snake is an elastic curve, which dynamically adjusts its initial position to the object shape. Snake is sensitive to parameters values and initialization, and moreover, it is a popular method for object contour localization while not suitable for state estimation in time series. This paper presents

Hui Tan; Xinmeng Chen; Min Jiang

2006-01-01

162

A hybrid multiscale kinetic Monte Carlo method for simulation of copper electrodeposition

A hybrid multiscale kinetic Monte Carlo (HMKMC) method for speeding up the simulation of copper electrodeposition is presented. The fast diffusion events are simulated deterministically with a heterogeneous diffusion model which considers site-blocking effects of additives. Chemical reactions are simulated by an accelerated (tau-leaping) method for discrete stochastic simulation which adaptively selects exact discrete stochastic simulation for the appropriate reaction whenever that is necessary. The HMKMC method is seen to be accurate and highly efficient.

Zheng Zheming [Department of Mechanical Engineering, University of California Santa Barbara, Santa Barbara, CA 93106 (United States); Stephens, Ryan M.; Braatz, Richard D.; Alkire, Richard C. [Department of Chemical and Biomolecular Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801 (United States); Petzold, Linda R. [Department of Mechanical Engineering, University of California Santa Barbara, Santa Barbara, CA 93106 (United States)], E-mail: petzold@cs.ucsb.edu

2008-05-01

163

NASA Astrophysics Data System (ADS)

This study evaluated the Monte Carlo method for determining the dose calculation in fluoroscopy by using a realistic human phantom. The dose was calculated by using Monte Carlo N-particle extended (MCNPX) in simulations and was measured by using Korean Typical Man-2 (KTMAN-2) phantom in the experiments. MCNPX is a widely-used simulation tool based on the Monte-Carlo method and uses random sampling. KTMAN-2 is a virtual phantom written in MCNPX language and is based on the typical Korean man. This study was divided into two parts: simulations and experiments. In the former, the spectrum generation program (SRS-78) was used to obtain the output energy spectrum for fluoroscopy; then, each dose to the target organ was calculated using KTMAN-2 with MCNPX. In the latter part, the output of the fluoroscope was calibrated first and TLDs (Thermoluminescent dosimeter) were inserted in the ART (Alderson Radiation Therapy) phantom at the same places as in the simulation. Thus, the phantom was exposed to radiation, and the simulated and the experimental doses were compared. In order to change the simulation unit to the dose unit, we set the normalization factor (NF) for unit conversion. Comparing the simulated with the experimental results, we found most of the values to be similar, which proved the effectiveness of the Monte Carlo method in fluoroscopic dose evaluation. The equipment used in this study included a TLD, a TLD reader, an ART phantom, an ionization chamber and a fluoroscope.

Kim, Minho; Lee, Hyounggun; Kim, Hyosim; Park, Hongmin; Lee, Wonho; Park, Sungho

2014-03-01

164

A Hamiltonian Monte Carlo method for Bayesian inference of supermassive black hole binaries

NASA Astrophysics Data System (ADS)

We investigate the use of a Hamiltonian Monte Carlo to map out the posterior density function for supermassive black hole binaries. While previous Markov Chain Monte Carlo (MCMC) methods, such as Metropolis-Hastings MCMC, have been successfully employed for a number of different gravitational wave sources, these methods are essentially random walk algorithms. The Hamiltonian Monte Carlo treats the inverse likelihood surface as a 'gravitational potential' and, by introducing canonical positions and momenta, dynamically evolves the Markov chain by solving Hamilton's equations of motion. This method is not as widely used as other MCMC algorithms because of the necessity of calculating gradients of the log-likelihood, which for most applications results in a bottleneck that makes the algorithm computationally prohibitive. We circumvent this problem by using accepted initial phase-space trajectory points to analytically fit for each of the individual gradients. Eliminating the waveform generation needed for the numerical derivatives reduces the total number of required templates for a 10^6 iteration chain from ~10^9 to ~10^6. The result is an implementation of the Hamiltonian Monte Carlo that is faster, and more efficient by a factor of approximately the dimension of the parameter space, than a Hessian MCMC.
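The core of a Hamiltonian Monte Carlo sampler (leapfrog integration plus a Metropolis accept step) can be sketched on a one-dimensional Gaussian target, where the log-likelihood gradient is analytic and the bottleneck the abstract discusses does not arise. This is a generic textbook sketch, not the paper's implementation:

```python
import math
import random

random.seed(6)

# Target: a standard normal, so the gradient of -log p(q) is simply q.
def grad_neg_logp(q):
    return q

def leapfrog(q, p, eps, steps):
    # Symplectic leapfrog integration of Hamilton's equations.
    p -= 0.5 * eps * grad_neg_logp(q)
    for _ in range(steps - 1):
        q += eps * p
        p -= eps * grad_neg_logp(q)
    q += eps * p
    p -= 0.5 * eps * grad_neg_logp(q)
    return q, p

def hmc(n, eps=0.2, steps=10):
    q, chain = 0.0, []
    for _ in range(n):
        p0 = random.gauss(0.0, 1.0)            # resample momentum
        q_new, p_new = leapfrog(q, p0, eps, steps)
        h_old = 0.5 * p0 * p0 + 0.5 * q * q    # Hamiltonian before
        h_new = 0.5 * p_new * p_new + 0.5 * q_new * q_new
        if math.log(random.random()) < h_old - h_new:
            q = q_new                          # Metropolis accept
        chain.append(q)
    return chain

chain = hmc(20_000)
sample_mean = sum(chain) / len(chain)
sample_var = sum(x * x for x in chain) / len(chain)
```

The momentum resampling and deterministic trajectory are what distinguish this from a random-walk sampler: each proposal can travel far across the target while keeping the acceptance rate high.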

Porter, Edward K.; Carré, Jérôme

2014-07-01

165

Linear scaling electronic structure Monte Carlo method for metals

NASA Astrophysics Data System (ADS)

We present a method for sampling the Boltzmann distribution of a system in which the interionic interactions are derived from empirical or semiempirical electronic structure calculations within the Born-Oppenheimer approximation. We considerably improve on a scheme presented earlier [F. R. Krajewski and M. Parrinello, Phys. Rev. B 73, 041105(R) (2006)]. To this effect, we use an expression for the partition function in which electronic and ionic degrees of freedom are treated on the same footing. In addition, we introduce an auxiliary set of fields in such a way that the sampling of the partition function scales linearly with system size. We demonstrate the validity of this approach on tight-binding models of carbon nanotubes and silicon in its liquid and crystalline phases.

Krajewski, Florian R.; Parrinello, Michele

2007-06-01

166

A Monte Carlo implementation of the predictor-corrector Quasi-Static method

The Quasi-Static method (QS) is a useful tool for solving reactor transients since it allows for larger time steps when updating neutron distributions. Because of the beneficial attributes of Monte Carlo (MC) methods (exact geometries and continuous energy treatment), it is desirable to develop a MC implementation for the QS method. In this work, the latest version of the QS method known as the Predictor-Corrector Quasi-Static method is implemented. Experiments utilizing two energy-groups provide results that show good agreement with analytical and reference solutions. The method as presented can easily be implemented in any continuous energy, arbitrary geometry, MC code. (authors)

Hackemack, M. W.; Ragusa, J. C. [Department of Nuclear Engineering, Texas A and M University, 337 Zachry Engineering Building, College Station, TX 77843 (United States)]; Griesheimer, D. P.; Pounders, J. M. [Bettis Atomic Laboratory, Bechtel Marine Propulsion Corporation, P.O. Box 79, West Mifflin, PA 15122 (United States)]

2013-07-01

167

A coupled two-dimensional drift-diffusion and Monte Carlo analysis is developed to study the hot-electron-caused gate leakage current in Si n-MOSFETs. The electron energy distribution in a device is evaluated directly from a Monte Carlo model at low and intermediate electron energies. In the region of high electron energy, where the distribution function cannot be resolved by the Monte Carlo method

Chimoon Huang; Tahui Wang; C. N. Chen; M. C. Chang; J. Fu

1992-01-01

168

Calculation of exchange frequencies in bcc 3He with the path-integral Monte Carlo method

The exchange frequency in crystal 3He is calculated from first principles with a combination of path-integral Monte Carlo method and a method used in classical statistical mechanics to determine free-energy differences. The frequency of nearest-neighbor exchange at melting density is 0.46 mK, that of triple exchange is 0.19 mK, and that of four-particle planar exchange is 0.27 mK. These exchange

D. M. Ceperley; G. Jacucci

1987-01-01

169

This article presents differential equations and solution methods for the functions of the form $Q(x) = F^{-1}(G(x))$, where $F$ and $G$ are cumulative distribution functions. Such functions allow the direct recycling of Monte Carlo samples from one distribution into samples from another. The method may be developed analytically for certain special cases, and illuminate the idea that it is a
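A minimal instance of such a recycling function Q(x) = F^{-1}(G(x)): converting standard-normal draws into Exp(1) draws, with G the normal CDF and F the exponential CDF. This special case is chosen because both functions are available in closed form:

```python
import math
import random

random.seed(7)

# G: standard normal CDF; F: Exp(1) CDF; so Q(x) = F^{-1}(G(x)).
def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def exp_inverse_cdf(u):
    return -math.log(1.0 - u)

# Recycle standard-normal draws into Exp(1) draws in one deterministic pass.
normal_draws = [random.gauss(0.0, 1.0) for _ in range(50_000)]
exp_draws = [exp_inverse_cdf(normal_cdf(x)) for x in normal_draws]
mean_exp = sum(exp_draws) / len(exp_draws)   # Exp(1) has mean 1
```

Because Q is monotone, the recycled sample preserves the ranks of the original draws, so the same underlying Monte Carlo points serve both distributions.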

William T. Shaw; Thomas Luu; Nick Brickman

2009-01-01

170

Surface area estimation of digitized 3D objects using quasi-Monte Carlo methods

A novel and efficient quasi-Monte Carlo method for estimating the surface area of digitized 3D objects in the volumetric representation is presented. It operates directly on the original digitized objects without any surface reconstruction procedure. Based on the Cauchy–Crofton formula from integral geometry, the method estimates the surface area of a volumetric object by counting the number of intersection points

Yu-Shen Liu; Jing Yi; Hu Zhang; Guo-Qin Zheng; Jean-Claude Paul

2010-01-01

171

Fast breeder reactor neutronic propagation analysis by Monte-Carlo methods

The ability of the Monte Carlo method to perform neutronic propagation in complex geometry is used to design the shielding of a pool-type fast breeder reactor. The main application concerns neutronic propagation in the intermediate heat exchanger, to evaluate the secondary sodium activity. The method is typically representative of PROPANE D formulaire options (multigroup and PN developed cross-sections).

J. Cabrillat; G. Palmiotti; V. Rado; M. Salvatores

1985-01-01

172

Mean offset optimization for multi-patterning overlay using Monte Carlo simulation method

NASA Astrophysics Data System (ADS)

The overlay performance and alignment strategy optimization for a triple patterning (LELELE) were studied based on the Monte Carlo simulation method. The simulated results show that all of the combined or worst case overlay, alignment strategy, mean target of the upper level, and mean tolerance of the lower level are dependent on the means of the lower level. A dynamic mean control method is proposed to be integrated into the APC system to improve the overlay performance.

Wang, Wenhui; Cui, Liping; Sun, Lei; Kim, Ryoung-Han

2014-04-01

173

Multivariate Monte Carlo Methods for the Reflection Grating Spectrometers on XMM-Newton

We propose a novel multivariate Monte Carlo method as an efficient and flexible approach to analyzing extended X-ray sources with the Reflection Grating Spectrometer (RGS) on XMM-Newton. A multi-dimensional interpolation method is used to efficiently calculate the response function for the RGS in conjunction with an arbitrary spatially-varying spectral model. Several methods of event comparison that effectively compare the

J. R. Peterson; J. G. Jernigan; S. M. Kahn

2004-01-01

174

An overview of spatial microscopic and accelerated kinetic Monte Carlo methods

The microscopic spatial kinetic Monte Carlo (KMC) method has been employed extensively in materials modeling. In this review paper, we focus on different traditional and multiscale KMC algorithms, challenges associated with their implementation, and methods developed to overcome these challenges. In the first part of the paper, we compare the implementation and computational cost of the null-event and rejection-free microscopic

Abhijit Chatterjee; Dionisios G. Vlachos

2007-01-01

175

NASA Astrophysics Data System (ADS)

Variational wave functions used in the variational Monte Carlo (VMC) method are extensively improved to overcome the biases coming from the assumed variational form of the wave functions. We construct a highly generalized variational form by introducing a large number of variational parameters to the Gutzwiller-Jastrow factor as well as to the one-body part. Moreover, a projection operator to restore the symmetry of the wave function is introduced. These improvements enable us to treat fluctuations with long-ranged as well as short-ranged correlations. A highly generalized wave function is implemented using the Pfaffians introduced by Bouchaud et al., together with the stochastic reconfiguration method introduced by Sorella for parameter optimization. Our framework offers much higher accuracy for strongly correlated electron systems than conventional variational Monte Carlo methods.

Tahara, Daisuke; Imada, Masatoshi

2008-11-01

176

Statistical error and optimal parameters of the test particle Monte Carlo method

NASA Astrophysics Data System (ADS)

The test particle Monte Carlo method for solving the linearized Boltzmann equation is considered. This method is used to simulate a gas mixture flow when the concentration of one of the components is low. We study the errors of the method for the three main macroparameters (density, velocity, and temperature). A new approach to the construction of asymptotic confidence intervals for the estimates of velocity and temperature is proposed. Expressions for the optimal selection of the number of grid nodes and the sample size that guarantee a specified error level are derived on the basis of the theory of functional Monte Carlo algorithms. The proposed approaches are examined on the classical problem of heat transfer between two parallel plates and the two-dimensional problem of a transversal supersonic flow of a rarefied binary gas mixture around a plate.
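The asymptotic (normal-theory) confidence interval at the heart of such error estimates can be sketched generically; the uniform "velocity" sampler below is an illustrative stand-in, not the paper's gas-flow estimator:

```python
import math
import random
import statistics

random.seed(2)

def mc_estimate_with_ci(sampler, n, z=1.96):
    """Sample mean of a Monte Carlo estimator with an asymptotic 95%
    confidence interval: mean +/- z * s / sqrt(n), where s is the sample
    standard deviation (valid for large n by the central limit theorem)."""
    xs = [sampler() for _ in range(n)]
    mean = statistics.fmean(xs)
    half = z * statistics.stdev(xs) / math.sqrt(n)
    return mean, (mean - half, mean + half)

# estimate the mean of a uniform(0, 2) "velocity" sample; the true mean is 1
mean, (lo, hi) = mc_estimate_with_ci(lambda: random.uniform(0.0, 2.0), 10_000)
```

Since the half-width shrinks as 1/sqrt(n), the same expression can be inverted to choose the sample size that attains a specified error level, which is the optimization the abstract refers to.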

Plotnikov, M. Yu.; Shkarupa, E. V.

2012-11-01

177

Time-step limits for a Monte Carlo Compton-scattering method

Compton scattering is an important aspect of radiative transfer in high energy density applications. In this process, the frequency and direction of a photon are altered by colliding with a free electron. The change in frequency of a scattered photon results in an energy exchange between the photon and target electron and energy coupling between radiation and matter. Canfield, Howard, and Liang have presented a Monte Carlo method for simulating Compton scattering that models the photon-electron collision kinematics exactly. However, implementing their technique in multiphysics problems that include the effects of radiation-matter energy coupling typically requires evaluating the material temperature at its beginning-of-time-step value. This explicit evaluation can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and present time-step limits that avoid instabilities and nonphysical oscillations by considering a spatially independent, purely scattering radiative-transfer problem. Examining a simplified problem is justified because it isolates the effects of Compton scattering, and existing Monte Carlo techniques can robustly model other physics (such as absorption, emission, sources, and photon streaming). Our analysis begins by simplifying the equations that are solved via Monte Carlo within each time step using the Fokker-Planck approximation. Next, we linearize these approximate equations about an equilibrium solution such that the resulting linearized equations describe perturbations about this equilibrium. We then solve these linearized equations over a time step and determine the corresponding eigenvalues, quantities that can predict the behavior of solutions generated by a Monte Carlo simulation as a function of time-step size and other physical parameters. With these results, we develop our time-step limits. 
This approach is similar to our recent investigation of time discretizations for the Compton-scattering Fokker-Planck equation.
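The core of such a stability analysis (linearize, form the explicit update matrix, and bound its eigenvalues) can be sketched for a toy two-field energy-exchange model; the coupling matrix and rate k below are illustrative assumptions, not the paper's Fokker-Planck system:

```python
# Toy linearized radiation-matter exchange: d/dt [E_r, E_m] = A [E_r, E_m]
# with A = [[-k, k], [k, -k]] (energy flows between the fields at rate k).
# The explicit (beginning-of-time-step) update is M = I + dt * A, whose
# eigenvalues are 1 (conserved total energy) and 1 - 2*k*dt.

def update_eigenvalues(k, dt):
    """Eigenvalues of the explicit update matrix M = I + dt*A."""
    return 1.0, 1.0 - 2.0 * k * dt

def dt_limits(k):
    """(stability limit, no-oscillation limit): keeping |lambda| <= 1 needs
    dt <= 1/k; keeping the second eigenvalue non-negative, so the solution
    cannot oscillate nonphysically about equilibrium, needs dt <= 1/(2k)."""
    return 1.0 / k, 1.0 / (2.0 * k)

k = 4.0
dt_stable, dt_monotone = dt_limits(k)
```

A negative eigenvalue inside the unit disk is still stable but flips sign each step, which is exactly the kind of nonphysical oscillation the paper's time-step limits are designed to rule out.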

Densmore, Jeffery D [Los Alamos National Laboratory; Warsa, James S [Los Alamos National Laboratory; Lowrie, Robert B [Los Alamos National Laboratory

2008-01-01

178

Monte Carlo Method for Predicting a Physically Based Drop Size Distribution Evolution of a Spray

NASA Astrophysics Data System (ADS)

We report in this paper a method for predicting the evolution of a physically based drop size distribution of a spray, by coupling the Maximum Entropy Formalism and a Monte Carlo scheme. Using the discrete or continuous population balance equation, a Mass Flow Algorithm is formulated that takes into account interactions between droplets via coalescence. After deriving a kernel for coalescence, we solve the time-dependent drop size distribution equation using a Monte Carlo method. We apply the method to the spray of a new print-head known as a Spray On Demand (SOD) device; the process exploits ultrasonic spray generation via a Faraday instability, where the fluid/structure interaction causing the instability is described by a modified Hamilton's principle. This has led to a physically-based approach for predicting the initial drop size distribution within the framework of the Maximum Entropy Formalism (MEF): a three-parameter generalized Gamma distribution is chosen by using conservation of mass and energy. The calculation of the drop size distribution evolution by the Monte Carlo method shows the effect of spray droplet coalescence on both the number-based and volume-based drop size distributions.
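A minimal coalescence Monte Carlo step of the kind described can be sketched with a constant kernel (an illustrative assumption; the paper derives a physical kernel for the spray):

```python
import random

random.seed(3)

def coalescence_step(volumes, kernel=lambda v1, v2: 1.0):
    """Pick a random droplet pair and merge it with probability proportional
    to the coagulation kernel (a constant kernel here, so every proposed
    pair merges). The merged droplet keeps the combined volume, so total
    mass is conserved while the droplet count drops by one."""
    i, j = random.sample(range(len(volumes)), 2)
    if random.random() < kernel(volumes[i], volumes[j]):
        volumes[i] += volumes[j]
        volumes.pop(j)
    return volumes

# start from 100 unit-volume droplets and apply 50 coalescence events
drops = [1.0] * 100
total_before = sum(drops)
for _ in range(50):
    coalescence_step(drops)
```

Tracking the list of volumes directly gives both the number-based and the volume-based size distributions at any time, which is how the two views in the abstract can be compared.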

Tembely, Moussa; Lécot, Christian; Soucemarianadin, Arthur

2010-03-01

179

X-ray buildup factors of lead in broad beam geometry for energies from 15 to 150 keV are determined using the general purpose Monte Carlo N-Particle radiation transport code (MCNP4C). The obtained buildup factor data are fitted to a modified three-parameter Archer et al. model for ease of computing the broad beam transmission at any tube potential/filter combination in the diagnostic energy range. An example of their use to compute the broad beam transmission at 70, 100, 120, and 140 kVp is given. The calculated broad beam transmission is compared to data derived from the literature, showing good agreement. The combination of the buildup factor data as determined here and a mathematical model to generate x-ray spectra thus provides a computational solution to broad beam transmission for lead barriers in shielding x-ray facilities.
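The Archer et al. model has the standard three-parameter form T(x) = [(1 + β/α) e^(αγx) − β/α]^(−1/γ); a sketch of evaluating it follows, with illustrative parameters rather than the paper's fitted values:

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Broad-beam transmission through a barrier of thickness x using the
    standard three-parameter Archer et al. model. At x = 0 the expression
    reduces to 1 (full transmission), and it decreases monotonically in x
    for physical parameter values."""
    ratio = beta / alpha
    return ((1.0 + ratio) * math.exp(alpha * gamma * x) - ratio) ** (-1.0 / gamma)

# transmission falls with lead thickness (thickness in arbitrary units;
# alpha, beta, gamma are illustrative, not fitted to MCNP4C data)
t0 = archer_transmission(0.0, alpha=2.0, beta=10.0, gamma=0.5)
t1 = archer_transmission(0.1, alpha=2.0, beta=10.0, gamma=0.5)
t2 = archer_transmission(0.3, alpha=2.0, beta=10.0, gamma=0.5)
```

In practice α, β and γ are obtained by least-squares fitting the model to the Monte Carlo buildup-corrected transmission data, after which barrier thickness for a required transmission can be solved for directly.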

Kharrati, Hedi; Agrebi, Amel; Karaoui, Mohamed-Karim [Ecole Superieure des Sciences et Techniques de la Sante de Monastir, Avenue Avicenne 5000 Monastir (Tunisia); Faculte des Sciences de Monastir (Tunisia)

2007-04-15

180

NASA Astrophysics Data System (ADS)

A Monte Carlo (MC) method using a bookkeeping strategy for population balance modeling of particulate processes is presented in this article. With this method the coagulation time step can be evaluated precisely. To achieve the best computational efficiency, the MC program is fully parallelized and implemented on a many-core graphics processing unit (GPU). Useful rules for optimizing the MC code are also suggested. The computational accuracy of the MC scheme is verified by comparison with a deterministic sectional method, and the computational efficiency of the MC method is then investigated.

Wei, Jianming

2014-05-01

181

Path-integral Monte Carlo method for the local Z2 Berry phase.

We present a loop cluster algorithm Monte Carlo method for calculating the local Z(2) Berry phase of the quantum spin models. The Berry connection, which is given as the inner product of two ground states with different local twist angles, is expressed as a Monte Carlo average on the worldlines with fixed spin configurations at the imaginary-time boundaries. The "complex weight problem" caused by the local twist is solved by adopting the meron cluster algorithm. We present the results of simulation on the antiferromagnetic Heisenberg model on an out-of-phase bond-alternating ladder to demonstrate that our method successfully detects the change in the valence bond pattern at the quantum phase transition point. We also propose that the gauge-fixed local Berry connection can be an effective tool to estimate precisely the quantum critical point. PMID:23496453

Motoyama, Yuichi; Todo, Synge

2013-02-01

182

Path-integral Monte Carlo method for the local Z2 Berry phase

NASA Astrophysics Data System (ADS)

We present a loop cluster algorithm Monte Carlo method for calculating the local Z2 Berry phase of the quantum spin models. The Berry connection, which is given as the inner product of two ground states with different local twist angles, is expressed as a Monte Carlo average on the worldlines with fixed spin configurations at the imaginary-time boundaries. The “complex weight problem” caused by the local twist is solved by adopting the meron cluster algorithm. We present the results of simulation on the antiferromagnetic Heisenberg model on an out-of-phase bond-alternating ladder to demonstrate that our method successfully detects the change in the valence bond pattern at the quantum phase transition point. We also propose that the gauge-fixed local Berry connection can be an effective tool to estimate precisely the quantum critical point.

Motoyama, Yuichi; Todo, Synge

2013-02-01

183

Simulation of DNMR spectra using propagator formalism and Monte Carlo method.

A new program, ProMoCS, is presented for the simulation of dynamic nuclear magnetic resonance spectra. Its algorithm is based on the Monte Carlo method, like that of the previously introduced MC-DNMR, but the theory of ProMoCS is explained using the statistical approach of the propagator formalism. The new program is suitable for calculating dynamic NMR spectra of spin systems of up to 12 spin-1/2 nuclei, several conformers, and any type of exchange between them. Mutual exchange of coupled spins can be simulated as well. While it keeps the main advantage of the Monte Carlo based method (calculation with significantly smaller matrices than in programs that simulate the average density matrix), the maximum number of nuclei is increased significantly. Spectra can thus be simulated for systems that were previously out of reach. PMID:19121593

Szalay, Zsófia; Rohonczy, János

2009-03-01

184

Simulation of DNMR spectra using propagator formalism and Monte Carlo method

NASA Astrophysics Data System (ADS)

A new program, ProMoCS, is presented for the simulation of dynamic nuclear magnetic resonance spectra. Its algorithm is based on the Monte Carlo method, like that of the previously introduced MC-DNMR, but the theory of ProMoCS is explained using the statistical approach of the propagator formalism. The new program is suitable for calculating dynamic NMR spectra of spin systems of up to 12 spin-1/2 nuclei, several conformers, and any type of exchange between them. Mutual exchange of coupled spins can be simulated as well. While it keeps the main advantage of the Monte Carlo based method (calculation with significantly smaller matrices than in programs that simulate the average density matrix), the maximum number of nuclei is increased significantly. Spectra can thus be simulated for systems that were previously out of reach.

Szalay, Zsófia; Rohonczy, János

2009-03-01

185

Implicit Monte Carlo methods and non-equilibrium Marshak wave radiative transport

Two enhancements to the Fleck implicit Monte Carlo method for radiative transport are described, for use in transparent and opaque media respectively. The first introduces a spectral mean cross section, which applies to pseudoscattering in transparent regions with a high frequency incident spectrum. The second provides a simple Monte Carlo random walk method for opaque regions, without the need for a supplementary diffusion equation formulation. A time-dependent transport Marshak wave problem of radiative transfer, in which a non-equilibrium condition exists between the radiation and material energy fields, is then solved. These results are compared to published benchmark solutions and to new discrete ordinate S-N results, for both spatially integrated radiation-material energies versus time and to new spatially dependent temperature profiles. Multigroup opacities, which are independent of both temperature and frequency, are used in addition to a material specific heat which is proportional to the cube of the temperature.

Lynch, J.E.

1985-01-01

186

Order-N cluster Monte Carlo method for spin systems with long-range interactions

An efficient O(N) cluster Monte Carlo method for Ising models with long-range interactions is presented. Our algorithm introduces no cutoff on the interaction range and thus strictly fulfills detailed balance. The realized stochastic dynamics is equivalent to that of the conventional Swendsen-Wang algorithm, which requires O(N^2) operations per Monte Carlo sweep when applied to long-range interacting models. In addition, it is shown that the total energy and the specific heat can also be measured in O(N) time. We demonstrate the efficiency of our algorithm over the conventional method and the O(N log N) algorithm by Luijten and Blöte. We also apply our algorithm to the classical and quantum Ising chains with inverse-square ferromagnetic interactions, and confirm with high accuracy that a Kosterlitz-Thouless phase transition, associated with a universal jump in the magnetization, occurs in both cases.

Fukui, Kouki [Department of Applied Physics, University of Tokyo, 7-3-1 Hongo, Tokyo 113-8656 (Japan); Todo, Synge [Department of Applied Physics, University of Tokyo, 7-3-1 Hongo, Tokyo 113-8656 (Japan); CREST, Japan Science and Technology Agency, Kawaguchi 332-0012 (Japan)], E-mail: wistaria@ap.t.u-tokyo.ac.jp

2009-04-20

187

NASA Astrophysics Data System (ADS)

Measuring the bow height and chord length is a common way to determine the diameter of a large workpiece. In computing the diameter of a large workpiece, measurement uncertainty is an important parameter, routinely employed to evaluate the reliability of the measurement results. It is therefore essential to have reliable methods for evaluating the measurement uncertainty, especially in precise measurement. Because of the low convergence and unstable results of the ordinary Monte Carlo (MC) method, the quasi-Monte-Carlo (QMC) method is used to estimate the measurement uncertainty. The QMC method improves on the ordinary MC method by replacing MC's pseudorandom numbers with highly uniform quasi-random numbers. In the evaluation, more homogeneous random numbers (quasi-random numbers) are first generated from Halton's sequence. These numbers are then transformed into random numbers of the desired distribution, which are used to simulate the measurement errors. The measurement uncertainty is obtained from the simulation results. An experiment on cylinder diameter measurement and its uncertainty evaluation is given, in which the Guide to the Expression of Uncertainty in Measurement (GUM) method, the MC method, and the QMC method are validated. The results show that the QMC method has a higher convergence rate and more stable evaluation results than the MC method, and can therefore be applied effectively to evaluate measurement uncertainty.
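The Halton-plus-inverse-CDF construction described above can be sketched directly; the base-2 sequence and the normal error model with σ = 0.01 are illustrative choices, not the paper's setup:

```python
from statistics import NormalDist

def halton(index, base):
    """Radical-inverse (van der Corput) value: the index-th Halton point
    in the given base, a low-discrepancy point in (0, 1)."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def qmc_normal_errors(n, mu=0.0, sigma=1.0):
    """Quasi-random normal deviates: Halton points mapped through the
    inverse normal CDF, simulating measurement errors of the desired
    distribution as the abstract describes."""
    nd = NormalDist(mu, sigma)
    return [nd.inv_cdf(halton(i, 2)) for i in range(1, n + 1)]

errors = qmc_normal_errors(1024, mu=0.0, sigma=0.01)
mean_err = sum(errors) / len(errors)
```

Because the Halton points fill (0, 1) far more evenly than pseudorandom draws, sample statistics such as the mean converge faster and more stably, which is the advantage the QMC evaluation exploits.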

Jing, Hui; Li, Cong; Kuang, Bing; Huang, Meifa; Zhong, Yanru

2012-09-01

188

A dense hydrogen plasma modeled by the path integral-Monte Carlo method

An approach to the exact description of exchange in disordered quantum systems at finite temperatures is formulated in terms of Feynman path integrals, which eliminates rigid restrictions on the number of particles and allows numerical simulation of the equilibrium characteristics of the electron component of a dense plasma to be performed by the Monte Carlo method. The combinatorial weight factors

S. V. Shevkunov

2002-01-01

189

Application of the vector Monte-Carlo method in polarisation optical coherence tomography

The vector Monte-Carlo method is developed and applied to polarisation optical coherence tomography. The basic principles of simulation of the propagation of polarised electromagnetic radiation with a small coherence length are considered under conditions of multiple scattering. The results of numerical simulations for Rayleigh scattering well agree with the Milne solution generalised to the case of an electromagnetic field and with theoretical calculations in the diffusion approximation. (special issue devoted to multiple radiation scattering in random media)

Churmakov, D Yu [Cranfield Health, Cranfield University, Silsoe (United Kingdom); Kuz'min, V L [Saint Petersburg Institute of Commerce and Economics (Russian Federation); Meglinskii, I V [Department of Physics, N G Chernyshevskii Saratov State University, Saratov (Russian Federation)

2006-11-30

190

Monte Carlo methods for the nuclear shell model and their applications

NASA Astrophysics Data System (ADS)

Shell model quantum Monte Carlo methods are applied to calculate a variety of nuclear properties, in particular for nuclei in the iron region. The methods are based on a decomposition of the many-body propagator as a superposition of one-body propagators of non-interacting nucleons moving in fluctuating auxiliary fields, and can be applied in very large model spaces. Various projection techniques are developed to study the dependence of nuclear properties on good quantum numbers such as parity and spin. The particle-number reprojection method enables us to calculate thermal observables of several nuclei using the Monte Carlo sampling for one nucleus only. Nuclear level densities calculated by this method agree remarkably well with experimental data without any adjustable parameters. Parity-projected Monte Carlo calculations indicate a significant parity dependence of level densities even at the neutron-resonance energies. A simple quasi-particle model is developed to explain this parity dependence. Spin distributions of level densities are studied using the spin projection technique. The Monte Carlo results are compared with the spin-cutoff model and used to extract an energy-dependent moment of inertia. The strong suppression in the moment of inertia of even-even nuclei correlates with pairing effects and is explained by a cranking model. Thermal signatures of the pairing transition are found in the heat capacity of even-even neutron-rich iron isotopes. New commutator techniques are developed to calculate low-order moments of strength functions, and are applied in the study of electromagnetic strength functions in iron-region nuclei.

Liu, Shichang

2001-09-01

191

A Monte Carlo method for quantum spins using boson world lines

A new Monte Carlo method is described for quantum (s = 1/2) spins which maps the spin model onto a model of hard-core bosons. The Hamiltonian is then broken up into kinetic and potential parts and the Trotter formula used to simulate the Bose system. The power of this mapping comes from the fact that, by letting the system evolve through unphysical spin states between imaginary time slices, the needed matrix elements have simple expressions.

Loh, E.

1986-06-01

192

Simulation of DNMR spectra using propagator formalism and Monte Carlo method

A new program—ProMoCS—is presented for the simulation of dynamic nuclear magnetic resonance spectra. Its algorithm is based on the Monte Carlo method as the one of the previously introduced MC-DNMR but the theory of ProMoCS is explained by using the statistical approach of propagator formalism. Our new program is suitable for the calculation of dynamic NMR spectra of spin systems

Zsófia Szalay; János Rohonczy

2009-01-01

193

Spectral effective emissivities of nonisothermal cavities calculated by the Monte Carlo method

NASA Astrophysics Data System (ADS)

An algorithm based on the Monte Carlo method is described that permits the precise calculation of radiant emission characteristics of nonisothermal blackbody cavities for use as standard sources in radiometry, photometry, and radiation thermometry. The algorithm is realized for convex axisymmetric specular-diffuse cavities formed by three conical surfaces. The numerical experiments provide estimates of normal effective emissivities of cylindrical blackbody cavities with flat or conical bottoms for various axisymmetric temperature distributions on the cavity walls.
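A highly simplified bounce-counting version of such a calculation can be sketched: each wall hit absorbs the photon with probability equal to the wall emissivity, otherwise the reflected photon escapes through the aperture with some probability f. Both numbers below are illustrative; the real algorithm traces specular-diffuse reflections in the 3-D cavity geometry with nonuniform wall temperatures:

```python
import random

random.seed(4)

def effective_emissivity(wall_emissivity, escape_prob, n=50_000):
    """Estimate cavity effective emissivity by reciprocity: trace photons
    entering the aperture; each wall hit absorbs with probability eps,
    otherwise the photon is re-reflected and leaves through the aperture
    with probability f. The absorbed fraction estimates the effective
    emissivity, which exceeds the wall emissivity (the cavity effect)."""
    absorbed = 0
    for _ in range(n):
        while True:
            if random.random() < wall_emissivity:
                absorbed += 1
                break
            if random.random() < escape_prob:
                break
    return absorbed / n

eps_eff = effective_emissivity(0.7, 0.05)
```

For this toy model the analytic answer is eps / (eps + (1 − eps) f) ≈ 0.979 for eps = 0.7 and f = 0.05, so the Monte Carlo estimate can be checked directly.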

Sapritsky, V. I.; Prokhorov, A. V.

1995-09-01

194

Simple Monte-Carlo method to calibrate well-type HPGe detectors

The activity concentration of radionuclides in the environment is routinely measured using axial high-purity germanium (HPGe) detectors. For low-activity samples, well-type detectors are often chosen for their high efficiency. These detectors are usually calibrated with specific radionuclides that do not allow a generalization to other sources. While Monte-Carlo methods have been used for calibration purposes, they usually need extensive and

François Bochud; Claude J. Bailat; Thierry Buchillier; François Byrde; Ernst Schmid; Jean-Pascal Laedermann

2006-01-01

195

Monte Carlo Method for Spin Models with Long-Range Interactions

We introduce a Monte Carlo method for the simulation of spin models with ferromagnetic long-range interactions in which the amount of time per spin-flip operation is independent of the system size, in spite of the fact that the interactions between each spin and all other spins are taken into account. We work out two algorithms for the q-state Potts model

Erik Luijten; Henk W. J. Blöte

1995-01-01

196

Kinetic Monte Carlo method for rule-based modeling of biochemical networks

We present a kinetic Monte Carlo method for simulating chemical transformations specified by reaction rules, which can be viewed as generators of chemical reactions, or equivalently, definitions of reaction classes. A rule identifies the molecular components involved in a transformation, how these components change, conditions that affect whether a transformation occurs, and a rate law. The computational cost of the

Jin Yang; Michael I. Monine; James R. Faeder; William S. Hlavacek

2008-01-01

197

Modeling of HF chemical laser flowfields using the Direct Simulation Monte Carlo method

A methodology, based on the Direct Simulation Monte Carlo (DSMC) approach, has been developed to screen injector concepts for high-energy chemical lasers. This methodology involves modeling the associated complex three-dimensional, reacting, multispecies flowfields and has been validated by comparison with experimental measurements. The method enables screening of new high-performance injector concepts and has the potential of greatly minimizing idea-to-implementation time and cost.

McGregor, R.D.; Haflinger, D.E.; Lohn, P.D.; Sollee, J.L.; Behrens, H.W.; Duncan, W.A. (TRW Space and Technology Group, Redondo Beach, CA (United States); U.S. Army Missile Command, Redstone Arsenal, AL (United States))

1992-07-01

198

Monte Carlo Methods in Materials Science Based on FLUKA and ROOT

NASA Technical Reports Server (NTRS)

A comprehensive understanding of mitigation measures for space radiation protection necessarily involves the relevant fields of nuclear physics and particle transport modeling. One method of modeling the interaction of radiation traversing matter is Monte Carlo analysis, a subject that has been evolving since the very advent of nuclear reactors and particle accelerators in experimental physics. Countermeasures for radiation protection from neutrons near nuclear reactors, for example, were an early application and Monte Carlo methods were quickly adapted to this general field of investigation. The project discussed here is concerned with taking the latest tools and technology in Monte Carlo analysis and adapting them to space applications such as radiation shielding design for spacecraft, as well as investigating how next-generation Monte Carlos can complement the existing analytical methods currently used by NASA. We have chosen to employ the Monte Carlo program known as FLUKA (a legacy acronym based on the German for FLUctuating KAscade) to simulate all of the particle transport, and the CERN-developed graphical-interface object-oriented analysis software called ROOT. One aspect of space radiation analysis for which Monte Carlo methods are particularly suited is the study of secondary radiation produced as albedos in the vicinity of the structural geometry involved. This broad goal of simulating space radiation transport through the relevant materials employing the FLUKA code necessarily requires the addition of the capability to simulate all heavy-ion interactions from 10 MeV/A up to the highest conceivable energies. For all energies above 3 GeV/A the Dual Parton Model (DPM) is currently used, although the possible improvement of the DPMJET event generator for energies 3-30 GeV/A is being considered. One of the major tasks still facing us is the provision for heavy ion interactions below 3 GeV/A.
The ROOT interface is being developed in conjunction with the CERN ALICE (A Large Ion Collider Experiment) software team through an adaptation of their existing AliROOT (ALICE Using ROOT) architecture. In order to check our progress against actual data, we have chosen to simulate the ATIC14 (Advanced Thin Ionization Calorimeter) cosmic-ray astrophysics balloon payload as well as neutron fluences in the Mir spacecraft. This paper contains a summary of the status of this project and a roadmap to its successful completion.

Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor

2003-01-01

199

Inverse Monte Carlo method in a multilayered tissue model for diffuse reflectance spectroscopy

NASA Astrophysics Data System (ADS)

Model-based data analysis of diffuse reflectance spectroscopy data enables the estimation of optical and structural tissue parameters. The aim of this study was to present an inverse Monte Carlo method based on spectra from two source-detector distances (0.4 and 1.2 mm), using a multilayered tissue model. The tissue model variables include geometrical properties, light scattering properties, tissue chromophores such as melanin and hemoglobin, oxygen saturation and average vessel diameter. The method utilizes a small set of presimulated Monte Carlo data for combinations of different levels of epidermal thickness and tissue scattering. The path length distributions in the different layers are stored and the effect of the other parameters is added in post-processing. The accuracy of the method was evaluated using Monte Carlo simulations of tissue-like models containing discrete blood vessels, assessing blood tissue fraction and oxygenation. It was also compared to a homogeneous model. The multilayer model performed better than the homogeneous model, and all tissue parameters significantly improved the spectral fitting. Recorded in vivo spectra were fitted well at both distances, which we previously found was not possible with a homogeneous model. No absolute intensity calibration is needed and the algorithm is fast enough for real-time processing.
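The post-processing idea, re-weighting stored per-layer path lengths with Beer-Lambert absorption instead of re-running the Monte Carlo, can be sketched with synthetic path lengths (illustrative stand-ins for the presimulated data; absorption coefficients are also illustrative):

```python
import math
import random

random.seed(7)

# stand-in for presimulated Monte Carlo data: per-photon path lengths (mm)
# in two layers (epidermis, dermis); a real table would come from MC runs
# at tabulated epidermal thicknesses and scattering levels
photons = [(random.expovariate(2.0), random.expovariate(0.5))
           for _ in range(20_000)]

def reflectance(mu_a_epidermis, mu_a_dermis):
    """Re-weight the stored photon paths with Beer-Lambert absorption
    exp(-mu_a * L) per layer, so changing chromophore concentrations
    (melanin, hemoglobin) needs no new Monte Carlo simulation."""
    return sum(math.exp(-mu_a_epidermis * l1 - mu_a_dermis * l2)
               for l1, l2 in photons) / len(photons)

r_low = reflectance(0.1, 0.05)   # lightly absorbing tissue
r_high = reflectance(1.0, 0.5)   # more melanin and blood: lower reflectance
```

Because absorption enters only through these exponential weights, the inverse problem can vary chromophore parameters freely in the fit while reusing one small set of stored path length distributions, which is what makes the algorithm fast enough for real-time use.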

Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas

2012-04-01

200

Virtual CMM using Monte Carlo methods based on frequency content of the error signal

NASA Astrophysics Data System (ADS)

In coordinate measurement metrology, assessing the measurement uncertainty of a particular measurement is not a straightforward task. A feasible way to calculate the measurement uncertainty is the use of a Monte Carlo method, and in recent years a number of Monte Carlo methods have been developed for this purpose. We have developed a Monte Carlo method for CMMs that takes into account, among other factors, the autocorrelation of the error signal. We have separated the errors into linearity errors, rotational errors, straightness errors and squareness errors. Special measurement tools have been developed and applied to measure the required parameters. The short-wave as well as the long-wave behavior of the errors of a specific machine have been calibrated. A machine model that takes these effects into account is presented here. The relevant errors of a Zeiss Prismo were measured, and these data were used to calculate the measurement uncertainty of a measurement of a ring gauge. These calculations were compared to real measurements.

van Dorp, Bas W.; Haitjema, Han; Delbressine, Frank; Bergmans, Robbert H.; Schellekens, Piet H. J.

2001-10-01

201

NASA Astrophysics Data System (ADS)

We analyze the line radiative transfer in protoplanetary disks using several approximate methods and a well-tested accelerated Monte Carlo code. A low-mass flaring disk model with uniform as well as stratified molecular abundances is adopted. Radiative transfer in low and high rotational lines of CO, C18O, HCO+, DCO+, HCN, CS, and H2CO is simulated. The corresponding excitation temperatures, synthetic spectra, and channel maps are derived and compared to the results of the Monte Carlo calculations. A simple scheme that describes the conditions of the line excitation for a chosen molecular transition is elaborated. We find that the simple LTE approach can safely be applied for the low molecular transitions only, while it significantly overestimates the intensities of the upper lines. In contrast, the full escape probability (FEP) approximation can safely be used for the upper transitions (J_up ≳ 3), but it is not appropriate for the lowest transitions because of the maser effect. In general, the molecular lines in protoplanetary disks are partly subthermally excited and require more sophisticated approximate line radiative transfer methods. We analyze a number of approximate methods, namely, LVG, vertical escape probability (VEP), and vertical one ray (VOR) and discuss their algorithms in detail. In addition, two modifications to the canonical Monte Carlo algorithm that allow a significant speed up of the line radiative transfer modeling in rotating configurations by a factor of 10-50 are described.

Pavlyuchenkov, Ya.; Semenov, D.; Henning, Th.; Guilloteau, St.; Piétu, V.; Launhardt, R.; Dutrey, A.

2007-11-01

202

Methods for Monte Carlo simulation of the exospheres of the moon and Mercury

NASA Technical Reports Server (NTRS)

A general form of the integral equation of exospheric transport on moon-like bodies is derived in a form that permits arbitrary specification of time varying physical processes affecting atom creation and annihilation, atom-regolith collisions, adsorption and desorption, and nonplanetocentric acceleration. Because these processes usually defy analytic representation, the Monte Carlo method of solution of the transport equation, the only viable alternative, is described in detail, with separate discussions of the methods of specification of physical processes as probabilistic functions. Proof of the validity of the Monte Carlo exosphere simulation method is provided in the form of a comparison of analytic and Monte Carlo solutions to three classical, and analytically tractable, exosphere problems. One of the key phenomena in moonlike exosphere simulations, the distribution of velocities of the atoms leaving a regolith, depends mainly on the nature of collisions of free atoms with rocks. It is shown that on the moon and Mercury, elastic collisions of helium atoms with a Maxwellian distribution of vibrating, bound atoms produce a nearly Maxwellian distribution of helium velocities, despite the absence of speeds in excess of escape in the impinging helium velocity distribution.
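The key velocity sampling step, drawing speeds from a Maxwellian and checking them against escape speed, can be sketched as below; the thermal and escape speeds are rough illustrative values for helium on the Moon, not the paper's parameters:

```python
import math
import random

random.seed(5)

def sample_maxwellian_speed(v_thermal):
    """Speed from a Maxwell-Boltzmann distribution: the magnitude of three
    independent Gaussian velocity components, each with standard deviation
    v_thermal = sqrt(kT/m)."""
    vx, vy, vz = (random.gauss(0.0, v_thermal) for _ in range(3))
    return math.sqrt(vx * vx + vy * vy + vz * vz)

def escape_fraction(v_thermal, v_escape, n=50_000):
    """Monte Carlo estimate of the fraction of atoms in the Maxwellian tail
    above escape speed."""
    return sum(sample_maxwellian_speed(v_thermal) > v_escape
               for _ in range(n)) / n

# helium at lunar dayside temperatures (illustrative numbers, km/s):
# a sizable fraction of the re-emitted atoms exceeds lunar escape speed
frac = escape_fraction(v_thermal=1.1, v_escape=2.38)
```

This tail fraction is exactly why a nearly Maxwellian re-emission distribution matters for light species such as helium: each regolith encounter gives the atom a fresh chance of escape.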

Hodges, R. R., Jr.

1980-01-01

203

A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems

Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. 
Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model) for deep penetration problems such as those examined in this paper. In this research, we investigate the application of a variant of the hybrid Monte Carlo-deterministic method proposed by Cooper and Larsen to global deep penetration problems involving binary stochastic media. To our knowledge, hybrid Monte Carlo-deterministic methods have not previously been applied to problems involving a stochastic medium. We investigate two approaches for computing the approximate deterministic estimate of the forward scalar flux distribution used to automatically generate the weight windows. The first approach uses the atomic mix approximation to the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. The second approach uses the Levermore-Pomraning model for the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. In both cases, we use Monte Carlo Algorithm B with weight windows automatically generated from the approximate forward scalar flux distribution to obtain the solution of the transport problem.
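The weight-window generation idea, windows whose centers scale inversely with an approximate forward flux, can be sketched as follows; the inverse-flux normalization and the fixed width ratio are generic choices for illustration, not the paper's exact prescription:

```python
def weight_windows(flux, ratio=5.0):
    """Weight-window (lower, center, upper) triples per region, with
    centers inversely proportional to an approximate forward scalar
    flux and normalized so the source region has center 1. Particles
    above the upper bound are split; those below the lower bound play
    Russian roulette, distributing histories more evenly in space."""
    centers = [flux[0] / f for f in flux]
    half = ratio ** 0.5
    return [(c / half, c, c * half) for c in centers]

# Flux dropping by decades across regions: a deep-penetration problem
windows = weight_windows([1.0, 1e-1, 1e-2, 1e-3])
for lo, c, hi in windows:
    print(f"{lo:.3g} < w < {hi:.3g}  (center {c:.3g})")
```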

Keady, K P; Brantley, P

2010-03-04

204

A general method for spatially coarse-graining Metropolis Monte Carlo simulations onto a lattice.

A recently introduced method for coarse-graining standard continuous Metropolis Monte Carlo simulations of atomic or molecular fluids onto a rigid lattice of variable scale [X. Liu, W. D. Seider, and T. Sinno, Phys. Rev. E 86, 026708 (2012)] is further analyzed and extended. The coarse-grained Metropolis Monte Carlo technique is demonstrated to be highly consistent with the underlying full-resolution problem using a series of detailed comparisons, including vapor-liquid equilibrium phase envelopes and spatial density distributions for the Lennard-Jones argon and simple point charge water models. In addition, the principal computational bottleneck associated with computing a coarse-grained interaction function for evolving particle positions on the discretized domain is addressed by the introduction of new closure approximations. In particular, it is shown that the coarse-grained potential, which is generally a function of temperature and coarse-graining level, can be computed at multiple temperatures and scales using a single set of free energy calculations. The computational performance of the method relative to standard Monte Carlo simulation is also discussed. PMID:23534624
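The basic coarse-graining step, mapping continuous particle coordinates onto occupancies of a rigid lattice of chosen scale, might look like this minimal sketch (the periodic cell-indexing convention is an assumption):

```python
def coarse_grain(positions, box, n_cells):
    """Map continuous particle positions in a cubic, periodic box onto
    occupancy counts of an n_cells^3 rigid lattice. The occupancies are
    the coarse variables; a coarse-grained potential would then act on
    these counts rather than on individual particle coordinates."""
    h = box / n_cells  # coarse cell edge length
    counts = {}
    for x, y, z in positions:
        cell = (int(x // h) % n_cells,
                int(y // h) % n_cells,
                int(z // h) % n_cells)
        counts[cell] = counts.get(cell, 0) + 1
    return counts

parts = [(0.1, 0.1, 0.1), (0.2, 0.15, 0.05), (0.9, 0.9, 0.9)]
occ = coarse_grain(parts, box=1.0, n_cells=2)
print(occ)  # {(0, 0, 0): 2, (1, 1, 1): 1}
```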

Liu, Xiao; Seider, Warren D; Sinno, Talid

2013-03-21

205

A system utilising radiation transport codes has been developed to derive accurate dose distributions in a human body for radiological accidents. A suitable model is quite essential for a numerical analysis. Therefore, two tools were developed to setup a 'problem-dependent' input file, defining a radiation source and an exposed person to simulate the radiation transport in an accident with the Monte Carlo calculation codes-MCNP and MCNPX. Necessary resources are defined by a dialogue method with a generally used personal computer for both the tools. The tools prepare human body and source models described in the input file format of the employed Monte Carlo codes. The tools were validated for dose assessment in comparison with a past criticality accident and a hypothesized exposure. PMID:17510203

Takahashi, F; Endo, A

2007-01-01

206

Analysis of single Monte Carlo methods for prediction of reflectance from turbid media

Starting from the radiative transport equation we derive the scaling relationships that enable a single Monte Carlo (MC) simulation to predict the spatially- and temporally-resolved reflectance from homogeneous semi-infinite media with arbitrary scattering and absorption coefficients. This derivation shows that a rigorous application of this single Monte Carlo (sMC) approach requires the rescaling to be done individually for each photon biography. We examine the accuracy of the sMC method when processing simulations on an individual photon basis and also demonstrate the use of adaptive binning and interpolation using non-uniform rational B-splines (NURBS) to achieve order of magnitude reductions in the relative error as compared to the use of uniform binning and linear interpolation. This improved implementation for sMC simulation serves as a fast and accurate solver to address both forward and inverse problems and is available for use at http://www.virtualphotonics.org/.
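The per-photon rescaling can be sketched as follows, assuming the baseline simulation is run at a reference scattering coefficient with absorption removed (a "white Monte Carlo" convention; the parameter names are illustrative, not the paper's notation):

```python
import math

def rescale_photon(exit_radius, path_length, mus0, mus, mua):
    """Rescale one photon biography, simulated at scattering coefficient
    mus0 with zero absorption, to a medium with coefficients (mus, mua):
    all lengths scale by mus0/mus, and absorption is applied
    analytically as a weight exp(-mua * scaled total path length).
    Applied per biography, a single baseline simulation predicts
    reflectance for arbitrary (mus, mua)."""
    s = mus0 / mus
    r = exit_radius * s      # rescaled exit position
    L = path_length * s      # rescaled total path length
    w = math.exp(-mua * L)   # analytic absorption weight
    return r, w

r, w = rescale_photon(exit_radius=0.1, path_length=2.0,
                      mus0=10.0, mus=20.0, mua=0.5)
print(round(r, 3), round(w, 4))  # 0.05, exp(-0.5) ~ 0.6065
```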

Martinelli, Michele; Gardner, Adam; Cuccia, David; Hayakawa, Carole; Spanier, Jerome; Venugopalan, Vasan

2011-01-01

207

Analysis of single Monte Carlo methods for prediction of reflectance from turbid media.

Starting from the radiative transport equation we derive the scaling relationships that enable a single Monte Carlo (MC) simulation to predict the spatially- and temporally-resolved reflectance from homogeneous semi-infinite media with arbitrary scattering and absorption coefficients. This derivation shows that a rigorous application of this single Monte Carlo (sMC) approach requires the rescaling to be done individually for each photon biography. We examine the accuracy of the sMC method when processing simulations on an individual photon basis and also demonstrate the use of adaptive binning and interpolation using non-uniform rational B-splines (NURBS) to achieve order of magnitude reductions in the relative error as compared to the use of uniform binning and linear interpolation. This improved implementation for sMC simulation serves as a fast and accurate solver to address both forward and inverse problems and is available for use at http://www.virtualphotonics.org/. PMID:21996904

Martinelli, Michele; Gardner, Adam; Cuccia, David; Hayakawa, Carole; Spanier, Jerome; Venugopalan, Vasan

2011-09-26

208

Application of Monte Carlo method for analyzing reference involute's measurement accuracy

NASA Astrophysics Data System (ADS)

Among all the accuracy indexes of gears, measurement of the involute tooth profile error is a difficult technical problem. A double-disc instrument for measuring reference involutes is introduced. The main error sources of the instrument are analyzed, and all measurement errors are simulated by the Monte Carlo method. After analyzing the error sequence, the double-disc instrument's measurement uncertainty is 0.45 µm when measuring a gear (m=4, z=30, α=20°), which is smaller than the value evaluated by the GUM method, and more reliable and accurate.
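Monte Carlo evaluation of measurement uncertainty, compared here with the GUM method, follows the general recipe of GUM Supplement 1: draw each input from its assumed distribution, run the measurement model, and take the spread of the outputs. A toy sketch with a hypothetical additive model (the values are not from the paper):

```python
import random
import statistics

def mc_uncertainty(model, inputs, n=50000, seed=1):
    """Propagate input standard uncertainties through a measurement
    model by Monte Carlo: each input (mu, u) is sampled from N(mu, u),
    and the standard uncertainty of the result is the sample standard
    deviation of the model outputs."""
    rng = random.Random(seed)
    samples = [model(*(rng.gauss(mu, u) for mu, u in inputs))
               for _ in range(n)]
    return statistics.pstdev(samples)

# Hypothetical model: result = a + b, with u(a) = 0.3, u(b) = 0.4
u = mc_uncertainty(lambda a, b: a + b, [(10.0, 0.3), (5.0, 0.4)])
# For a linear model, GUM quadrature gives sqrt(0.3^2 + 0.4^2) = 0.5,
# so the two approaches should agree; they differ for nonlinear models.
```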

Lou, Zhifeng; Wang, Liding

2010-08-01

209

An overview of spatial microscopic and accelerated kinetic Monte Carlo methods

NASA Astrophysics Data System (ADS)

The microscopic spatial kinetic Monte Carlo (KMC) method has been employed extensively in materials modeling. In this review paper, we focus on different traditional and multiscale KMC algorithms, challenges associated with their implementation, and methods developed to overcome these challenges. In the first part of the paper, we compare the implementation and computational cost of the null-event and rejection-free microscopic KMC algorithms. A firmer and more general foundation of the null-event KMC algorithm is presented. Statistical equivalence between the null-event and rejection-free KMC algorithms is also demonstrated. Implementation and efficiency of various search and update algorithms, which are at the heart of all spatial KMC simulations, are outlined and compared via numerical examples. In the second half of the paper, we review various spatial and temporal multiscale KMC methods, namely, the coarse-grained Monte Carlo (CGMC), the stochastic singular perturbation approximation, and the τ-leap methods, introduced recently to overcome the disparity of length and time scales and the one-at-a-time execution of events. The concepts of the CGMC and the τ-leap methods, stochastic closures, multigrid methods, error associated with coarse-graining, a posteriori error estimates for generating spatially adaptive coarse-grained lattices, and computational speed-up upon coarse-graining are illustrated through simple examples from crystal growth, defect dynamics, adsorption-desorption, surface diffusion, and phase transitions.
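The rejection-free KMC step compared here against null-event schemes can be sketched as a linear-search event selection with an exponential time advance:

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free KMC step: select event i with probability
    r_i / R (R = total rate) by linear search over the cumulative
    rates, and advance the clock by an exponential waiting time with
    mean 1/R. Efficient implementations replace the linear search with
    the tree/group structures discussed in such reviews."""
    R = sum(rates)
    u = rng.random() * R
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if u < acc:
            break
    dt = -math.log(rng.random()) / R
    return i, dt

rng = random.Random(7)
events = [kmc_step([1.0, 3.0], rng)[0] for _ in range(10000)]
frac = events.count(1) / len(events)  # should be near 3/4
```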

Chatterjee, Abhijit; Vlachos, Dionisios G.

2007-07-01

210

Sequential Monte Carlo estimation on point processes has been successfully applied to predict movement from neural activity. However, this method has some issues, such as the simplified tuning model and the high computational complexity, which may degrade the decoding performance of motor brain-machine interfaces. In this paper, we adopt a general tuning model which takes recent ensemble activity into account. The goodness-of-fit analysis demonstrates that the proposed model can predict the neuronal response more accurately than one depending only on kinematics. A new sequential Monte Carlo algorithm based on the proposed model is constructed. The algorithm significantly reduces the root mean square error of the decoding results, with a 23.6% decrease in position estimation. In addition, we accelerate the decoding speed by implementing the proposed algorithm in a massively parallel manner on a GPU. The results demonstrate that spike trains can be decoded as a point process in real time even with 8000 particles or 300 neurons, which is over 10 times faster than the serial implementation. The main contribution of our work is to enable the sequential Monte Carlo algorithm with point-process observations to output the movement estimation much faster and more accurately.
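A core building block of any sequential Monte Carlo decoder is the resampling step that concentrates particles where the weights are; a minimal systematic-resampling sketch (a common generic choice, not the authors' GPU implementation):

```python
import random

def systematic_resample(weights, rng):
    """Systematic resampling: a single uniform draw places N evenly
    spaced pointers on the cumulative weight distribution, giving O(N)
    work and lower resampling variance than independent multinomial
    draws. Returns the indices of the surviving particles."""
    n = len(weights)
    total = sum(weights)
    positions = [(rng.random() + i) / n for i in range(n)]
    indices, cum, j = [], weights[0] / total, 0
    for p in positions:
        while p > cum and j < n - 1:
            j += 1
            cum += weights[j] / total
        indices.append(j)
    return indices

rng = random.Random(0)
idx = systematic_resample([0.1, 0.1, 0.7, 0.1], rng)
print(idx)  # the heavily weighted particle 2 dominates
```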

Wang, Fang; Liao, Yuxi; Zheng, Xiaoxiang

2014-01-01

211

NASA Astrophysics Data System (ADS)

In this paper we develop a robust implicit Monte Carlo (IMC) algorithm based on more accurately updating the linearized equilibrium radiation energy density. The method does not introduce oscillations in the solution and has the same limit as Δt→∞ as the standard Fleck and Cummings IMC method. Moreover, the approach we introduce can be trivially added to current implementations of IMC by changing the definition of the Fleck factor. Using this new method we develop an adaptive scheme that uses either standard IMC or the modified method basing the adaptation on a zero-dimensional problem solved in each cell. Numerical results demonstrate that the new method can avoid the nonphysical overheating that occurs in standard IMC when the time step is large. The method also leads to decreased noise in the material temperature at the cost of a potential increase in the radiation temperature noise.
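The Fleck factor mentioned here has the standard form f = 1/(1 + α β c σ_p Δt); a small numerical illustration of its behavior (the parameter values are arbitrary):

```python
def fleck_factor(beta, c, sigma_p, dt, alpha=1.0):
    """Standard Fleck factor f = 1 / (1 + alpha * beta * c * sigma_p * dt).
    In IMC, a fraction (1 - f) of absorption is treated as effective
    scattering; as dt grows, f -> 0 and emission is handled ever more
    implicitly, which is where the overheating pathology of standard
    IMC arises."""
    return 1.0 / (1.0 + alpha * beta * c * sigma_p * dt)

# Small time step: f near 1, transport close to fully explicit
f_small = fleck_factor(beta=4.0, c=3e10, sigma_p=1.0, dt=1e-13)
# Large time step: f near 0, absorption mostly converted to scattering
f_large = fleck_factor(beta=4.0, c=3e10, sigma_p=1.0, dt=1e-6)
print(f_small > 0.9, f_large < 1e-4)
```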

McClarren, Ryan G.; Urbatsch, Todd J.

2009-09-01

212

Monte Carlo methods and their analysis for Coulomb collisions in multicomponent plasmas

Highlights: •A general approach to Monte Carlo methods for multicomponent plasmas is proposed. •We show numerical tests for the two-component (electrons and ions) case. •An optimal choice of parameters for speeding up the computations is discussed. •A rigorous estimate of the error of approximation is proved. -- Abstract: A general approach to Monte Carlo methods for Coulomb collisions is proposed. Its key idea is an approximation of Landau–Fokker–Planck equations by Boltzmann equations of quasi-Maxwellian kind, meaning that the total collision frequency for the corresponding Boltzmann equation does not depend on the velocities. This makes the simulation process very simple, since the collision pairs can be chosen arbitrarily, without restriction. It is shown that this approach includes the well-known methods of Takizuka and Abe (1977) [12] and Nanbu (1997) as particular cases, and generalizes the approach of Bobylev and Nanbu (2000). The numerical scheme of this paper is simpler than the schemes by Takizuka and Abe [12] and by Nanbu. We derive it for the general case of multicomponent plasmas and show some numerical tests for the two-component (electrons and ions) case. An optimal choice of parameters for speeding up the computations is also discussed. It is also proved that the order of approximation is not worse than O(√ε), where ε is a parameter of approximation equivalent to the time step Δt in earlier methods. A similar estimate is obtained for the methods of Takizuka and Abe and Nanbu.
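The elementary operation shared by such pairwise collision schemes, scattering a chosen pair of particles while conserving momentum and kinetic energy, can be sketched for equal masses as a random rotation of the relative velocity (an isotropic rotation is used here purely for illustration; the actual schemes draw the scattering angle from a Coulomb-specific distribution):

```python
import math
import random

def collide_pair(v1, v2, rng):
    """Elastic binary collision for two equal-mass particles: rotate
    the relative velocity to a uniformly random direction on the unit
    sphere. Momentum and kinetic energy are conserved exactly because
    the center-of-mass velocity and the relative speed are unchanged."""
    vcm = [(a + b) / 2.0 for a, b in zip(v1, v2)]
    u = math.dist(v1, v2)                 # relative speed, unchanged
    z = rng.uniform(-1.0, 1.0)            # uniform direction on sphere
    phi = rng.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    d = (s * math.cos(phi), s * math.sin(phi), z)
    v1p = [vc + 0.5 * u * di for vc, di in zip(vcm, d)]
    v2p = [vc - 0.5 * u * di for vc, di in zip(vcm, d)]
    return v1p, v2p

rng = random.Random(3)
v1 = [1.0, 0.0, 0.0]
v2 = [0.0, 0.0, 0.0]
v1p, v2p = collide_pair(v1, v2, rng)
```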

Bobylev, A.V., E-mail: alexander.bobylev@kau.se [Department of Mathematics, Karlstad University, SE-65188 Karlstad (Sweden); Potapenko, I.F., E-mail: firena@yandex.ru [Keldysh Institute for Applied Mathematics, RAS, 125047 Moscow (Russian Federation)

2013-08-01

213

In this note we develop a robust implicit Monte Carlo (IMC) algorithm based on more accurately updating the linearized equilibrium radiation energy density. The method does not introduce oscillations in the solution and has the same limit as Δt→∞ as the standard Fleck and Cummings IMC method. Moreover, the approach we introduce can be trivially added to current implementations of IMC by changing the definition of the Fleck factor. Using this new method we develop an adaptive scheme that uses either standard IMC or the modified method basing the adaptation on a zero-dimensional problem solved in each cell. Numerical results demonstrate that the new method both alleviates the nonphysical overheating that occurs in standard IMC when the time step is large and significantly diminishes the statistical noise in the solution.

Mcclarren, Ryan G [Los Alamos National Laboratory; Urbatsch, Todd J [Los Alamos National Laboratory

2008-01-01

214

Path-integral-expanded-ensemble Monte Carlo method in treatment of the sign problem for fermions.

The expanded-ensemble Monte Carlo method with the Wang-Landau algorithm was used to calculate the ratio of partition functions for classes of permutations in the problem of several interacting quantum particles (fermions) in an external field. Simulations for systems consisting of 2 up to 7 interacting particles in a harmonic or Coulomb field were performed. The presented approach allows one to carry out calculations at low enough temperatures, which makes it possible to extract data for the ground-state energy and low-temperature thermodynamics. PMID:20365297
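A generic Wang-Landau iteration of the kind named here can be illustrated on a toy system, a 6-spin Ising ring, rather than the paper's fermionic application: visited energies are penalized in a running estimate of ln g(E), and the modification factor is halved whenever the energy histogram is flat.

```python
import math
import random

def wang_landau(n_spins=6, f_final=1e-4, seed=5):
    """Wang-Landau flat-histogram sketch for a 1D Ising ring whose
    'energy' is the number of unequal neighboring pairs (always even
    on a ring). Estimates ln g(E) up to an additive constant."""
    rng = random.Random(seed)
    spins = [1] * n_spins
    energy = lambda s: sum(s[i] != s[(i + 1) % n_spins]
                           for i in range(n_spins))
    levels = range(0, n_spins + 1, 2)
    ln_g = {E: 0.0 for E in levels}
    hist = {E: 0 for E in levels}
    ln_f, E = 1.0, energy(spins)
    while ln_f > f_final:
        i = rng.randrange(n_spins)
        spins[i] *= -1                       # propose a single flip
        E_new = energy(spins)
        if rng.random() < math.exp(ln_g[E] - ln_g[E_new]):
            E = E_new                        # accept
        else:
            spins[i] *= -1                   # reject: undo the flip
        ln_g[E] += ln_f                      # penalize current energy
        hist[E] += 1
        # flatness test: every level visited at least 80% of the mean
        if min(hist.values()) > 0.8 * (sum(hist.values()) / len(hist)):
            ln_f /= 2.0
            hist = {k: 0 for k in hist}
    return ln_g

ln_g = wang_landau()
# Exact degeneracies for the 6-ring: g(0)=2, g(2)=30, g(4)=30, g(6)=2,
# so ln g(2) - ln g(0) should approach ln(15).
```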

Voznesenskiy, M A; Vorontsov-Velyaminov, P N; Lyubartsev, A P

2009-12-01

215

NASA Astrophysics Data System (ADS)

The dissertation concerns the extraction, via signal processing, of structural information from the scattering of low megahertz, low power ultrasonic waves in two specific media of great practical interest--fiber reinforced composites and soft biological tissue. In fiber reinforced composites, this work represents the first measurement of second-order statistics in porous laminates, and the first application of Monte Carlo methods to acoustical scattering in composites. A numerical model of porous composites backscatter was derived which is suitable for direct numerical implementation. The model treats the total backscattered field as the result of a two-mode scattering process. In the first mode, the void-free composite is treated as a continuously varying medium in which the density and compressibility are functions of position. The second mode is the distribution of gas voids that failed to escape the material before gel, and are dealt with as discrete Rayleigh scatterers. Convolution techniques were developed that allowed the numerical model to reproduce the long range order seen in the void-free composite. The results of the Monte Carlo derivation were coded, and simulations run with data sets that duplicate the properties of the composite samples used in the study. In the area of tissue characterization, two leading methods have been proposed to extract structural data from the raw backscattered waveforms. Both techniques were developed from an understanding of the periodicities created by semi-regularly spaced, coherent scatterers. In the second half of the dissertation, a complete analytical and numerical treatment of these two techniques was done from a first principles approach. Computer simulations were performed to determine the general behavior of the algorithms. The main focus is on the envelope correlation spectrum, or ECS. 
Monte Carlo methods were employed to examine the signal-to-noise ratio of the ECS in terms of the variances of the backscattered amplitude, phase, spacing, and pulse broadening due to attenuation. Extensive Monte Carlo methods have quantified performance limits of the ECS by calculating the limits on the S/N ratio in terms of the randomization of scattering parameters. A new result is that the S/N ratio was found to be controlled by variance of the scattering amplitude and spatial separation of the scattering centers. (Abstract shortened by UMI.)

Grolemund, Daniel Lee

216

NASA Astrophysics Data System (ADS)

For complex multidimensional systems, Monte Carlo methods are useful for sampling probable regions of a configuration space and, in the context of annealing, for determining "low energy" or "high scoring" configurations. Such methods have been used in protein design as means to identify amino acid sequences that are energetically compatible with a particular backbone structure. As with many other applications of Monte Carlo methods, such searches can be inefficient if trial configurations (protein sequences) in the Markov chain are chosen randomly. Here a mean-field biased Monte Carlo method (MFBMC) is presented and applied to designing and sampling protein sequences. The MFBMC method uses predetermined sequence identity probabilities wi(α) to bias the sequence selection. The wi(α) are calculated using a self-consistent, mean-field theory that can estimate the number and composition of sequences having predetermined values of energetically related foldability criteria. The MFBMC method is applied to both a simple protein model, the 27-mer lattice model, and an all-atom protein model. Compared to conventional Monte Carlo (MC) and configurational bias Monte Carlo (BMC), the MFBMC method converges faster to low energy sequences and samples such sequences more efficiently. The MFBMC method also tolerates faster cooling rates than the MC and BMC methods. The MFBMC method can be applied not only to protein sequence search, but also to a wide variety of polymeric and condensed phase systems.

Zou, Jinming; Saven, Jeffery G.

2003-02-01

217

Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy

NASA Astrophysics Data System (ADS)

Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation time. In MC dose calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to enhance the calculation speed of the MC method for electron-photon transport with high precision, and ultimately to reduce accurate radiotherapy dose calculation time on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for electron-photon coupled transport was presented, with focus on two aspects: first, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed was increased with only a slight reduction in calculation accuracy; second, a variety of MC acceleration methods were applied, for example, using information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the MC convergence rate. The fast MC method was tested on a number of simple physical models and clinical cases, including nasopharyngeal carcinoma, peripheral lung tumor, and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System ARTS as a MC dose verification module.

Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui

2014-06-01

218

NASA Astrophysics Data System (ADS)

The shell model Monte Carlo (SMMC) method enables calculations in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods, and is particularly suitable for the calculation of level densities in the presence of correlations. We review recent advances and applications of SMMC for the microscopic calculation of level densities. Recent developments include (i) a method to calculate accurately the ground-state energy of an odd-mass nucleus, circumventing a sign problem that originates in the projection on an odd number of particles, and (ii) a method to calculate directly level densities, which, unlike state densities, do not include the spin degeneracy of the levels. We calculated the level densities of a family of nickel isotopes 59-64Ni and of a heavy deformed rare-earth nucleus 162Dy and found them to be in close agreement with various experimental data sets.

Alhassid, Y.; Bonett-Matiz, M.; Liu, S.; Mukherjee, A.; Nakada, H.

2014-04-01

219

In Monte Carlo iterated-fission-source calculations relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially-uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)
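The unbiasedness bookkeeping behind such source biasing can be sketched in a simplified, region-wise form. The actual method biases the spatial density of fission sites; the sketch below only shows the weight-correction idea, with illustrative region powers, sampling sites uniformly over regions and attaching weights that undo the bias:

```python
def ufs_weights(region_powers):
    """Simplified uniform-fission-site bookkeeping: sample fission
    sites uniformly over regions (p_biased = 1/N per region) instead
    of in proportion to region power (p_true), and attach to each site
    the weight p_true / p_biased so the weighted source is unbiased.
    Low-power regions thus receive more sites, each carrying a
    proportionally smaller weight, flattening the tally uncertainties."""
    total = sum(region_powers)
    n = len(region_powers)
    return [(p / total) / (1.0 / n) for p in region_powers]

w = ufs_weights([0.7, 0.2, 0.1])
print([round(x, 3) for x in w])  # [2.1, 0.6, 0.3]
```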

Hunter, J. L. [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave., 24-107, Cambridge, MA 02139 (United States)]; Sutton, T. M. [Knolls Atomic Power Laboratory, Bechtel Marine Propulsion Corporation, P. O. Box 1072, Schenectady, NY 12301-1072 (United States)]

2013-07-01

220

Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method

NASA Astrophysics Data System (ADS)

Diffusion Monte Carlo is a highly accurate Quantum Monte Carlo method for electronic structure calculations of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency on parallel machines. This step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm: a simple renumbering of the MPI ranks based on proximity and a space filling curve significantly improves the MPI Allgather performance. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near identical computational tasks that require load balancing.
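The alias method at the heart of this algorithm is, at its core, Walker's O(1) table lookup for sampling a discrete distribution; the paper applies the same table structure to schedule walker transfers between processes. A sketch of the classic construction (Vose's O(n) variant), not the authors' MPI load-balancing code:

```python
import random

def build_alias(probs):
    """Vose's construction of Walker's alias table: O(n) setup,
    O(1) sampling. Each column i keeps probability prob[i] for itself
    and donates the remainder to alias[i]."""
    n = len(probs)
    scaled = [p * n for p in probs]
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    prob, alias = [1.0] * n, list(range(n))
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s] = scaled[s]
        alias[s] = l
        scaled[l] -= 1.0 - scaled[s]   # l absorbs s's deficit
        (small if scaled[l] < 1.0 else large).append(l)
    return prob, alias

def alias_draw(prob, alias, rng):
    """O(1) sample: pick a column uniformly, then a biased coin flip."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]

rng = random.Random(11)
prob, alias = build_alias([0.5, 0.3, 0.2])
counts = [0, 0, 0]
for _ in range(30000):
    counts[alias_draw(prob, alias, rng)] += 1
```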

Sudheer, C. D.; Krishnan, S.; Srinivasan, A.; Kent, P. R. C.

2013-02-01

221

Geostatistical approach to bayesian inversion of geophysical data: Markov chain Monte Carlo method

NASA Astrophysics Data System (ADS)

This paper presents a practical and objective procedure for Bayesian inversion of geophysical data. We have applied geostatistical techniques such as kriging and simulation algorithms to acquire prior model information. Then the Markov chain Monte Carlo (MCMC) method is adopted to infer the characteristics of the marginal distributions of the model parameters. Geostatistics, which is based upon a variogram model, provides a means to analyze and interpret spatially distributed data. For Bayesian inversion of dipole-dipole resistivity data, we have used indicator kriging and simulation techniques to generate cumulative density functions from Schlumberger and well-logging data, obtaining prior information by cokriging and simulation from covariogram models. Indicator approaches make it possible to incorporate non-parametric information into the probability density function. We have also adopted the Markov chain Monte Carlo approach, based on Gibbs sampling, to examine the characteristics of the posterior probability density function and the marginal distributions of each parameter. The MCMC technique provides a robust result from which the information given by the indicator method, which is fundamentally non-parametric, is fully extracted. We have used the prior information proposed by the geostatistical method as the full conditional distribution for Gibbs sampling. To implement the Gibbs sampler, we applied a modified Simulated Annealing (SA) algorithm which effectively searches the global model space. This scheme provides a more effective and robust global sampling algorithm than the previous study.
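Gibbs sampling draws each parameter in turn from its full conditional distribution; a self-contained toy example on a bivariate normal target (not the geophysical posterior of the paper):

```python
import random
import statistics

def gibbs_bivariate_normal(rho, n, seed=2):
    """Gibbs sampling of a zero-mean bivariate normal with correlation
    rho: the full conditionals are x | y ~ N(rho*y, 1 - rho^2) and
    y | x ~ N(rho*x, 1 - rho^2). Alternating these draws yields a
    Markov chain whose stationary marginals are standard normal."""
    rng = random.Random(seed)
    x = y = 0.0
    sd = (1.0 - rho * rho) ** 0.5
    xs = []
    for _ in range(n):
        x = rng.gauss(rho * y, sd)   # draw x from its full conditional
        y = rng.gauss(rho * x, sd)   # draw y from its full conditional
        xs.append(x)
    return xs

xs = gibbs_bivariate_normal(0.8, 20000)
m = statistics.fmean(xs)        # should be near 0
v = statistics.pvariance(xs)    # should be near 1
```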

Oh, S.-H.; Kwon, B.-D.

2001-08-01

222

An off-lattice, self-learning kinetic Monte Carlo method using local environments

NASA Astrophysics Data System (ADS)

We present a method called the local environment kinetic Monte Carlo (LE-KMC) method for efficiently performing off-lattice, self-learning kinetic Monte Carlo (KMC) simulations of activated processes in material systems. Like other off-lattice KMC schemes, new atomic processes can be found on-the-fly in LE-KMC. However, a unique feature of LE-KMC is that as long as the assumption that all processes and rates depend only on the local environment is satisfied, LE-KMC provides a general algorithm for (i) unambiguously describing a process in terms of its local atomic environments, (ii) storing new processes and environments in a catalog for later use with standard KMC, and (iii) updating the system based on the local information once a process has been selected for a KMC move. Search, classification, storage, and retrieval steps needed while employing local environments and processes in the LE-KMC method are outlined. The advantages and computational cost of LE-KMC are then discussed. We assess the performance of the LE-KMC algorithm by considering test systems involving diffusion in submonolayer Ag and Ag-Cu alloy films on the Ag(001) surface.
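The catalog of local environments described in (i)-(iii) can be sketched as a dictionary keyed on a canonical form of the neighborhood; the key construction and helper names below are illustrative assumptions, not the paper's scheme:

```python
def environment_key(center_type, neighbor_types):
    """Canonical key for a local atomic environment: the center species
    plus a sorted tuple of neighbor species, so equivalent environments
    hash identically regardless of the order neighbors were listed.
    A real implementation would also encode relative geometry."""
    return (center_type, tuple(sorted(neighbor_types)))

catalog = {}  # environment key -> list of (process name, rate)

def register(key, process, rate):
    """Store a newly found process under its environment key."""
    catalog.setdefault(key, []).append((process, rate))

def lookup(key):
    """Return known processes for this environment, or None to signal
    that an on-the-fly saddle-point search is needed."""
    return catalog.get(key)

k = environment_key("Ag", ["Cu", "Ag", "Ag"])
register(k, "hop_x", 1.2e9)
# The same environment listed in a different order retrieves the entry:
assert lookup(environment_key("Ag", ["Ag", "Ag", "Cu"])) == [("hop_x", 1.2e9)]
```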

Konwar, Dhrubajit; Bhute, Vijesh J.; Chatterjee, Abhijit

2011-11-01

223

HRMC: Hybrid Reverse Monte Carlo method with silicon and carbon potentials

NASA Astrophysics Data System (ADS)

Fortran 77 code is presented for a hybrid of the Metropolis Monte Carlo (MMC) and Reverse Monte Carlo (RMC) methods for the simulation of amorphous silicon and carbon structures. In addition to the usual constraints of the pair correlation functions and average coordination, the code also incorporates an optional energy constraint. This energy constraint takes the form of either the Environment Dependent Interatomic Potential (applicable to silicon and carbon) or the original and modified Stillinger-Weber potentials (applicable to silicon). The code also allows porous systems to be modeled via a constraint on porosity and internal surface area using a novel restriction on the available simulation volume.
Program summary
Program title: HRMC version 1.0
Catalogue identifier: AEAO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 200 894
No. of bytes in distributed program, including test data, etc.: 907 557
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: Any computer capable of running executables produced by the g77 Fortran compiler
Operating system: Unix, Windows
RAM: Depends on the type of empirical potential used, number of atoms, and which constraints are employed
Classification: 7.7
Nature of problem: Atomic modeling using empirical potentials and experimental data
Solution method: Monte Carlo
Additional comments: The code is not standard FORTRAN 77 but includes some additional features, and therefore generates errors when compiled with the Nag95 compiler. It does compile successfully with the GNU g77 compiler (http://www.gnu.org/software/fortran/fortran.html).
Running time: Depends on the type of empirical potential used, number of atoms, and which constraints are employed. The test included in the distribution took 37 minutes on a DEC Alpha PC.

Opletal, G.; Petersen, T. C.; O'Malley, B.; Snook, I. K.; McCulloch, D. G.; Yarovsky, I.

2008-05-01

224

Sequential Monte Carlo Method for Real-time Tracking of Multiple Targets.

National Technical Information Service (NTIS)

In this project, a Monte Carlo approach to tracking was developed for tracking in cluttered environments and across multiple scales. The Monte Carlo approach was compared with an active contour approach. Specifically, we developed a novel deterministic ap...

B. Li S. T. Acton

2010-01-01

225

Uncertainty analysis using Monte Carlo method in the measurement of phase by ESPI

A method for simultaneously measuring whole field in-plane displacements by using optical fibers, based on the dual-beam illumination principle of electronic speckle pattern interferometry (ESPI), is presented in this paper. A set of single mode optical fibers and a beamsplitter are employed to split the laser beam into four beams of equal intensity. One pair of fibers is utilized to illuminate the sample in the horizontal plane, so it is sensitive only to horizontal in-plane displacement. Another pair of optical fibers is set to be sensitive only to vertical in-plane displacement. Each pair of optical fibers differs in length to avoid unwanted interference. By means of a Fourier-transform method of fringe-pattern analysis (Takeda method), we can obtain the quantitative data of whole field displacements. We found the uncertainty associated with the phases by means of a Monte Carlo-based technique.
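The Monte Carlo evaluation of a phase uncertainty can be sketched in a few lines: perturb the measured intensities with assumed noise, recompute the phase, and take the spread of the results. The four-step phase-shifting formula below is a simple stand-in for the paper's Takeda fringe analysis, and the intensity values and noise level are invented for illustration.

```python
import math
import random
import statistics

def phase(i1, i2, i3, i4):
    # four-step phase-shifting formula, a simple stand-in for the
    # Fourier-transform (Takeda) fringe analysis used in the paper
    return math.atan2(i4 - i2, i1 - i3)

def mc_phase_uncertainty(i_meas, sigma, trials=20000, seed=7):
    # Monte Carlo propagation of distributions (GUM Supplement 1 style):
    # perturb each measured intensity with Gaussian noise of std sigma,
    # recompute the phase, and report the mean and standard uncertainty
    rng = random.Random(seed)
    samples = [phase(*[v + rng.gauss(0.0, sigma) for v in i_meas])
               for _ in range(trials)]
    return statistics.mean(samples), statistics.stdev(samples)

mean_phi, u_phi = mc_phase_uncertainty((1.8, 1.0, 0.2, 1.0), sigma=0.02)
```

For this nearly zero-phase example the Monte Carlo spread agrees with the linearized estimate u ≈ sigma·√2 / (i1 − i3).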

Anguiano Morales, Marcelino; Martinez, Amalia; Rayas, J. A. [Centro de Investigaciones en Optica A. C. Apartado Postal 1-948, 37000 Leon (Mexico); Cordero, Raul R. [Leibniz Universitaet Hannover, Herrenhaeuser Str. 2, D-30419 Hannover (Germany)

2008-04-15

226

NASA Astrophysics Data System (ADS)

We present a non-dynamic atomistic Monte Carlo (MC) methodology for simulating polymeric systems beyond equilibrium by expanding the statistical ensemble to include a tensorial variable accounting for the overall structure of the system subjected to flow. The new method, GENERIC (General Equation for the Nonequilibrium Reversible-Irreversible Coupling) MC [1-3], proposes MC simulations in a generalized ensemble incorporating properly defined thermodynamic fields or Lagrange multipliers, and is capable of exciting the overall conformation of polymeric materials in exactly the same way as an imposed flow field in a dynamic (e.g., nonequilibrium molecular dynamics, NEMD) method. The new method provides us with invaluable information for the 'true' free energy of deformed polymer melts; thus it can guide us in the development of more accurate viscoelastic models.

Baig, Chunggi; Mavrantzas, Vlasis G.

2008-07-01

227

Uncertainty analysis using Monte Carlo method in the measurement of phase by ESPI

NASA Astrophysics Data System (ADS)

A method for simultaneously measuring whole field in-plane displacements by using optical fibers, based on the dual-beam illumination principle of electronic speckle pattern interferometry (ESPI), is presented in this paper. A set of single mode optical fibers and a beamsplitter are employed to split the laser beam into four beams of equal intensity. One pair of fibers is utilized to illuminate the sample in the horizontal plane, so it is sensitive only to horizontal in-plane displacement. Another pair of optical fibers is set to be sensitive only to vertical in-plane displacement. Each pair of optical fibers differs in length to avoid unwanted interference. By means of a Fourier-transform method of fringe-pattern analysis (Takeda method), we can obtain the quantitative data of whole field displacements. We found the uncertainty associated with the phases by means of a Monte Carlo-based technique.

Anguiano Morales, Marcelino; Martínez, Amalia; Rayas, J. A.; Cordero, Raúl R.

2008-04-01

228

NASA Astrophysics Data System (ADS)

We investigate Monte Carlo simulation strategies for determining the effective (``depletion'') potential between a pair of hard spheres immersed in a dense sea of much smaller hard spheres. Two routes to the depletion potential are considered. The first is based on estimates of the insertion probability of one big sphere in the presence of the other; we describe and compare three such methods. The second route exploits collective (cluster) updating to sample the depletion potential as a function of the separation of the big particles; we describe two such methods. For both routes, we find that the sampling efficiency at high densities of small particles can be enhanced considerably by exploiting ``geometrical shortcuts'' that focus the computational effort on a subset of small particles. All the methods we describe are readily extendable to particles interacting via arbitrary potentials.
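The first route rests on estimating an insertion probability by trial insertions. The sketch below illustrates that estimator in the simplest possible setting, a big hard disk inserted into an *ideal* (non-interacting) sea of small disks in two dimensions; in the actual depletion-potential studies the small spheres interact and the big-particle separation enters, so this is only a toy version of the idea, with all parameters invented.

```python
import random

def insertion_probability(rho, r_small, r_big, box=20.0, trials=20000, seed=3):
    # Widom-like trial insertion: place one frozen configuration of small
    # disks (ideal gas, so positions are independent and uniform), then count
    # the fraction of random big-disk insertions that are free of overlap.
    rng = random.Random(seed)
    n = int(rho * box * box)
    small = [(rng.uniform(0, box), rng.uniform(0, box)) for _ in range(n)]
    d2 = (r_small + r_big) ** 2          # overlap if centers closer than this
    ok = 0
    for _ in range(trials):
        x, y = rng.uniform(0, box), rng.uniform(0, box)
        if all(min(abs(x - sx), box - abs(x - sx)) ** 2 +
               min(abs(y - sy), box - abs(y - sy)) ** 2 > d2
               for sx, sy in small):     # periodic minimum-image distances
            ok += 1
    return ok / trials

p = insertion_probability(rho=0.2, r_small=0.1, r_big=0.5)
# for an ideal sea this should be near exp(-rho * pi * (r_small + r_big)**2)
```

The depletion potential along the second route would then follow from the ratio of such insertion probabilities at finite and infinite big-particle separation, beta*W(h) = -ln[P(h)/P(inf)].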

Ashton, D. J.; Sánchez-Gil, V.; Wilding, N. B.

2013-10-01

229

A Deterministic-Monte Carlo Hybrid Method for Time-Dependent Neutron Transport Problems

A new deterministic-Monte Carlo hybrid solution technique is derived for the time-dependent transport equation. This new approach is based on dividing the time domain into a number of coarse intervals and expanding the transport solution in a series of polynomials within each interval. The solutions within each interval can be represented in terms of arbitrary source terms by using precomputed response functions. In the current work, the time-dependent response function computations are performed using the Monte Carlo method, while the global time-step march is performed deterministically. This work extends previous work by coupling the time-dependent expansions to space- and angle-dependent expansions to fully characterize the 1D transport response/solution. More generally, this approach represents an incremental extension of the steady-state coarse-mesh transport method that is based on global-local decompositions of large neutron transport problems. A homogeneous slab problem is discussed as an example of the new developments.

Justin Pounders; Farzad Rahnema

2001-10-01

230

Application of scalar Monte Carlo probability density function method for turbulent spray flames

The objective of the present work is twofold: (1) extend the coupled Monte Carlo probability density function (PDF)/computational fluid dynamics (CFD) computations to the modeling of turbulent spray flames, and (2) extend the PDF/SPRAY/CFD module to parallel computing in order to facilitate large-scale combustor computations. In this approach, the mean gas phase velocity and turbulence fields are determined from a standard turbulence model, the joint composition of species and enthalpy from the solution of a modeled PDF transport equation, and a Lagrangian-based dilute spray model is used for the liquid-phase representation. The PDF transport equation is solved by a Monte Carlo method, and the mean gas phase velocity and turbulence fields together with the liquid phase equations are solved by existing state-of-the-art numerical representations. The application of the method to both open and confined axisymmetric swirl-stabilized spray flames shows good agreement with the measured data. Preliminary estimates indicate that it is well within reach of today's modern parallel computers to do a realistic gas turbine combustor simulation within a reasonable turnaround time. The article provides complete details of the overall algorithm, parallelization, and other numerical issues related to coupling between the three solvers.

Raju, M.S. [Nyma, Inc., Brook Park, OH (United States)

1996-12-01

231

NASA Astrophysics Data System (ADS)

Simple Monte Carlo simulations can assist both the cultural astronomy researcher while the Research Design is developed and the eventual evaluators of research products. Following the method we describe allows assessment of the probability for there to be false positives associated with a site. Even seemingly evocative alignments may be meaningless, depending on the site characteristics and the number of degrees of freedom the researcher allows. In many cases, an observer may have to limit comments to "it is nice and it might be culturally meaningful", rather than saying "it is impressive so it must mean something". We describe a basic language with an associated set of attributes to be cataloged. These can be used to set up simple Monte Carlo simulations for a site. Without corroborating cultural evidence, or trends with similar attributes (for example a number of sites showing the same anticipatory date), the Monte Carlo simulation can be used as a filter to establish the likelihood that the observed alignment phenomena are the result of random factors. Such analysis may temper any eagerness to prematurely attribute cultural meaning to an observation. For the most complete description of an archaeological site, we urge researchers to capture the site attributes in a manner which permits statistical analysis. We also encourage cultural astronomers to record that which does not work, and that which may seem to align, but has no discernable meaning. Properly reporting situational information as tenets of the research design will reduce the subjective nature of archaeoastronomical interpretation. Examples from field work will be discussed.
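A false-positive filter of this kind can be remarkably small. The sketch below, with entirely invented numbers, estimates how often purely random sightlines at a site would produce at least one "alignment" within a given angular tolerance of a set of proposed target azimuths; a high rate warns that an observed alignment may be random.

```python
import random

def false_positive_rate(n_lines, targets, tol_deg, trials=20000, seed=11):
    # Simulate many hypothetical sites, each with n_lines random azimuths.
    # A site counts as a false positive if any azimuth falls within tol_deg
    # of any proposed target azimuth (angular distance taken modulo 360).
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        for _ in range(n_lines):
            az = rng.uniform(0.0, 360.0)
            if any(min(abs(az - t), 360.0 - abs(az - t)) <= tol_deg
                   for t in targets):
                hits += 1
                break
    return hits / trials

# e.g. 6 wall segments tested against 4 solar azimuths with +/- 2 degrees
rate = false_positive_rate(6, [60.0, 120.0, 240.0, 300.0], 2.0)
```

For non-overlapping targets this should agree with the closed form 1 − (1 − M·2·tol/360)^N, here about 0.24: roughly one random site in four would "align" by chance.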

Hull, Anthony B.; Ambruster, C.; Jewell, E.

2012-01-01

232

This article proposes a combined procedure of the Monte-Carlo and finite-volume method (CMCFVM) for solving radiative heat transfer in an absorbing, emitting, and isotropically scattering medium with an isolated boundary heat source. Conventional flux methods such as the finite volume and the discrete ordinates methods are known to be afflicted by ray effects due to their angular discretization.

Seung Wook Baek; Do Young Byun; Shin Jae Kang

2000-01-01

233

On processed splitting methods and high-order actions in path-integral Monte Carlo simulations.

Processed splitting methods are particularly well adapted to carry out path-integral Monte Carlo (PIMC) simulations: since one is mainly interested in estimating traces of operators, only the kernel of the method is necessary to approximate the thermal density matrix. Unfortunately, they suffer the same drawback as standard, nonprocessed integrators: kernels of effective order greater than two necessarily involve some negative coefficients. This problem can be circumvented, however, by incorporating modified potentials into the composition, thus rendering schemes of higher effective order. In this work we analyze a family of fourth-order schemes recently proposed in the PIMC setting, paying special attention to their linear stability properties, and justify their observed behavior in practice. We also propose a new fourth-order scheme requiring the same computational cost but with an enlarged stability interval. PMID:20969377
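The order statements above are easy to check numerically. The sketch below is not one of the paper's processed fourth-order schemes; it merely verifies, on a pair of small non-commuting matrices, that the standard second-order (Strang) splitting kernel exp(hA/2)·exp(hB)·exp(hA/2) has local error O(h³), so halving h shrinks the error by about 2³.

```python
import math

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_scale(X, s):
    return [[X[i][j] * s for j in range(2)] for i in range(2)]

def expm(X, terms=30):
    # Taylor-series matrix exponential (adequate for these tiny matrices)
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = mat_scale(mat_mul(term, X), 1.0 / n)
        result = mat_add(result, term)
    return result

def frob_diff(X, Y):
    return math.sqrt(sum((X[i][j] - Y[i][j]) ** 2
                         for i in range(2) for j in range(2)))

A = [[0.0, 1.0], [0.0, 0.0]]   # two non-commuting generators
B = [[0.0, 0.0], [1.0, 0.0]]

def strang_error(h):
    exact = expm(mat_scale(mat_add(A, B), h))
    kernel = mat_mul(mat_mul(expm(mat_scale(A, h / 2)),
                             expm(mat_scale(B, h))),
                     expm(mat_scale(A, h / 2)))
    return frob_diff(kernel, exact)

ratio = strang_error(0.2) / strang_error(0.1)   # ~ 8 for a 2nd-order kernel
```

Going beyond effective order two with only such exponential factors forces negative coefficients, which is the drawback the modified-potential compositions circumvent.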

Casas, Fernando

2010-10-21

234

The pressure-particle velocity (PU) impedance measurement technique is an experimental method used to measure the surface impedance and the absorption coefficient of acoustic samples in situ or under free-field conditions. In this paper, the measurement uncertainty of the the absorption coefficient determined using the PU technique is explored applying the Monte Carlo method. It is shown that because of the uncertainty, it is particularly difficult to measure samples with low absorption and that difficulties associated with the localization of the acoustic centers of the sound source and the PU sensor affect the quality of the measurement roughly to the same extent as the errors in the transfer function between pressure and particle velocity do. PMID:21786864

Brandão, Eric; Flesch, Rodolfo C C; Lenzi, Arcanjo; Flesch, Carlos A

2011-07-01

235

Monte Carlo methods for optimizing the piecewise constant Mumford-Shah segmentation model

NASA Astrophysics Data System (ADS)

Natural images are depicted in a computer as pixels on a square grid and neighboring pixels are generally highly correlated. This representation can be mapped naturally to a statistical physics framework on a square lattice. In this paper, we developed an effective use of statistical mechanics to solve the image segmentation problem, which is an outstanding problem in image processing. Our Monte Carlo method using several advanced techniques, including block-spin transformation, Eden clustering and simulated annealing, seeks the solution of the celebrated Mumford-Shah image segmentation model. In particular, the advantage of our method is prominent for the case of multiphase segmentation. Our results verify that statistical physics can be a very efficient approach for image processing.

Watanabe, Hiroshi; Sashida, Satoshi; Okabe, Yutaka; Lee, Hwee Kuan

2011-02-01

236

DSMC calculations for the double ellipse. [direct simulation Monte Carlo method

NASA Technical Reports Server (NTRS)

The direct simulation Monte Carlo (DSMC) method involves the simultaneous computation of the trajectories of thousands of simulated molecules in simulated physical space. Rarefied flow about the double ellipse for test case 6.4.1 has been calculated with the DSMC method of Bird. The gas is assumed to be nonreacting nitrogen flowing at a 30 degree incidence with respect to the body axis, and for the surface boundary conditions, the wall is assumed to be diffuse with full thermal accommodation and at a constant wall temperature of 620 K. A parametric study is presented that considers the effect of variations of computational domain, gas model, cell size, and freestream density on surface quantities.

Moss, James N.; Price, Joseph M.; Celenligil, M. Cevdet

1990-01-01

237

Guidelines for Choosing the Transition Matrix in Monte Carlo Methods Using Markov Chains

NASA Astrophysics Data System (ADS)

The sampling method proposed by Metropolis et al. (J. Chem. Phys. 21 (1953), 1087) requires the simulation of a Markov chain with a specified π as its stationary distribution. Hastings (Biometrika 57 (1970), 97) outlined a general procedure for constructing and simulating such a Markov chain. The matrix P = {p_ij} of transition probabilities is constructed using a defined symmetric function s and an arbitrary transition matrix Q. With respect to asymptotic variance reduction, Peskun (Biometrika 60 (1973), 607) determined, for a given Q, the optimum choice for s_ij. Here, guidelines are given for choosing Q so that the resulting Markov chain sampling method is as precise as is practically possible. Examples illustrating the use of the guidelines, including potential applications to problems in statistical mechanics and to the problem of estimating the probability of a simple event by "hit-or-miss" Monte Carlo in conjunction with Markov chain sampling, are discussed.
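The Hastings construction on a small discrete state space can be written out directly. The sketch below builds P from a target π and a proposal Q using the Metropolis acceptance ratio (Peskun's optimal choice of s_ij); the three-state target and symmetric proposal are arbitrary illustrations.

```python
def metropolis_hastings_matrix(pi, q):
    # Build P = {p_ij} from target pi and proposal matrix q using
    # p_ij = q_ij * min(1, pi_j q_ji / (pi_i q_ij)) for i != j;
    # the rejected probability mass stays on the diagonal.
    n = len(pi)
    p = [[0.0] * n for _ in range(n)]
    for i in range(n):
        off_diagonal = 0.0
        for j in range(n):
            if i != j and q[i][j] > 0.0:
                accept = min(1.0, pi[j] * q[j][i] / (pi[i] * q[i][j]))
                p[i][j] = q[i][j] * accept
                off_diagonal += p[i][j]
        p[i][i] = 1.0 - off_diagonal
    return p

pi = [0.2, 0.3, 0.5]                                     # target distribution
q = [[0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]]  # symmetric proposal
P = metropolis_hastings_matrix(pi, q)
# stationarity: sum_i pi_i * p_ij equals pi_j for every state j
```

The guidelines of the paper concern exactly the free choice of q here: different proposals give chains with the same stationary π but very different asymptotic variances.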

Peskun, P. H.

1981-04-01

238

Monte Carlo methods for localization of cones given multielectrode retinal ganglion cell recordings.

It has recently become possible to identify cone photoreceptors in primate retina from multi-electrode recordings of ganglion cell spiking driven by visual stimuli of sufficiently high spatial resolution. In this paper we present a statistical approach to the problem of identifying the number, locations, and color types of the cones observed in this type of experiment. We develop an adaptive Markov Chain Monte Carlo (MCMC) method that explores the space of cone configurations, using a Linear-Nonlinear-Poisson (LNP) encoding model of ganglion cell spiking output, while analytically integrating out the functional weights between cones and ganglion cells. This method provides information about our posterior certainty about the inferred cone properties, and additionally leads to improvements in both the speed and quality of the inferred cone maps, compared to earlier "greedy" computational approaches. PMID:23194406

Sadeghi, K; Gauthier, J L; Field, G D; Greschner, M; Agne, M; Chichilnisky, E J; Paninski, L

2013-01-01

239

Transient condensation of vapor using a direct simulation Monte Carlo method

Vapor is produced from the ICF event as the x-ray energy is deposited at the first wall of the reactor. This vapor must condense back onto the first wall in a timely fashion (<< 1 s) to establish the necessary conditions for beam propagation and the next ICF event. Transient condensation of vapor is studied on the basis of the Boltzmann equation using a direct simulation Monte Carlo Method. The method describes the molecular behavior of continuum mechanics transition flows in a way consistent with the Boltzmann equation. The thermal resistance of the condensed film is included in the flow representation using a laminar Nusselt analysis to determine the interface temperature of the condensed film. The condensate mass flux in a quasi-steady state is computed and compared with a number of analytical models and experimental data. The results are consistent qualitatively with the experimental data of mercury condensation on a vertical plate.

El-Afify, M.M.; Corradini, M.L.

1989-03-01

240

Monte Carlo methods for localization of cones given multielectrode retinal ganglion cell recordings

It has recently become possible to identify cone photoreceptors in primate retina from multi-electrode recordings of ganglion cell spiking driven by visual stimuli of sufficiently high spatial resolution. In this paper we present a statistical approach to the problem of identifying the number, locations, and color types of the cones observed in this type of experiment. We develop an adaptive Markov Chain Monte Carlo (MCMC) method that explores the space of cone configurations, using a Linear-Nonlinear-Poisson (LNP) encoding model of ganglion cell spiking output, while analytically integrating out the functional weights between cones and ganglion cells. This method provides information about our posterior certainty about the inferred cone properties, and additionally leads to improvements in both the speed and quality of the inferred cone maps, compared to earlier “greedy” computational approaches.

Sadeghi, K.; Gauthier, J.L.; Field, G.D.; Greschner, M.; Agne, M.; Chichilnisky, E.J.; Paninski, L.

2013-01-01

241

The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of the ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French body CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic Uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation by deterministic (ERANOS French code) and Monte Carlo (MCNPX US code) calculations of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the two deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)

Bianchini, G.; Burgio, N.; Carta, M. [ENEA C.R. CASACCIA, via Anguillarese, 301, 00123 S. Maria di Galeria Roma (Italy); Peluso, V. [ENEA C.R. BOLOGNA, Via Martiri di Monte Sole, 4, 40129 Bologna (Italy); Fabrizio, V.; Ricci, L. [Univ. of Rome La Sapienza, C/o ENEA C.R. CASACCIA, via Anguillarese, 301, 00123 S. Maria di Galeria Roma (Italy)

2012-07-01

242

Electronic structure of transition metal and f-electron oxides by quantum Monte Carlo methods

NASA Astrophysics Data System (ADS)

We report on many-body quantum Monte Carlo (QMC) calculations of electronic structure of systems with strong correlation effects. These methods have been applied to ambient and high pressure transition metal oxides and, very recently, to selected f-electron oxides such as mineral thorianite (ThO2). QMC methods enabled us to calculate equilibrium characteristics such as cohesion, equilibrium lattice constants, bulk moduli, and electronic gaps with an excellent agreement with experiment without any non-variational parameters. In addition, for selected cases, the equations of state were calculated as well. The calculations were carried out using the state-of-the-art twist-averaged sampling of the Brillouin zone, small-core Dirac-Fock pseudopotentials and one-particle orbitals from hybrid DFT functionals with varying weight of the exact exchange. This enabled us to build high-accuracy Slater-Jastrow explicitly correlated wavefunctions. In particular, we have employed optimization of the weight of the exact exchange in B3LYP and PBE0 functionals to minimize the fixed-node error in the diffusion Monte Carlo calculations. Instead of empirical fitting, we therefore use a variational and explicitly many-body QMC method to find the value of the optimal weight, which falls between 15 and 30%. This finding is further supported by recent calculations of transition metal-organic systems such as transition metal-porphyrins and others, showing thus a very wide range of its applicability. The calculations of ThO2 appear to follow the same pattern and enabled us to reproduce very well the experimental cohesion and very large electronic gap. In addition, we have made an important progress also in explicit treatment of the spin-orbit interaction which has been so far neglected in QMC calculations. Our studies illustrate the remarkable capabilities of QMC methods for strongly correlated solid systems.

Mitas, L.; Hu, S.; Kolorenc, J.

2012-12-01

243

Risk assessment of water quality using Monte Carlo simulation and artificial neural network method.

There is always uncertainty in any water quality risk assessment. A Monte Carlo simulation (MCS) is regarded as a flexible, efficient method for characterizing such uncertainties. However, the required computational effort for MCS-based risk assessment is great, particularly when the number of random variables is large and the complicated water quality models have to be calculated by a computationally expensive numerical method, such as the finite element method (FEM). To address this issue, this paper presents an improved method that incorporates an artificial neural network (ANN) into the MCS to enhance the computational efficiency of conventional risk assessment. The conventional risk assessment uses the FEM to create multiple water quality models, which can be time consuming or cumbersome. In this paper, an ANN model was used as a substitute for the iterative FEM runs, and thus, the number of water quality models that must be calculated can be dramatically reduced. A case study on the chemical oxygen demand (COD) pollution risks in the Lanzhou section of the Yellow River in China was taken as a reference. Compared with the conventional risk assessment method, the ANN-MCS-based method can save much computational effort without a loss of accuracy. The results show that the proposed method in this paper is more applicable to assess water quality risks. Because the characteristics of this ANN-MCS-based technique are quite general, it is hoped that the technique can also be applied to other MCS-based uncertainty analysis in the environmental field. PMID:23583753
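The surrogate-accelerated pattern described above can be sketched compactly. In place of an ANN trained on FEM runs, the toy below fits an interpolating quadratic to three runs of a "expensive" model (itself a stand-in for an FEM water-quality solve) and then performs the Monte Carlo risk estimate on the cheap surrogate; all functions, thresholds, and distributions are invented for illustration.

```python
import random

def expensive_model(x):
    # stand-in for a costly FEM water-quality solve (e.g. COD concentration)
    return 1.0 + 0.8 * x + 0.3 * x * x

def fit_surrogate(xs, ys):
    # Surrogate "training": an interpolating quadratic in Lagrange form.
    # The paper trains an ANN on many FEM runs; a quadratic keeps this
    # sketch dependency-free while playing the same substitute-model role.
    x0, x1, x2 = xs
    y0, y1, y2 = ys
    def s(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2)) +
                y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2)) +
                y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return s

xs = (-2.0, 0.0, 2.0)                       # three "FEM" training runs
surrogate = fit_surrogate(xs, [expensive_model(x) for x in xs])

rng = random.Random(9)
inputs = [rng.gauss(0.0, 1.0) for _ in range(50000)]   # uncertain input
threshold = 2.0                                        # water-quality limit
p_direct = sum(expensive_model(x) > threshold for x in inputs) / len(inputs)
p_surrogate = sum(surrogate(x) > threshold for x in inputs) / len(inputs)
```

Only three expensive runs were needed to train the surrogate, yet the 50,000-sample risk estimate matches the direct one, which is the computational saving the ANN-MCS method exploits.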

Jiang, Yunchao; Nan, Zhongren; Yang, Sucai

2013-06-15

244

Determination of phase equilibria in confined systems by open pore cell Monte Carlo method

NASA Astrophysics Data System (ADS)

We present a modification of the molecular dynamics simulation method with a unit pore cell with imaginary gas phase [M. Miyahara, T. Yoshioka, and M. Okazaki, J. Chem. Phys. 106, 8124 (1997)] designed for determination of phase equilibria in nanopores. This new method is based on a Monte Carlo technique and it combines the pore cell, opened to the imaginary gas phase (open pore cell), with a gas cell to measure the equilibrium chemical potential of the confined system. The most striking feature of our new method is that the confined system is steadily led to a thermodynamically stable state by forming concave menisci in the open pore cell. This feature of the open pore cell makes it possible to obtain the equilibrium chemical potential with only a single simulation run, unlike existing simulation methods, which need a number of additional runs. We apply the method to evaluate the equilibrium chemical potentials of confined nitrogen in carbon slit pores and silica cylindrical pores at 77 K, and show that the results are in good agreement with those obtained by two conventional thermodynamic integration methods. Moreover, we also show that the proposed method can be particularly useful for determining vapor-liquid and vapor-solid coexistence curves and the triple point of the confined system.

Miyahara, Minoru T.; Tanaka, Hideki

2013-02-01

245

Forwards and Backwards Modelling of Ashfall Hazards in New Zealand by Monte Carlo Methods

NASA Astrophysics Data System (ADS)

We have developed a technique for quantifying the probability of particular thicknesses of airfall ash from a volcanic eruption at any given site, using Monte Carlo methods, for hazards planning and insurance purposes. We use an established program (ASHFALL) to model individual eruptions, where the likely thickness of ash deposited at selected sites depends on the location of the volcano, eruptive volume, column height and ash size, and the wind conditions. A Monte Carlo formulation then allows us to simulate the variations in eruptive volume and in wind conditions by analysing repeat eruptions, each time allowing the parameters to vary randomly according to known or assumed distributions. Actual wind velocity profiles are used, with randomness included by selection of a starting date. We show how this method can handle the effects of multiple volcanic sources by aggregation, each source with its own characteristics. This follows a similar procedure which we have used for earthquake hazard assessment. The result is estimates of the frequency with which any given depth of ash is likely to be deposited at the selected site, accounting for all volcanoes that might affect it. These numbers are expressed as annual probabilities or as mean return periods. We can also use this method for obtaining an estimate of how often and how large the eruptions from a particular volcano have been. Results from ash cores in Auckland can give useful bounds for the likely total volumes erupted from the volcano Mt Egmont/Mt Taranaki, 280 km away, during the last 140,000 years, information difficult to obtain from local tephra stratigraphy.
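The forward Monte Carlo loop has a simple skeleton: per simulated eruption, sample a volume and a wind condition, compute the thickness at the site, and convert the exceedance fraction into an annual probability. The toy below follows that skeleton only; the thinning law, distributions, and every parameter are invented stand-ins for the ASHFALL dispersal model and the measured wind profiles.

```python
import math
import random

def annual_exceedance_probability(n_eruptions=20000, annual_rate=0.001,
                                  site_km=100.0, threshold_mm=10.0, seed=5):
    # Per simulated eruption: sample an eruptive volume (lognormal) and a
    # wind azimuth (uniform), then apply a crude thickness model in which
    # only downwind sites receive ash, thinning exponentially with distance.
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_eruptions):
        volume_km3 = rng.lognormvariate(mu=-1.0, sigma=1.0)
        wind_deg = rng.uniform(0.0, 360.0)
        downwind = max(0.0, math.cos(math.radians(wind_deg)))
        thickness_mm = (1000.0 * volume_km3 * downwind *
                        math.exp(-site_km / 50.0))
        if thickness_mm >= threshold_mm:
            exceed += 1
    # exceedance fraction per eruption times the eruption rate gives the
    # annual probability that the threshold thickness is deposited
    return annual_rate * exceed / n_eruptions

p_annual = annual_exceedance_probability()
```

Aggregating over multiple volcanic sources amounts to summing such annual probabilities (or, more carefully, combining them as 1 − Π(1 − p_i)), which mirrors the multi-source procedure described above.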

Hurst, T.; Smith, W. D.; Bibby, H. M.

2003-12-01

246

Monte Carlo Methods for Estimating Interfacial Free Energies and Line Tensions

NASA Astrophysics Data System (ADS)

Excess contributions to the free energy due to interfaces occur for many problems encountered in the statistical physics of condensed matter when coexistence between different phases is possible (e.g. wetting phenomena, nucleation, crystal growth, etc.). This article reviews two methods to estimate both interfacial free energies and line tensions by Monte Carlo simulations of simple models, (e.g. the Ising model, a symmetrical binary Lennard-Jones fluid exhibiting a miscibility gap, and a simple Lennard-Jones fluid). One method is based on thermodynamic integration. This method is useful to study flat and inclined interfaces for Ising lattices, allowing also the estimation of line tensions of three-phase contact lines, when the interfaces meet walls (where "surface fields" may act). A generalization to off-lattice systems is described as well. The second method is based on the sampling of the order parameter distribution of the system throughout the two-phase coexistence region of the model. Both the interface free energies of flat interfaces and of (spherical or cylindrical) droplets (or bubbles) can be estimated, including also systems with walls, where sphere-cap shaped wall-attached droplets occur. The curvature-dependence of the interfacial free energy is discussed, and estimates for the line tensions are compared to results from the thermodynamic integration method. Basic limitations of all these methods are critically discussed, and an outlook on other approaches is given.

Binder, Kurt; Block, Benjamin; Das, Subir K.; Virnau, Peter; Winter, David

2011-08-01

247

NASA Technical Reports Server (NTRS)

The results are reported of two unrelated studies. The first was an investigation of the formulation of the equations for non-uniform unsteady flows, by perturbation of an irrotational flow to obtain the linear Green's equation. The resulting integral equation was found to contain a kernel which could be expressed as the solution of the adjoint flow equation, a linear equation for small perturbations, but with non-constant coefficients determined by the steady flow conditions. It is believed that the non-uniform flow effects may prove important in transonic flutter, and that in such cases, the use of doublet type solutions of the wave equation would then prove to be erroneous. The second task covered an initial investigation into the use of the Monte Carlo method for solution of acoustical field problems. Computed results are given for a rectangular room problem, and for a problem involving a circular duct with a source located at the closed end.

Haviland, J. K.

1974-01-01

248

On choosing effective elasticity tensors using a Monte-Carlo method

NASA Astrophysics Data System (ADS)

A generally anisotropic elasticity tensor can be related to its closest counterparts in various symmetry classes. We refer to these counterparts as effective tensors in these classes. In finding effective tensors, we do not assume a priori orientations of their symmetry planes and axes. Knowledge of orientations of Hookean solids allows us to infer properties of materials represented by these solids. Obtaining orientations and parameter values of effective tensors is a highly nonlinear process involving finding absolute minima for orthogonal projections under all three-dimensional rotations. Given the standard deviations of the components of a generally anisotropic tensor, we examine the influence of measurement errors on the properties of effective tensors. We use a global optimization method to generate thousands of realizations of a generally anisotropic tensor, subject to errors. Using this optimization, we perform a Monte Carlo analysis of distances between that tensor and its counterparts in different symmetry classes, as well as of their orientations and elasticity parameters.

Danek, Tomasz; Slawinski, Michael A.

2014-03-01

249

Accelerating mesh-based Monte Carlo method on modern CPU architectures

In this report, we discuss the use of contemporary ray-tracing techniques to accelerate 3D mesh-based Monte Carlo photon transport simulations. Single Instruction Multiple Data (SIMD) based computation and branch-less design are exploited to accelerate ray-tetrahedron intersection tests and yield a 2-fold speed-up for ray-tracing calculations on a multi-core CPU. As part of this work, we have also studied SIMD-accelerated random number generators and math functions. The combination of these techniques achieved an overall improvement of 22% in simulation speed as compared to using a non-SIMD implementation. We applied this new method to analyze a complex numerical phantom and both the phantom data and the improved code are available as open-source software at http://mcx.sourceforge.net/mmc/.
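The inner kernel being accelerated is a ray/element intersection test; a ray-tetrahedron test can be composed from tests against its four triangular faces, and it is this per-face arithmetic that the paper vectorizes with SIMD. A scalar Python version of the standard Möller-Trumbore ray/triangle test is sketched below for reference (this is the generic published algorithm, not code from the MMC package).

```python
def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    # Moller-Trumbore test: returns the distance t along ray orig + t*d at
    # which it crosses triangle (v0, v1, v2), or None if it misses.
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                 # ray parallel to the triangle plane
    inv = 1.0 / det
    t_vec = sub(orig, v0)
    u = dot(t_vec, p) * inv         # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(d, q) * inv             # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

# ray from z = -1 straight up through the unit triangle in the z = 0 plane
t = ray_triangle((0.2, 0.2, -1.0), (0.0, 0.0, 1.0),
                 (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

Because the arithmetic is branch-heavy only at the early-exit tests, a branch-less SIMD variant can evaluate several faces (or rays) at once and mask out the misses, which is the source of the reported speed-up.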

Fang, Qianqian; Kaeli, David R.

2012-01-01

250

To evaluate the bootstrap current in nonaxisymmetric toroidal plasmas quantitatively, a δf Monte Carlo method is incorporated into the moment approach. From the drift-kinetic equation with the pitch-angle scattering collision operator, the bootstrap current and neoclassical conductivity coefficients are calculated. The neoclassical viscosity is evaluated from these two monoenergetic transport coefficients. Numerical results obtained by the δf Monte Carlo method for a model heliotron are in reasonable agreement with asymptotic formulae and with the results obtained by the variational principle.

Matsuyama, A. [Graduate School of Energy Science, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan); Isaev, M. Yu. [Nuclear Fusion Institute, RRC Kurchatov Institute, 123182 Moscow (Russian Federation); Watanabe, K. Y.; Suzuki, Y.; Nakajima, N. [National Institute for Fusion Science, Toki, Gifu 509-5292 (Japan); Hanatani, K. [Institute of Advanced Energy, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan); Cooper, W. A.; Tran, T. M. [Centre de Recherches en Physique des Plasmas, Association Euratom-Suisse, Ecole Polytechnique Federale de Lausanne, CH1015 Lausanne (Switzerland)

2009-05-15

251

Uniform-acceptance force-bias Monte Carlo method with time scale to study solid-state diffusion

NASA Astrophysics Data System (ADS)

Monte Carlo (MC) methods have a long-standing history as partners of molecular dynamics (MD) to simulate the evolution of materials at the atomic scale. Among these techniques, the uniform-acceptance force-bias Monte Carlo (UFMC) method [G. Dereli, Mol. Simul. 8, 351 (1992)] has recently attracted attention [M. Timonova et al., Phys. Rev. B 81, 144107 (2010)] thanks to its apparent capacity to simulate physical processes in a reduced number of iterations compared to classical MD methods. The origin of this efficiency remains, however, unclear. In this work we derive a UFMC method starting from basic thermodynamic principles, which leads to an intuitive and unambiguous formalism. The approach includes a statistically relevant time step per Monte Carlo iteration, showing a significant speed-up compared to MD simulations. This time-stamped force-bias Monte Carlo (tfMC) formalism is tested on both simple one-dimensional and three-dimensional systems. Both test cases give excellent results in agreement with analytical solutions and literature reports. The inclusion of a time scale, the simplicity of the method, and the enhancement of the time step compared to classical MD methods make this method very appealing for studying the dynamics of many-particle systems.

Mees, Maarten J.; Pourtois, Geoffrey; Neyts, Erik C.; Thijsse, Barend J.; Stesmans, André

2012-04-01

252

NASA Astrophysics Data System (ADS)

Data assimilation is routinely employed in meteorology, engineering and computer sciences to optimally combine noisy observations with prior model information for obtaining better estimates of a state, and thus better forecasts, than achieved by ignoring data uncertainties. Earthquake forecasting, too, suffers from measurement errors and partial model information and may thus gain significantly from data assimilation. We present perhaps the first fully implementable data assimilation method for earthquake forecasts generated by a point-process model of seismicity. We test the method on a synthetic and pedagogical example of a renewal process observed in noise, which is relevant for the seismic gap hypothesis, models of characteristic earthquakes and recurrence statistics of large quakes inferred from paleoseismic data records. To address the non-Gaussian statistics of earthquakes, we use sequential Monte Carlo methods, a set of flexible simulation-based methods for recursively estimating arbitrary posterior distributions. We perform extensive numerical simulations to demonstrate the feasibility and benefits of forecasting earthquakes based on data assimilation.
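The sequential Monte Carlo machinery referred to here is, at its core, the bootstrap particle filter. The toy below tracks a 1-D Gaussian random walk observed in noise, a deliberately simple stand-in for the renewal-process seismicity model; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(obs, n_part=2000, sig_x=0.5, sig_y=1.0):
    """Bootstrap (sequential importance resampling) filter for a 1-D random
    walk observed in Gaussian noise; returns the posterior mean at each step."""
    parts = rng.normal(0.0, 1.0, n_part)
    means = []
    for y in obs:
        parts = parts + rng.normal(0.0, sig_x, n_part)   # propagate the prior
        w = np.exp(-0.5 * ((y - parts) / sig_y) ** 2)    # likelihood weights
        w /= w.sum()
        means.append(np.sum(w * parts))                  # posterior mean
        parts = rng.choice(parts, n_part, p=w)           # resample
    return np.array(means)
```

Because each step only needs samples from the transition density and pointwise likelihood evaluations, the same loop accommodates strongly non-Gaussian models such as point processes.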

Werner, M. J.; Ide, K.; Sornette, D.

2011-02-01

253

Uncertainty Quantification of Prompt Fission Neutron Spectra Using the Unified Monte Carlo Method

NASA Astrophysics Data System (ADS)

In the ENDF/B-VII.1 nuclear data library, the existing covariance evaluations of the prompt fission neutron spectra (PFNS) were computed by combining the available experimental differential data with theoretical model calculations, relying on the use of a first-order linear Bayesian approach, the Kalman filter. This approach assumes that the theoretical model response to changes in input model parameters is linear about the a priori central values. While the Unified Monte Carlo (UMC) method remains a Bayesian approach, like the Kalman filter, this method does not make any assumption about the linearity of the model response or the shape of the a posteriori distribution of the parameters. By sampling from a distribution centered about the a priori model parameters, the UMC method computes the moments of the a posteriori parameter distribution. As the number of samples increases, the statistical noise in the computed a posteriori moments decreases and an appropriately converged solution corresponding to the true mean of the a posteriori PDF results. The UMC method has been successfully implemented using both a uniform and a Gaussian sampling distribution and has been used for the evaluation of the PFNS and its associated uncertainties. While many of the UMC results are similar to the first-order Kalman filter results, significant differences are shown when experimental data are excluded from the evaluation process. When experimental data are included, a few small nonlinearities are present in the high outgoing energy tail of the PFNS.
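The UMC idea, sampling the prior and forming likelihood-weighted posterior moments with no linearization of the model, can be sketched as plain importance sampling. This is an editorial illustration with a Gaussian prior and likelihood; the PFNS model itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def umc_moments(model, prior_mean, prior_cov, data, data_cov, n=100_000):
    """Brute-force Bayesian moments: sample parameters from the (Gaussian)
    prior, weight each sample by the data likelihood, and form weighted
    moments. No linearity of `model` is assumed, which is the point of UMC."""
    theta = rng.multivariate_normal(prior_mean, prior_cov, n)
    resid = np.array([model(t) for t in theta]) - data          # (n, m)
    inv = np.linalg.inv(data_cov)
    chi2 = np.einsum('ij,jk,ik->i', resid, inv, resid)
    w = np.exp(-0.5 * (chi2 - chi2.min()))                      # stabilized
    w /= w.sum()
    mean = w @ theta
    cov = (theta - mean).T @ ((theta - mean) * w[:, None])
    return mean, cov
```

For a linear model this reproduces the Kalman/least-squares posterior, which makes a convenient cross-check.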

Rising, M. E.; Talou, P.; Prinja, A. K.

2014-04-01

254

Low-Density Nozzle Flow by the Direct Simulation Monte Carlo and Continuum Methods

NASA Technical Reports Server (NTRS)

Two different approaches, the direct simulation Monte Carlo (DSMC) method based on molecular gasdynamics, and a finite-volume approximation of the Navier-Stokes equations, which are based on continuum gasdynamics, are employed in the analysis of a low-density gas flow in a small converging-diverging nozzle. The fluid experiences various kinds of flow regimes including continuum, slip, transition, and free-molecular. Results from the two numerical methods are compared with Rothe's experimental data, in which density and rotational temperature variations along the centerline and at various locations inside a low-density nozzle were measured by the electron-beam fluorescence technique. The continuum approach showed good agreement with the experimental data as far as density is concerned. The results from the DSMC method showed good agreement with the experimental data, both in the density and the rotational temperature. It is also shown that the simulation parameters, such as the gas/surface interaction model, the energy exchange model between rotational and translational modes, and the viscosity-temperature exponent, have substantial effects on the results of the DSMC method.

Chung, Chang-Hong; Kim, Sku C.; Stubbs, Robert M.; Dewitt, Kenneth J.

1994-01-01

255

Monte Carlo methods and their analysis for Coulomb collisions in multicomponent plasmas

NASA Astrophysics Data System (ADS)

A general approach to Monte Carlo methods for Coulomb collisions is proposed. Its key idea is an approximation of Landau-Fokker-Planck equations by Boltzmann equations of quasi-Maxwellian kind. It means that the total collision frequency for the corresponding Boltzmann equation does not depend on the velocities. This makes the simulation process very simple, since the collision pairs can be chosen arbitrarily, without restriction. It is shown that this approach includes the well-known methods of Takizuka and Abe (1977) [12] and Nanbu (1997) as particular cases, and generalizes the approach of Bobylev and Nanbu (2000). The numerical scheme of this paper is simpler than the schemes by Takizuka and Abe [12] and by Nanbu. We derive it for the general case of multicomponent plasmas and show some numerical tests for the two-component (electrons and ions) case. An optimal choice of parameters for speeding up the computations is also discussed. It is also proved that the order of approximation is not worse than O(√ε), where ε is a parameter of the approximation equivalent to the time step Δt in earlier methods. A similar estimate is obtained for the methods of Takizuka and Abe and of Nanbu.
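A minimal sketch of the quasi-Maxwellian idea for equal-mass particles: because the collision frequency is velocity-independent, pairs are drawn uniformly at random, and each pair's relative velocity is rotated by a small random angle in the Takizuka-Abe manner. The scattering-variance parameter `nu_dt` is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def collide_pairs(v, nu_dt=0.05):
    """One collision step for an (even-sized) set of equal-mass particle
    velocities v, shape (n, 3): pair particles uniformly at random and
    scatter each pair's relative velocity by a small random angle."""
    idx = rng.permutation(len(v))
    for a, b in idx.reshape(-1, 2):
        g = v[a] - v[b]
        gmag = np.linalg.norm(g)
        if gmag == 0.0:
            continue
        delta = rng.normal(0.0, np.sqrt(nu_dt))   # tan(theta/2) ~ N(0, nu*dt)
        theta = 2.0 * np.arctan(delta)
        phi = rng.uniform(0.0, 2.0 * np.pi)
        # orthonormal pair perpendicular to g
        e1 = np.cross(g, [1.0, 0.0, 0.0])
        if np.linalg.norm(e1) < 1e-12 * gmag:
            e1 = np.cross(g, [0.0, 1.0, 0.0])
        e1 /= np.linalg.norm(e1)
        e2 = np.cross(g, e1) / gmag
        gnew = (np.cos(theta) * g
                + gmag * np.sin(theta) * (np.cos(phi) * e1 + np.sin(phi) * e2))
        dv = 0.5 * (gnew - g)                     # equal masses
        v[a] += dv
        v[b] -= dv
    return v
```

Momentum and energy are conserved pair by pair, since the relative speed is unchanged and the center-of-mass velocity is untouched.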

Bobylev, A. V.; Potapenko, I. F.

2013-08-01

256

Implementation of the probability table method in a continuous-energy Monte Carlo code system

RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5.
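The sampling step itself reduces to a table lookup. The sketch below is hypothetical in layout (it is not the RACER or ENDF/B format): each row holds an energy, cumulative band probabilities, and the band cross sections, and a uniform deviate selects the band.

```python
import bisect
import random

def sample_xs(prob_table, energy, u=None):
    """Sample a cross section in the URR from a probability table: pick the
    row by energy (clamped to the table), then draw a band from the
    cumulative band probabilities. Table layout is illustrative only:
    prob_table is a sorted list of (energy, cumulative_probs, xs_bands)."""
    energies = [row[0] for row in prob_table]
    i = min(bisect.bisect_left(energies, energy), len(prob_table) - 1)
    _, cum, bands = prob_table[i]
    u = random.random() if u is None else u
    return bands[bisect.bisect_right(cum[:-1], u)]
```

Re-sampling the band at each flight preserves the self-shielding statistics that a dilute-average cross section washes out.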

Sutton, T.M.; Brown, F.B. [Lockheed Martin Corp., Schenectady, NY (United States)

1998-10-01

257

Recent developments in quantum Monte Carlo methods for electronic structure of atomic clusters

NASA Astrophysics Data System (ADS)

Recent developments of quantum Monte Carlo (QMC) for electronic structure calculations of clusters, other nanomaterials and quantum systems will be reviewed. QMC methodology is based on a combination of analytical insights about properties of exact wavefunctions, explicit treatment of electron-electron correlation and robustness of computational stochastic techniques. In the course of QMC development for calculations of real materials, small and medium size clusters proved to be invaluable systems both for testing and for revealing unique insights into electron correlation effects in nanostructured materials. The method shows remarkable accuracy which will be demonstrated on calculations of magnetic states of transition metal atoms encapsulated in silicon cluster cages, optical excitations in quantum nanodots and molecules and on studies of reactions in biomolecular metallic centers. Indeed, in some cases QMC turned out to be the only feasible method to provide the necessary accuracy. I will also discuss current QMC developments in using correlated sampling techniques for efficient evaluation of energy differences, efforts to reach beyond the fixed-node approximation and on incorporating QMC methods into multi-scale simulation approaches. In collaboration with P. Sen, L.K. Wagner, Z.M. Helms, M. Bajdich, G. Drobny, and J.C. Grossman. Supported by NSF, ONR and DARPA.

Mitas, Lubos

2004-03-01

258

Statistical Properties of Nuclei by the Shell Model Monte Carlo Method

NASA Astrophysics Data System (ADS)

We use quantum Monte Carlo methods in the framework of the interacting nuclear shell model to calculate the statistical properties of nuclei at finite temperature and/or excitation energies. With this approach we can carry out realistic calculations in much larger configuration spaces than are possible by conventional methods. A major application of the methods has been the microscopic calculation of nuclear partition functions and level densities, taking into account both correlations and shell effects. Our results for nuclei in the mass region A ~ 50 - 70 are in remarkably good agreement with experimental level densities without any adjustable parameters and are an improvement over empirical formulas. We have recently extended the shell model theory of level statistics to higher temperatures, including continuum effects. We have also constructed simple statistical models to explain the dependence of the microscopically calculated level densities on good quantum numbers such as parity. Thermal signatures of pairing correlations are identified through odd-even effects in the heat capacity.

Alhassid, Y.

2005-05-01

259

NASA Astrophysics Data System (ADS)

One of the most useful tools for modelling rarefied hypersonic flows is the Direct Simulation Monte Carlo (DSMC) method. Simulator particle movement and collision calculations are combined with statistical procedures to model thermal non-equilibrium flow-fields described by the Boltzmann equation. The Macroscopic Chemistry Method for DSMC simulations was developed to simplify the inclusion of complex thermal non-equilibrium chemistry. The macroscopic approach uses statistical information which is calculated during the DSMC solution process in the modelling procedures. Here it is shown how inclusion of macroscopic information in models of chemical kinetics, electronic excitation, ionization, and radiation can enhance the capabilities of DSMC to model flow-fields where a range of physical processes occur. The approach is applied to the modelling of a 6.4 km/s nitrogen shock wave and results are compared with those from existing shock-tube experiments and continuum calculations. Reasonable agreement between the methods is obtained. The quality of the comparison is highly dependent on the set of vibrational relaxation and chemical kinetic parameters employed.

Goldsworthy, M. J.

2012-10-01

260

Uncertainty and confidence intervals in optical design using the Monte Carlo ray-trace method

NASA Astrophysics Data System (ADS)

The increasing use of probabilistic methods, such as the Monte Carlo ray-trace (MCRT) method, in thermal radiation and optical modeling has created a general awareness in the community of the need for a protocol to predict, to a specified level of confidence, the uncertainty of the results obtained using these methods. This paper presents such a protocol applied to models of radiometric channels used in space-based Earth observations. It is anticipated that the same protocol, with suitable modification, may be extended to data from actual instruments. The authors and their colleagues have developed a powerful generic MCRT-based computational environment that, among other features, is capable of simulating radiative exchange among surfaces and within enclosures. As in any MCRT thermal-radiative/optical model, the spatial resolution and accuracy of the results obtained depend on the fineness of the mesh, the number of rays traced, and the accuracy of directional and spectral surface property models. The protocol presented in this paper identifies and quantifies the contribution of these factors to the ultimate uncertainty in predicted results and to their related confidence intervals.
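The paper's protocol is considerably more detailed, but its statistical core can be illustrated by the standard batch-means estimate: split the rays into batches and use the spread of batch means to form a confidence interval for the tally.

```python
import math

def mc_mean_ci(sample_fn, n_rays=10_000, n_batches=20, z=1.96):
    """Estimate a Monte Carlo tally and an approximate 95% confidence
    interval from batch means: split the rays into batches and use the
    sample variance of the batch means."""
    batch = n_rays // n_batches
    means = []
    for _ in range(n_batches):
        means.append(sum(sample_fn() for _ in range(batch)) / batch)
    m = sum(means) / n_batches
    var = sum((x - m) ** 2 for x in means) / (n_batches - 1)
    half = z * math.sqrt(var / n_batches)
    return m, (m - half, m + half)
```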

Sanchez, Maria C.; Nevarez, Felix J.; Mahan, J. Robert; Priestley, Kory J.

2001-02-01

261

Density-of-states based Monte Carlo methods for simulation of biological systems

NASA Astrophysics Data System (ADS)

We have developed density-of-states [1] based Monte Carlo techniques for simulation of biological molecules. Two such methods are discussed. The first, Configurational Temperature Density of States (CTDOS) [2], relies on computing the density of states of a peptide system from knowledge of its configurational temperature. The reciprocal of this intrinsic temperature, computed from instantaneous configurational information of the system, is integrated to arrive at the density of states. The method shows improved efficiency and accuracy over techniques that are based on histograms of random visits to distinct energy states. The second approach, Expanded Ensemble Density of States (EXEDOS), incorporates elements from both the random walk method and the expanded ensemble formalism. It is used in this work to study mechanical deformation of model peptides. Results are presented in the form of force-extension curves and the corresponding potentials of mean force. The application of this proposed technique is further generalized to other biological systems; results will be presented for ion transport through protein channels, base stacking in nucleic acids and hybridization of DNA strands. [1]. F. Wang and D. P. Landau, Phys. Rev. Lett., 86, 2050 (2001). [2]. N. Rathore, T. A. Knotts IV and J. J. de Pablo, Biophys. J., Dec. (2003).
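The flat-histogram idea underlying both CTDOS and EXEDOS is the Wang-Landau random walk of Ref. [1]. The sketch below applies it to a toy system chosen so the answer is known exactly: n independent spins with E equal to the number of up spins, for which g(E) = C(n, E). The flatness threshold and modification-factor schedule are illustrative choices.

```python
import math
import random

random.seed(3)

def wang_landau(n=10, f_final=1e-4):
    """Wang-Landau sampling of ln g(E) for n non-interacting spins,
    E = number of up spins (exact density of states is C(n, E))."""
    lng = [0.0] * (n + 1)
    hist = [0] * (n + 1)
    spins = [random.randint(0, 1) for _ in range(n)]
    E = sum(spins)
    lnf = 1.0
    while lnf > f_final:
        for _ in range(10_000):
            i = random.randrange(n)
            Enew = E + (1 - 2 * spins[i])          # flip spin i: E changes by +-1
            # accept with probability min(1, g(E) / g(Enew))
            if math.log(random.random()) < lng[E] - lng[Enew]:
                spins[i] ^= 1
                E = Enew
            lng[E] += lnf                          # update at the current state
            hist[E] += 1
        if min(hist) > 0.8 * sum(hist) / len(hist):   # histogram flat enough
            lnf /= 2.0
            hist = [0] * (n + 1)
    return lng
```

Since only ratios of g are meaningful, results are read off as differences such as lng[E] - lng[0].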

Rathore, Nitin; Knotts, Thomas A.; de Pablo, Juan J.

2004-03-01

262

Non-unitary Quantum Monte Carlo method for transport of atomic states through solids

NASA Astrophysics Data System (ADS)

We present a new quantum trajectory Monte Carlo (QTMC) method describing the time development of the internal state of fast highly charged ions subject to collisions and to spontaneous radiative decay during transport through solids. Our method describes both the buildup of coherences and the decoherence of the open quantum system due to the interaction with its environment. The dynamics of the reduced density matrix is governed by a Lindblad master equation that can be solved in terms of QTMC sampling [1]. For systems involving a high-dimensional Hilbert space the QTMC method is advantageous in terms of computer storage compared to a direct solution of the underlying Lindblad master equation. In practice, however, the standard Lindblad equation can be of limited value because it describes strictly unitary time transformations of the reduced density matrix. We have developed a generalized non-unitary Lindblad form (and its QTMC implementation) for atomic systems in which only finite subspaces can be represented within any realistic basis size and the coupling to the complement cannot be neglected. [1] T. Minami, et. al., Phys. Rev. A 67, 022902 (2003).
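A minimal, deliberately coherence-free illustration of the trajectory idea: for pure spontaneous decay, the quantum-jump unraveling reduces to Bernoulli jump tests, and the ensemble average recovers the exponential master-equation population. The numbers are illustrative; none of the Lindblad machinery of the paper is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

def decay_trajectories(gamma=1.0, t=1.0, dt=1e-3, n_traj=10_000):
    """Quantum-jump unraveling of spontaneous decay of a two-level atom:
    each trajectory starts excited and jumps to the ground state with
    probability gamma*dt per step; averaging over trajectories recovers
    the master-equation population exp(-gamma*t)."""
    excited = np.ones(n_traj, dtype=bool)
    for _ in range(int(t / dt)):
        jump = excited & (rng.random(n_traj) < gamma * dt)
        excited &= ~jump
    return excited.mean()
```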

Seliger, Marek; Minami, Tatsuya; Reinhold, Carlos O.; Burgdorfer, Joachim

2004-05-01

263

Monte Carlo calculations are increasingly used to assess stray radiation dose to healthy organs of proton therapy patients and estimate the risk of secondary cancer. Among the secondary particles, neutrons are of primary concern due to their high relative biological effectiveness. The validation of Monte Carlo simulations for out-of-field neutron doses remains, however, a major challenge to the community. Therefore this work focused on developing a global experimental approach to test the reliability of the MCNPX models of two proton therapy installations operating at 75 and 178 MeV for ocular and intracranial tumor treatments, respectively. The method consists of comparing Monte Carlo calculations against experimental measurements of: (a) neutron spectrometry inside the treatment room, (b) neutron ambient dose equivalent at several points within the treatment room, (c) secondary organ-specific neutron doses inside the Rando-Alderson anthropomorphic phantom. Results have proven that Monte Carlo models correctly reproduce secondary neutrons within the two proton therapy treatment rooms. Noticeable differences between experimental measurements and simulations were nonetheless observed, especially at the highest beam energy. The study demonstrated the need for improved measurement tools, especially at the high neutron energy range, and more accurate physical models and cross sections within the Monte Carlo code to correctly assess secondary neutron doses in proton therapy applications. PMID:24800943

Farah, J; Martinetti, F; Sayah, R; Lacoste, V; Donadille, L; Trompier, F; Nauraye, C; De Marzi, L; Vabre, I; Delacroix, S; Hérault, J; Clairand, I

2014-06-01

264

NASA Astrophysics Data System (ADS)

Monte Carlo calculations are increasingly used to assess stray radiation dose to healthy organs of proton therapy patients and estimate the risk of secondary cancer. Among the secondary particles, neutrons are of primary concern due to their high relative biological effectiveness. The validation of Monte Carlo simulations for out-of-field neutron doses remains, however, a major challenge to the community. Therefore this work focused on developing a global experimental approach to test the reliability of the MCNPX models of two proton therapy installations operating at 75 and 178 MeV for ocular and intracranial tumor treatments, respectively. The method consists of comparing Monte Carlo calculations against experimental measurements of: (a) neutron spectrometry inside the treatment room, (b) neutron ambient dose equivalent at several points within the treatment room, (c) secondary organ-specific neutron doses inside the Rando–Alderson anthropomorphic phantom. Results have proven that Monte Carlo models correctly reproduce secondary neutrons within the two proton therapy treatment rooms. Noticeable differences between experimental measurements and simulations were nonetheless observed, especially at the highest beam energy. The study demonstrated the need for improved measurement tools, especially at the high neutron energy range, and more accurate physical models and cross sections within the Monte Carlo code to correctly assess secondary neutron doses in proton therapy applications.

Farah, J.; Martinetti, F.; Sayah, R.; Lacoste, V.; Donadille, L.; Trompier, F.; Nauraye, C.; De Marzi, L.; Vabre, I.; Delacroix, S.; Hérault, J.; Clairand, I.

2014-06-01

265

The shock Hugoniot of deuterium at pressures above 1 Mbar is calculated by the path-integral Monte Carlo method without introducing additional physical assumptions and approximations. The results obtained are compared to calculations by other authors, various theoretical models, and experimental data.

Filinov, V.S.; Levashov, P.R.; Fortov, V.E. [Institute for High Energy Densities, Associated Institute for High Temperatures, Russian Academy of Sciences, Izhorskaya ul. 13/19, Moscow, 125412 (Russian Federation); Bonitz, M. [Institute of Theoretical Physics and Astrophysics, University of Kiel, Kiel (Germany)

2005-08-15

266

The purpose of this work was to extend the verification of Monte Carlo based methods for estimating radiation dose in computed tomography (CT) exams beyond a single CT scanner to a multidetector CT (MDCT) scanner, and from cylindrical CTDI phantom measurements to both cylindrical and physical anthropomorphic phantoms. Both cylindrical and physical anthropomorphic phantoms were scanned on an MDCT under

J. J. DeMarco; C. H. Cagnon; D. D. Cody; D. M. Stevens; C. H. McCollough; J. O'Daniel; M. F. McNitt-Gray

2005-01-01

267

Quantitative analysis of cathodoluminescence phenomena in InGaN/GaN QW by Monte Carlo method

NASA Astrophysics Data System (ADS)

In this paper, cathodoluminescence from InGaN/GaN single quantum wells grown on the facets of the GaN nano-pyramids has been quantitatively studied by Monte Carlo method. The influence of primary electron beam energy and the nanopyramid angle on generation of electron-hole pairs in individual parts of nanostructure has been studied by Monte Carlo simulations. The evolution of the GaN- and InGaN-related cathodoluminescence spectral lines with primary electron beam energy and angle of incidence has been assessed from the recombination rates in individual parts of the structure and compared with cathodoluminescence spectra measured at various beam energies. The possibility to determine the diffusion length of generated carriers in the structures like InGaN/GaN quantum wells using developed Monte Carlo simulator and CL measurements has been demonstrated.

Priesol, J.; Šatka, A.; Uherek, F.; Donoval, D.; Shields, P.; Allsopp, D. W. E.

2013-03-01

268

NASA Astrophysics Data System (ADS)

This paper presents a description of the mathematical model of a video frame processing method. The basis of the model is a coordinate-system rotation matrix built from the direction cosines of one coordinate system relative to another. The estimation of the direction cosines via the Monte Carlo method is described. An experimental method for determining the rotation matrix parameters and their errors is also described.
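As a generic illustration of the two ingredients, a direction-cosine (rotation) matrix estimated from vector measurements and Monte Carlo propagation of the measurement noise, the sketch below uses an SVD-based least-squares fit (the Kabsch method). The fit method and all parameters are assumptions for illustration, not necessarily the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(6)

def estimate_rotation(old_vecs, new_vecs):
    """Least-squares rotation (direction-cosine matrix) mapping old-frame
    vectors to new-frame vectors, via SVD of the cross-covariance (Kabsch)."""
    u, _, vt = np.linalg.svd(new_vecs.T @ old_vecs)
    d = np.sign(np.linalg.det(u @ vt))          # enforce a proper rotation
    return u @ np.diag([1.0, 1.0, d]) @ vt

def mc_rotation_error(r_true, n_trials=500, sigma=0.01):
    """Monte Carlo estimate of how measurement noise propagates into the
    recovered direction cosines: repeat the fit over noisy realizations
    and average the worst-case entry error."""
    old = rng.normal(size=(20, 3))
    errs = []
    for _ in range(n_trials):
        new = old @ r_true.T + rng.normal(0.0, sigma, (20, 3))
        errs.append(np.abs(estimate_rotation(old, new) - r_true).max())
    return float(np.mean(errs))
```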

Novalov, A. A.; Nikitushkin, R. A.; Boldacheva, L. A.

2011-12-01

269

A discussion on validity of the diffusion theory by Monte Carlo method

NASA Astrophysics Data System (ADS)

Diffusion theory has been widely used as the basis of experiments and methods for determining the optical properties of biological tissues. A simple analytical solution can be obtained easily from the diffusion equation after a series of approximations. This invites a misinterpretation of the analytical solution: that if several semi-infinite bio-tissues have the same effective attenuation coefficient, the distribution of light fluence in those tissues will be the same. To assess the validity of this assumption, the depth-resolved internal fluence of several semi-infinite biological tissues with the same effective attenuation coefficient was simulated under a wide collimated beam by the Monte Carlo method under different conditions. The influence of the tissue refractive index on the distribution of light fluence was also examined in detail. Our results showed that when several bio-tissues with the same effective attenuation coefficient also share the same refractive index, the depth-resolved internal fluence is the same; otherwise, it is not. A change in the refractive index of a tissue affects the depth distribution of light within it. The refractive index is therefore an important optical property of tissue and should be taken into account when using the diffusion approximation.

Peng, Dong-Qing; Li, Hui; Xie, Shusen

2008-12-01

270

Monte Carlo models of proton therapy treatment heads are being used to improve beam delivery systems and to calculate the radiation field for patient dose calculations. The achievable accuracy of the model depends on the exact knowledge of the treatment head geometry and time structure, the material characteristics, and the underlying physics. This work aimed at studying the uncertainties in treatment head simulations for passive scattering proton therapy. The sensitivities of spread-out Bragg peak (SOBP) dose distributions on material densities, mean ionization potentials, initial proton beam energy spread and spot size were investigated. An improved understanding of the nature of these parameters may help to improve agreement between calculated and measured SOBP dose distributions and to ensure that the range, modulation width, and uniformity are within clinical tolerance levels. Furthermore, we present a method to make small corrections to the uniformity of spread-out Bragg peaks by utilizing the time structure of the beam delivery. In addition, we re-commissioned the models of the two proton treatment heads located at our facility using the aforementioned correction methods presented in this paper.

Bednarz, Bryan; Lu, Hsiao-Ming; Engelsman, Martijn; Paganetti, Harald

2011-01-01

271

LAPS parallel data and communication for particle and Monte Carlo methods

NASA Astrophysics Data System (ADS)

LAPS provides the parallel data structures and communication infrastructure for plasma simulation using particle and Monte Carlo methods. This supplements the parallel data and communication provided by PETSc for grid-based PDE solvers. They nevertheless share non-overlapping block-structured grids with three-dimensional domain decomposition, which are generated by the LAPS gridding package using Winslow/Monge-Kantorovich methods. The connectivity matrix of the 3D domain decomposition sets the nearest-neighbor communication pattern. The communication buffers store the states of boundary-crossing markers (particles) in linked lists. Standard MPI send/recv calls exchange the marker information in these dynamically assembled lists. As an initial application, we will implement a parallel electrostatic particle-in-cell solver to compute the plasma transport in a Field Reversed Configuration (FRC) and a tokamak scrape-off-layer (SOL). Full particle orbits are integrated for the FRC, while the drift-kinetic equation is solved for the tokamak SOL. Particular attention will be devoted to the ambipolar potential due to non-integrable ion orbits in the FRC and drift orbits crossing the magnetic separatrix in a tokamak.

Corbetta, Alessandro; Delzanno, Gia Luca; Guo, Zehua; Srinivasan, Bhuvana; Tang, Xianzhu

2011-11-01

272

Uncertainty quantification through the Monte Carlo method in a cloud computing setting

NASA Astrophysics Data System (ADS)

The Monte Carlo (MC) method is the most common technique used for uncertainty quantification, due to its simplicity and good statistical results. However, its computational cost is extremely high, and, in many cases, prohibitive. Fortunately, the MC algorithm is easily parallelizable, which allows its use in simulations where the computation of a single realization is very costly. This work presents a methodology for the parallelization of the MC method, in the context of cloud computing. This strategy is based on the MapReduce paradigm, and allows an efficient distribution of tasks in the cloud. This methodology is illustrated on a problem of structural dynamics that is subject to uncertainties. The results show that the technique is capable of producing good results for low-order statistical moments. It is shown that even a simple problem may require many realizations for convergence of histograms, which makes the cloud computing strategy very attractive (due to its high scalability and low cost). Additionally, the results regarding processing time and storage space usage allow one to qualify this new methodology as a solution for simulations that require a number of MC realizations beyond the standard.
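The structure the paper exploits is that each realization is an independent "map" task and the moments are a cheap "reduce". The sketch below keeps that shape with a hypothetical one-degree-of-freedom oscillator whose stiffness is uncertain; swapping the built-in `map` for `ProcessPoolExecutor.map`, or for a cloud MapReduce mapper, distributes the expensive phase without touching the reduce step.

```python
import random
import statistics

def realization(seed):
    """One MC realization (the 'map' task): natural frequency of a toy
    oscillator with unit mass and uncertain stiffness k ~ N(100, 10^2)."""
    rng = random.Random(seed)          # independent stream per task
    k = rng.gauss(100.0, 10.0)
    return k ** 0.5                    # omega = sqrt(k/m), m = 1

def mc_moments(n, mapper=map):
    """The 'reduce' step: collect realizations and form low-order moments."""
    xs = list(mapper(realization, range(n)))
    return statistics.mean(xs), statistics.stdev(xs)
```

Seeding each task by its index keeps the realizations reproducible no matter how the map phase is distributed.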

Cunha, Americo; Nasser, Rafael; Sampaio, Rubens; Lopes, Hélio; Breitman, Karin

2014-05-01

273

NASA Astrophysics Data System (ADS)

Nowadays, Monte Carlo models of proton therapy treatment heads are being used to improve beam delivery systems and to calculate the radiation field for patient dose calculations. The achievable accuracy of the model depends on the exact knowledge of the treatment head geometry and time structure, the material characteristics, and the underlying physics. This work aimed at studying the uncertainties in treatment head simulations for passive scattering proton therapy. The sensitivities of spread-out Bragg peak (SOBP) dose distributions on material densities, mean ionization potentials, initial proton beam energy spread and spot size were investigated. An improved understanding of the nature of these parameters may help to improve agreement between calculated and measured SOBP dose distributions and to ensure that the range, modulation width, and uniformity are within clinical tolerance levels. Furthermore, we present a method to make small corrections to the uniformity of spread-out Bragg peaks by utilizing the time structure of the beam delivery. In addition, we re-commissioned the models of the two proton treatment heads located at our facility using the aforementioned correction methods presented in this paper.

Bednarz, Bryan; Lu, Hsiao-Ming; Engelsman, Martijn; Paganetti, Harald

2011-05-01

274

Variational Monte Carlo Methods for Strongly Correlated Quantum Systems on Multileg Ladders

NASA Astrophysics Data System (ADS)

Quantum mechanical systems of strongly interacting particles in two dimensions comprise a realm of condensed matter physics for which there remain many unanswered theoretical questions. In particular, the most formidable challenges may lie in cases where the ground states show no signs of ordering, break no symmetries, and support many gapless excitations. Such systems are known to exhibit exotic, disordered ground states that are notoriously difficult to study analytically using traditional perturbation techniques or numerically using the most recent methods (e.g., tensor network states) due to the large amount of spatial entanglement. Slave particle descriptions provide a glimmer of hope in the attempt to capture the fundamental, low-energy physics of these highly non-trivial phases of matter. To this end, this dissertation describes the construction and implementation of trial wave functions for use with variational Monte Carlo techniques that can easily model slave particle states. While these methods are extremely computationally tractable in two dimensions, we have applied them here to quasi-one-dimensional systems so that the results of other numerical techniques, such as the density matrix renormalization group, can be directly compared to those determined by the trial wave functions and so that exclusively one-dimensional analytic approaches, namely bosonization, can be employed. While the focus here is on the use of variational Monte Carlo, the sum of these different numerical and analytical tools has yielded a remarkable amount of insight into several exotic quantum ground states. In particular, the results of research on the d-wave Bose liquid phase, an uncondensed state of strongly correlated hard-core bosons living on the square lattice whose wave function exhibits a d-wave sign structure, and the spin Bose-metal phase, a spin-1/2, SU(2) invariant spin liquid of strongly correlated spins living on the triangular lattice, will be presented. 
Both phases support gapless excitations along surfaces in momentum space in two spatial dimensions and at incommensurate wave vectors in quasi-one dimension, where we have studied them on three- and four-leg ladders. An extension of this work to the study of d-wave correlated itinerant electrons will be discussed.

Block, Matthew S.

275

NASA Astrophysics Data System (ADS)

A reverse Monte Carlo (RMC) method is developed to obtain the energy loss function (ELF) and optical constants from a measured reflection electron energy-loss spectroscopy (REELS) spectrum by an iterative Monte Carlo (MC) simulation procedure. The method combines the simulated annealing method, i.e., a Markov chain Monte Carlo (MCMC) sampling of oscillator parameters, surface and bulk excitation weighting factors, and band gap energy, with a conventional MC simulation of electron interaction with solids, which acts as a single step of MCMC sampling in this RMC method. To examine the reliability of this method, we have verified that the output data of the dielectric function are essentially independent of the initial values of the trial parameters, which is a basic property of an MCMC method. The optical constants derived for SiO2 in the energy loss range of 8-90 eV are in good agreement with other available data, and the relevant bulk ELFs are checked against the oscillator-strength sum rule and the perfect-screening sum rule. Our results show that the dielectric function can be obtained by the RMC method even with a wide range of initial trial parameters. The RMC method is thus a general and effective method for determining the optical properties of solids from REELS measurements.
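The core of the RMC loop described above is a Metropolis acceptance step over oscillator parameters with a slowly decreasing temperature. The following sketch illustrates that idea on a deliberately simplified toy problem: a single-oscillator Drude energy-loss function is fitted to a synthetic target spectrum. The model, the target, and all parameter values are illustrative assumptions, not the authors' actual dielectric model or data.

```python
import math
import random

random.seed(0)

def drude_elf(energy, e_p, gamma):
    # Toy single-oscillator Drude energy-loss function, Im[-1/eps]
    return e_p**2 * gamma * energy / ((energy**2 - e_p**2)**2 + (gamma * energy)**2)

# Synthetic "measured" spectrum generated with known parameters
true_ep, true_gamma = 22.0, 8.0
energies = [5.0 + 0.5 * i for i in range(100)]
target = [drude_elf(e, true_ep, true_gamma) for e in energies]

def chi2(params):
    ep, gamma = params
    return sum((drude_elf(e, ep, gamma) - t)**2 for e, t in zip(energies, target))

# Simulated-annealing Metropolis walk over the oscillator parameters
params = [15.0, 4.0]          # deliberately poor starting guess
cost = chi2(params)
temperature = 1e-3
for _ in range(20000):
    trial = [max(1e-3, p + random.gauss(0, 0.2)) for p in params]
    trial_cost = chi2(trial)
    # accept downhill moves always, uphill moves with Boltzmann probability
    if trial_cost < cost or random.random() < math.exp(-(trial_cost - cost) / temperature):
        params, cost = trial, trial_cost
    temperature *= 0.9997     # geometric cooling schedule

print(params)
```

The cooling schedule and step width here are arbitrary; in a real RMC fit they would be tuned to the spectrum being inverted.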

Da, B.; Sun, Y.; Mao, S. F.; Zhang, Z. M.; Jin, H.; Yoshikawa, H.; Tanuma, S.; Ding, Z. J.

2013-06-01

276

We present a hybrid method that combines a multilayered scaling method and a perturbation method to speed up the Monte Carlo simulation of diffuse reflectance from a multilayered tissue model with finite-size tumor-like heterogeneities. The proposed method consists of two steps. In the first step, a set of photon trajectory information generated from a baseline Monte Carlo simulation is utilized to scale the exit weight and exit distance of survival photons for the multilayered tissue model. In the second step, another set of photon trajectory information, including the locations of all collision events from the baseline simulation and the scaling result obtained from the first step, is employed by the perturbation Monte Carlo method to estimate diffuse reflectance from the multilayered tissue model with tumor-like heterogeneities. Our method is demonstrated to shorten simulation time by several orders of magnitude. Moreover, this hybrid method works for a larger range of probe configurations and tumor models than the scaling method or the perturbation method alone. PMID:22352630

Zhu, Caigang; Liu, Quan

2012-01-01

277

Statistical analysis of chemical transformation kinetics using Markov-Chain Monte Carlo methods.

For the risk assessment of chemicals intentionally released into the environment, such as pesticides, it is indispensable to investigate their environmental fate. The main characteristics in this context are transformation rates and partitioning behavior. In most cases the relevant parameters are not directly measurable but are determined indirectly from experimentally determined concentrations in various environmental compartments. Usually this is done by fitting mathematical models, which are typically nonlinear, to the observed data and thus deriving estimates of the parameter values. Statistical analysis is then used to judge the uncertainty of the estimates. Of particular interest in this context is the question whether degradation rates are significantly different from zero. The standard procedure is to use nonlinear least-squares methods to fit the models and to estimate the standard errors of the estimated parameters from Fisher's information matrix and the estimated level of measurement noise. This, however, frequently leads to counterintuitive results, as the estimated probability distributions of the parameters based on local linearization of the optimized models are often too wide or at least differ significantly in shape from the real distribution. In this paper we identify the shortcoming of this procedure and propose a statistically valid approach based on Markov-Chain Monte Carlo sampling that is appropriate for determining the real probability distribution of model parameters. The effectiveness of this method is demonstrated on three data sets. Although it is generally applicable to different problems where model parameters are to be inferred, in the present case for simplicity we restrict the discussion to the evaluation of metabolic degradation of chemicals in soil. It is shown that the method is successfully applicable to problems of different complexity. We applied it to kinetic data from compounds with one and five metabolites. Additionally, using simulated data, it is shown that the MCMC method estimates the real probability distributions of parameters well and much better than the standard optimization approach. PMID:21526818
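As a concrete illustration of this kind of analysis (not the authors' actual implementation), the sketch below fits a first-order soil degradation model C(t) = C0·exp(-kt) to synthetic noisy data with a random-walk Metropolis sampler, recovering the full posterior of the rate constant rather than a linearized standard error. All rates, noise levels, and sampling times are made-up values.

```python
import math
import random

random.seed(1)

# Synthetic first-order decay data C(t) = C0 * exp(-k t) with Gaussian noise
true_c0, true_k = 100.0, 0.05
times = [0, 7, 14, 28, 56, 100]        # hypothetical sampling days
data = [true_c0 * math.exp(-true_k * t) + random.gauss(0, 2.0) for t in times]
sigma = 2.0                            # assumed known measurement noise

def log_post(c0, k):
    # flat priors on c0 > 0 and k >= 0, Gaussian likelihood
    if c0 <= 0 or k < 0:
        return -math.inf
    return -0.5 * sum(((c0 * math.exp(-k * t) - y) / sigma) ** 2
                      for t, y in zip(times, data))

# Random-walk Metropolis over (C0, k)
c0, k = 90.0, 0.02
lp = log_post(c0, k)
samples = []
for i in range(30000):
    c0p, kp = c0 + random.gauss(0, 2.0), k + random.gauss(0, 0.005)
    lpp = log_post(c0p, kp)
    if math.log(random.random()) < lpp - lp:
        c0, k, lp = c0p, kp, lpp
    if i >= 5000:                      # discard burn-in
        samples.append(k)

k_mean = sum(samples) / len(samples)
print(k_mean)
```

The histogram of `samples` is the MCMC estimate of the degradation-rate distribution; for strongly nonlinear models it can be visibly skewed where the least-squares approximation would report a symmetric Gaussian.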

Görlitz, Linus; Gao, Zhenglei; Schmitt, Walter

2011-05-15

278

Structural properties of sodium microclusters (n=4-34) using a Monte Carlo growth method

The structural and electronic properties of small sodium clusters are investigated using a distance-dependent extension of the tight-binding (Hückel) model and a Monte Carlo growth algorithm for the search of the lowest-energy isomers. The efficiency and advantages of the Monte Carlo growth algorithm are discussed, and the building scheme of sodium microclusters around constituting seeds is explained in detail.

Romuald Poteau; Fernand Spiegelmann

1993-01-01

279

Dosimetric validation of Acuros XB with Monte Carlo methods for photon dose calculations

Purpose: The dosimetric accuracy of the recently released Acuros XB advanced dose calculation algorithm (Varian Medical Systems, Palo Alto, CA) is investigated for single radiation fields incident on homogeneous and heterogeneous geometries, and a comparison is made to the analytical anisotropic algorithm (AAA). Methods: Ion chamber measurements for the 6 and 18 MV beams within a range of field sizes (from 4.0×4.0 to 30.0×30.0 cm²) are used to validate Acuros XB dose calculations within a unit density phantom. The dosimetric accuracy of Acuros XB in the presence of lung, low-density lung, air, and bone is determined using BEAMnrc/DOSXYZnrc calculations as a benchmark. Calculations using the AAA are included for reference to a current superposition/convolution standard. Results: Basic open field tests in a homogeneous phantom reveal an Acuros XB agreement with measurement to within ±1.9% in the inner field region for all field sizes and energies. Calculations on a heterogeneous interface phantom were found to agree with Monte Carlo calculations to within ±2.0% (σ_MC = 0.8%) in lung (ρ = 0.24 g/cm³) and within ±2.9% (σ_MC = 0.8%) in low-density lung (ρ = 0.1 g/cm³). In comparison, differences of up to 10.2% and 17.5% in lung and low-density lung were observed in the equivalent AAA calculations. Acuros XB dose calculations performed on a phantom containing an air cavity (ρ = 0.001 g/cm³) were found to be within the range of ±1.5% to ±4.5% of the BEAMnrc/DOSXYZnrc calculated benchmark (σ_MC = 0.8%) in the tissue above and below the air cavity. A comparison of Acuros XB dose calculations performed on a lung CT dataset with a BEAMnrc/DOSXYZnrc benchmark shows agreement within ±2%/2 mm and indicates that the remaining differences are primarily a result of differences in physical material assignments within a CT dataset. Conclusions: By considering the fundamental particle interactions in matter based on theoretical interaction cross sections, the Acuros XB algorithm is capable of modeling radiotherapy dose deposition with accuracy only previously achievable with Monte Carlo techniques.

Bush, K.; Gagne, I. M.; Zavgorodni, S.; Ansbacher, W.; Beckham, W. [Department of Medical Physics, British Columbia Cancer Agency-Vancouver Island Center, Victoria, British Columbia V8R 6V5 (Canada)

2011-04-15

280

NASA Astrophysics Data System (ADS)

Using the homogeneous electron gas (HEG) as a model, we investigate the sources of error in the "initiator" adaptation to full configuration interaction quantum Monte Carlo (i-FCIQMC), with a view to accelerating convergence. In particular, we find that the fixed-shift phase, where the walker number is allowed to grow slowly, can be used to effectively assess stochastic and initiator error. Using this approach we provide simple explanations for the internal parameters of an i-FCIQMC simulation. We exploit the consistent basis sets and adjustable correlation strength of the HEG to analyze properties of the algorithm, and present finite basis benchmark energies for N = 14 over a range of densities 0.5 ≤ r_s ≤ 5.0 a.u. A single-point extrapolation scheme is introduced to produce complete basis energies for 14, 38, and 54 electrons. It is empirically found that, in the weakly correlated regime, the computational cost scales linearly with the plane wave basis set size, which is justifiable on physical grounds. We expect the fixed-shift strategy to reduce the computational cost of many i-FCIQMC calculations of weakly correlated systems. In addition, we provide benchmarks for the electron gas, to be used by other quantum chemical methods in exploring periodic solid state systems.

Shepherd, James J.; Booth, George H.; Alavi, Ali

2012-06-01

281

NASA Astrophysics Data System (ADS)

A new class of two-dimensional magnetic materials Cu9X2(cpa)6·xH2O (cpa = 2-carboxypentonic acid; X = F, Cl, Br) was recently fabricated in which Cu sites form a triangular kagome lattice (TKL). As the simplest model of geometric frustration in such a system, we study the thermodynamics of Ising spins on the TKL using exact analytic methods as well as Monte Carlo simulations. We present the free energy, internal energy, specific heat, entropy, sublattice magnetizations, and susceptibility. We describe the rich phase diagram of the model as a function of coupling constants, temperature, and applied magnetic field. For frustrated interactions in the absence of applied field, the ground state is a spin liquid phase with residual entropy per spin s0/kB = (1/9)ln 72 ≈ 0.4752. In weak applied field, the system maps to the dimer model on a honeycomb lattice, with residual entropy 0.0359 per spin and quasi-long-range order with power-law spin-spin correlations that should be detectable by neutron scattering. The power-law correlations become exponential at finite temperatures, but the correlation length may still be long.
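For readers unfamiliar with the Monte Carlo side of such a study, the sketch below shows a minimal single-spin-flip Metropolis simulation of an Ising model. It uses an ordinary ferromagnetic square lattice rather than the frustrated TKL of the paper, purely to keep the geometry simple; the lattice size, coupling, and temperature are illustrative choices.

```python
import math
import random

random.seed(2)

L = 10                 # small L x L square lattice (a stand-in, not the TKL)
J, T = 1.0, 1.5        # ferromagnetic coupling; T below the 2D Ising Tc ~ 2.27
# start from the fully ordered state to keep equilibration short
spins = [[1] * L for _ in range(L)]

def local_field(i, j):
    """Sum of the four nearest-neighbour spins with periodic boundaries."""
    return (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
            + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])

for sweep in range(2000):
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        dE = 2.0 * J * spins[i][j] * local_field(i, j)  # energy cost of flipping
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] *= -1                           # Metropolis accept

m = abs(sum(sum(row) for row in spins)) / (L * L)
print(m)   # magnetization per spin remains close to 1 below Tc
```

A frustrated lattice such as the TKL would use the same accept/reject machinery, only with antiferromagnetic couplings on a different neighbour graph; the observables quoted in the abstract (entropy, sublattice magnetizations, susceptibility) are accumulated from such sweeps.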

Loh, Y. L.; Yao, D. X.; Carlson, E. W.

2008-04-01

282

Monte Carlo analysis of thermochromatography as a fast separation method for nuclear forensics

Nuclear forensic science has become increasingly important for global nuclear security, and enhancing the timeliness of forensic analysis has been established as an important objective in the field. New, faster techniques must be developed to meet this objective. Current approaches for the analysis of minor actinides, fission products, and fuel-specific materials require time-consuming chemical separation coupled with measurement through either nuclear counting or mass spectrometry. These very sensitive measurement techniques can be hindered by impurities or incomplete separation in even the most painstaking chemical separations. High-temperature gas-phase separation, or thermochromatography, has been used in the past for rapid separations in the study of newly created elements and as a basis for the chemical classification of those elements. This work examines the potential for rapid separation of gaseous species to be applied in nuclear forensic investigations. Monte Carlo modeling has been used to evaluate the potential utility of the thermochromatographic separation method, albeit this assessment is necessarily limited due to the lack of available experimental data for validation.

Hall, Howard L [ORNL

2012-01-01

283

Deterministic flows of order-parameters in stochastic processes of quantum Monte Carlo method

NASA Astrophysics Data System (ADS)

In terms of the stochastic process of the quantum-mechanical version of the Markov chain Monte Carlo method (MCMC), we analytically derive macroscopically deterministic flow equations of order parameters such as spontaneous magnetization in infinite-range (d(= ∞)-dimensional) quantum spin systems. By means of the Trotter decomposition, we consider the transition probability of Glauber-type dynamics of microscopic states for the corresponding (d + 1)-dimensional classical system. Under the static approximation, differential equations with respect to macroscopic order parameters are explicitly obtained from the master equation that describes the microscopic law. In the steady state, we show that the equations are identical to the saddle point equations for the equilibrium state of the same system. The equation for the dynamical Ising model is recovered in the classical limit. We also check the validity of the static approximation by means of computer simulations for finite-size systems and discuss several possible extensions of our approach to disordered spin systems for statistical-mechanical informatics. In particular, we use our procedure to evaluate the decoding process of Bayesian image restoration. With the assistance of the concept of dynamical replica theory (DRT), we derive the zero-temperature flow equation of the image restoration measure, which shows non-monotonic behaviour in its time evolution.

Inoue, Jun-ichi

2010-06-01

284

NASA Astrophysics Data System (ADS)

The electronic structure and dielectric property in electronic ferroelectricity, where electric polarization is driven by an electronic charge order without inversion symmetry, are studied. Motivated by layered iron oxides, we focus on the roles of quantum fluctuation in ferroelectricity in a paired-triangular lattice. Three types of V-t model, where the intersite Coulomb interaction V and the electron transfer t for spinless fermions are taken into account, are examined by the variational Monte Carlo method with the Gutzwiller-type correlation factor. It is shown that the electron transfer between the triangular layers, corresponding to the interlayer polarization fluctuation, promotes a three-fold charge order associated with electric polarization. This result is in sharp contrast to the behaviour observed in hydrogen-bonded ferroelectrics and quantum paraelectric oxides, where the ferroelectric order is suppressed by quantum fluctuation. The spin degree of freedom of electrons and a realistic interlayer geometry for layered iron oxides further stabilize the polar charge ordered state. The implications of the numerical results for layered iron oxides are discussed.

Watanabe, Tsutomu; Ishihara, Sumio

2010-11-01

285

Applications of Monte Carlo methods for the analysis of the MHTGR case of the PROTEUS benchmark

Monte Carlo methods, as implemented in the MCNP code, have been used to analyze the neutronics characteristics of benchmarks related to Modular High Temperature Gas-Cooled Reactors. The benchmarks are idealized versions of the Japanese (VHTRC) and Swiss (PROTEUS) facilities and an actual configuration of the PROTEUS Configuration I experiment. The purpose of the unit cell benchmarks is to compare multiplication constants, critical bucklings, migration lengths, reaction rates, and spectral indices. The purpose of the full reactor benchmarks is to compare multiplication constants, reaction rates, spectral indices, neutron balances, reaction rate profiles, temperature coefficients of reactivity, and effective delayed neutron fractions. All of these parameters can be calculated by MCNP, which can provide a very detailed model of the geometry of the configurations, from fuel particles to entire fuel assemblies, while using a continuous energy model. These characteristics make MCNP a very useful tool for analyzing these MHTGR benchmarks. We have used the latest version of MCNP, 4.x, eld = 01/12/93, with an ENDF/B-V cross section library. This library does not yet contain temperature-dependent resonance materials, so all calculations correspond to room temperature, T = 300 K. Two separate reports were made -- one for the VHTRC, the other for the PROTEUS benchmark.

Difilippo, F.C.

1994-04-01

286

NASA Astrophysics Data System (ADS)

The numerical accuracy of the results obtained using the multicanonical Monte Carlo (MMC) algorithm is strongly dependent on the choice of the step size, which is the range of the MMC perturbation from one sample to the next. The proper choice of the MMC step size leads to much faster statistical convergence of the algorithm for the calculation of rare events. One relevant application of this method is the calculation of the probability of the bins in the tail of the discretized probability density function of the differential group delay between the principal states of polarization due to polarization mode dispersion. We observed that the optimum MMC performance is strongly correlated with the inflection point of the actual transition rate from one bin to the next. We also observed that the optimum step size does not correspond to any specific value of the acceptance rate of the transitions in MMC. The results of this study can be applied to the improvement of the performance of MMC applied to the calculation of other rare events of interest in optical communications, such as the bit error ratio and pattern dependence in optical fiber systems with coherent receivers.

Yamamoto, Alexandre Y.; Oliveira, Aurenice M.; Lima, Ivan T.

2014-05-01

287

Monte Carlo methods for radiative transfer in quasi-isothermal participating media

NASA Astrophysics Data System (ADS)

Based on the superposition principle, we propose in this study a Monte Carlo (MC) formulation for radiative transfer in quasi-isothermal media which consists in directly computing the difference between the actual radiative field and the equilibrium radiative field at the minimum temperature in the medium. This shift formulation is implemented for the prediction of radiative fluxes and volumetric powers in a combined free convection-radiation problem where a differentially heated cubical cavity is filled with air with a small amount of H2O and CO2. High resolution spectra are used to describe radiative properties of the gas in this 3D configuration. We show that, compared to the standard analog MC method, the shift approach leads to a huge saving of required computational times to reach a given convergence level. In addition, this approach is compared to reciprocal MC formulations and is shown to be more efficient for the prediction of wall fluxes but slightly less efficient for volumetric powers.

Soucasse, Laurent; Rivière, Philippe; Soufiani, Anouar

2013-10-01

288

Improving Bayesian analysis for LISA Pathfinder using an efficient Markov Chain Monte Carlo method

NASA Astrophysics Data System (ADS)

We present a parameter estimation procedure based on a Bayesian framework by applying a Markov Chain Monte Carlo algorithm to the calibration of the dynamical parameters of the LISA Pathfinder satellite. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment in order to ensure an effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm with an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy, but one moves in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher information matrix. The algorithm proposing jumps in the eigen-space of the Fisher information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to LISA Pathfinder data. For this experiment, we return parameter values that are all within ~1σ of the injected values. When we analyse the accuracy of our parameter estimation in terms of the effect it has on the force per unit mass noise, we find that the induced errors are three orders of magnitude less than the expected experimental uncertainty in the power spectral density.
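The eigen-space proposal strategy can be illustrated in a few lines: for a toy two-parameter Gaussian posterior with strong correlations, jumping along the eigenvectors of the Fisher (inverse-covariance) matrix, with step sizes scaled by the inverse square root of the eigenvalues, keeps the acceptance rate high where axis-aligned jumps would struggle. Everything below (the matrix, the target, the step scaling) is an illustrative assumption, not the LISA Pathfinder pipeline.

```python
import math
import random

random.seed(3)

# Toy 2-parameter quadratic log-likelihood with strongly correlated parameters.
# The Fisher matrix is the inverse covariance of the target Gaussian.
F = [[4.0, 3.6], [3.6, 4.0]]   # large condition number: hard for axis-aligned jumps
mu = [1.0, 2.0]

def log_like(x):
    d = [x[0] - mu[0], x[1] - mu[1]]
    return -0.5 * (F[0][0]*d[0]*d[0] + 2*F[0][1]*d[0]*d[1] + F[1][1]*d[1]*d[1])

# Eigen-decomposition of the symmetric 2x2 matrix [[a, b], [b, a]], by hand:
# eigenvalues a + b and a - b, eigenvectors (1, 1)/sqrt(2) and (1, -1)/sqrt(2)
lam1, lam2 = F[0][0] + F[0][1], F[0][0] - F[0][1]   # 7.6 and 0.4
v1 = (1 / math.sqrt(2), 1 / math.sqrt(2))
v2 = (1 / math.sqrt(2), -1 / math.sqrt(2))

x = [0.0, 0.0]
ll = log_like(x)
accepted, samples = 0, []
n_steps = 40000
for _ in range(n_steps):
    # propose a jump along one eigenvector, scaled by 1/sqrt(eigenvalue)
    v, lam = (v1, lam1) if random.random() < 0.5 else (v2, lam2)
    s = random.gauss(0, 1.0) / math.sqrt(lam)
    xp = [x[0] + s * v[0], x[1] + s * v[1]]
    llp = log_like(xp)
    if math.log(random.random()) < llp - ll:
        x, ll = xp, llp
        accepted += 1
    samples.append(x[0])

rate = accepted / n_steps
mean0 = sum(samples[5000:]) / len(samples[5000:])
print(rate, mean0)
```

Because each proposal is matched to the posterior width along its own eigen-direction, the acceptance rate stays well away from zero even though the two parameters are almost degenerate.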

Ferraioli, Luigi; Porter, Edward K.; Armano, Michele; Audley, Heather; Congedo, Giuseppe; Diepholz, Ingo; Gibert, Ferran; Hewitson, Martin; Hueller, Mauro; Karnesis, Nikolaos; Korsakova, Natalia; Nofrarias, Miquel; Plagnol, Eric; Vitale, Stefano

2014-02-01

289

Summarizing the output of a Monte Carlo method for uncertainty evaluation

NASA Astrophysics Data System (ADS)

The ‘Guide to the Expression of Uncertainty in Measurement’ (GUM) requires that the way a measurement uncertainty is expressed should be transferable. It should be possible to use directly the uncertainty evaluated for one measurement as a component in evaluating the uncertainty for another measurement that depends on the first. Although the method for uncertainty evaluation described in the GUM meets this requirement of transferability, it is less clear how this requirement is to be achieved when GUM Supplement 1 is applied. That Supplement uses a Monte Carlo method to provide a sample composed of many values drawn randomly from the probability distribution for the measurand. Such a sample does not constitute a convenient way of communicating knowledge about the measurand. In this paper consideration is given to obtaining a more compact summary of such a sample that preserves information about the measurand contained in the sample and can be used in a subsequent uncertainty evaluation. In particular, a coverage interval for the measurand that corresponds to a given coverage probability is often required. If the measurand is characterized by a probability distribution that is not close to being Gaussian, sufficient information has to be conveyed to enable such a coverage interval to be computed reliably. A quantile function in the form of an extended lambda distribution can provide adequate approximations in a number of cases. This distribution is defined by a fixed number of adjustable parameters determined, for example, by matching the moments of the distribution to those calculated in terms of the sample of values. In this paper, alternative flexible models for the quantile function and methods for determining a quantile function from a sample of values are proposed for meeting the above needs.
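One concrete way to turn a GUM Supplement 1 output sample into a transferable summary is to report the shortest coverage interval for a given coverage probability, computed directly from the sorted sample. The sketch below does this for a deliberately skewed, non-Gaussian toy sample; the distribution and sample size are illustrative assumptions, not a real measurement model.

```python
import random

random.seed(4)

# Stand-in for a GUM Supplement 1 output: a large Monte Carlo sample of the
# measurand, here deliberately non-Gaussian (Gaussian plus an exponential effect)
sample = sorted(random.gauss(10.0, 0.5) + random.expovariate(2.0)
                for _ in range(200000))

def shortest_coverage_interval(s, p=0.95):
    """Shortest interval containing fraction p of a sorted sample."""
    n = len(s)
    k = int(round(p * n))
    # slide a window of k points and keep the narrowest one
    best = min(range(n - k), key=lambda i: s[i + k] - s[i])
    return s[best], s[best + k]

lo, hi = shortest_coverage_interval(sample)
print(lo, hi)
```

For a skewed distribution such as this one, the shortest 95% interval is visibly asymmetric about the mean, which is exactly the information a symmetric "mean ± expanded uncertainty" summary would lose; a fitted quantile function, as proposed in the paper, carries the same information in a handful of parameters.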

Harris, P. M.; Matthews, C. E.; Cox, M. G.; Forbes, A. B.

2014-06-01

290

Four different probabilistic risk assessment methods were compared using the data from the Sangamo Weston/Lake Hartwell Superfund site. These were one-dimensional Monte Carlo, two-dimensional Monte Carlo considering uncertainty in the concentration term, two-dimensional Monte Carlo considering uncertainty in ingestion rate, and microexposure event analysis. Estimated high-end risks ranged from 2.0×10 to 3.3×10. Microexposure event analysis produced a lower risk estimate
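The two-dimensional Monte Carlo idea mentioned above separates uncertainty from variability with nested loops: an outer loop samples the uncertain parameter, and an inner loop samples inter-individual variability, yielding a distribution of high-end risk estimates rather than a single number. The sketch below illustrates the structure; all distributions, the slope factor, and the body weight are invented for illustration and are not the Superfund data set.

```python
import random
import statistics

random.seed(7)

SLOPE = 1e-3           # hypothetical cancer slope factor, (mg/kg-day)^-1

high_end_risks = []
for _ in range(200):                                  # outer loop: uncertainty
    mean_conc = random.lognormvariate(0.0, 0.5)       # mg/kg, uncertain mean concentration
    risks = []
    for _ in range(1000):                             # inner loop: variability
        intake = random.lognormvariate(-1.0, 0.8)     # kg/day ingestion rate
        dose = mean_conc * intake / 70.0              # mg/kg-day for a 70 kg adult
        risks.append(dose * SLOPE)
    risks.sort()
    high_end_risks.append(risks[int(0.95 * len(risks))])  # 95th percentile individual

# distribution of the high-end (95th percentile) risk across uncertainty draws
print(statistics.median(high_end_risks))
```

The spread of `high_end_risks` is what a one-dimensional Monte Carlo, which lumps uncertainty and variability into a single loop, cannot report.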

Ted W. Simon

1999-01-01

291

NASA Astrophysics Data System (ADS)

A method of modelling the dynamic motion of multileaf collimators (MLCs) for intensity-modulated radiation therapy (IMRT) was developed and implemented into the Monte Carlo simulation. The simulation of the dynamic MLCs (DMLCs) was based on randomizing leaf positions during a simulation so that the number of particle histories being simulated for each possible leaf position was proportional to the monitor units delivered to that position. This approach was incorporated into an EGS4 Monte Carlo program, and was evaluated in simulating the DMLCs for Varian accelerators (Varian Medical Systems, Palo Alto, CA, USA). The MU index of each segment, which was specified in the DMLC-control data, was used to compute the cumulative probability distribution function (CPDF) for the leaf positions. This CPDF was then used to sample the leaf positions during a real-time simulation, which allowed for either step-and-shoot or sweeping-leaf motion in the beam delivery. Dose intensity maps for IMRT fields were computed using the above Monte Carlo method, with its accuracy verified by film measurements. The DMLC simulation improved the operational efficiency by eliminating the need to simulate multiple segments individually. More importantly, the dynamic motion of the leaves could be simulated more faithfully by using the above leaf-position sampling technique in the Monte Carlo simulation.
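The leaf-position sampling described above reduces to inverse-transform sampling of the segment index from the cumulative MU fractions. The sketch below shows this for a hypothetical four-segment DMLC control sequence tracking a single leaf pair; the MU indices and leaf positions are invented for illustration and are not from a real control file.

```python
import bisect
import random

random.seed(5)

# Hypothetical DMLC control points: cumulative MU index at each segment
# boundary, and a representative position (cm) of one leaf during each segment
mu_index = [0.0, 0.2, 0.5, 0.9, 1.0]   # monotonically increasing, ends at 1.0
leaf_pos = [-5.0, -2.5, 0.0, 2.5]      # position held during each segment

def sample_leaf_position():
    """Pick a segment with probability proportional to its MU fraction,
    i.e. sample from the CPDF of leaf positions."""
    u = random.random()
    seg = bisect.bisect_right(mu_index, u) - 1
    seg = min(seg, len(leaf_pos) - 1)  # guard for u landing exactly on 1.0
    return leaf_pos[seg]

counts = {}
n = 100000
for _ in range(n):
    p = sample_leaf_position()
    counts[p] = counts.get(p, 0) + 1

# segment MU weights are 0.2, 0.3, 0.4, 0.1 by construction
print({k: c / n for k, c in sorted(counts.items())})
```

In a full simulation each particle history would first draw its leaf configuration this way, so that histories accumulate per position in proportion to the delivered monitor units, for step-and-shoot and sweeping delivery alike.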

Liu, H. Helen; Verhaegen, Frank; Dong, Lei

2001-09-01

292

NASA Astrophysics Data System (ADS)

Portal monitoring radiation detectors are commonly used by steel industries in the probing and detection of radioactivity contamination in scrap metal. These portal monitors typically consist of polystyrene or polyvinyltoluene (PVT) plastic scintillating detectors, one or more photomultiplier tubes (PMT), an electronic circuit, a controller that handles data output and manipulation linking the system to a display or a computer with appropriate software and usually, a light guide. Such a portal used by the steel industry was opened and all principal materials were simulated using a Monte Carlo simulation tool (MCNP4C2). Various source-detector configurations were simulated and validated by comparison with corresponding measurements. Subsequently an experiment with a uniform cargo along with two sets of experiments with different scrap loads and radioactive sources (137Cs, 152Eu) were performed and simulated. Simulated and measured results suggested that the nature of scrap is crucial when simulating scrap load-detector experiments. Using the same simulating configuration, a series of runs were performed in order to estimate minimum alarm activities for 137Cs, 60Co and 192Ir sources for various simulated scrap densities. The minimum alarm activities as well as the positions in which they were recorded are presented and discussed.

Takoudis, G.; Xanthos, S.; Clouvas, A.; Potiriadis, C.

2010-02-01

293

Calculation of γ-quanta passage through substance with the Monte Carlo method for x-ray image simulation

NASA Astrophysics Data System (ADS)

Software developed in RFNC-VNIIEF for x-ray image simulation using the Monte Carlo method is described. The software is part of an x-ray method used for the investigation of an equation of state (in this case hydrogen isotopes: protium and deuterium) in the megabar pressure range. The interaction of γ-quanta with a substance is considered. The effect of scattered radiation on x-ray image formation is estimated.

Boriskov, G. V.; Bykov, A. I.; Volodko, A. R.; Egorov, N. I.; Pavlov, V. N.; Ronzhin, A. B.

2008-07-01

294

Effective Potential Method in Path-Integral Monte Carlo Calculation and Application to 4He at T ≤ 4 K

The effective potential method is proposed in the path integral Monte Carlo calculation for a many-body system interacting with a two-body potential. This method is formulated for bosons, fermions and distinguishable particles. There are three types of effective potentials in accordance with the statistics of the particles. This is applied to liquid 4He at the density 0.03626 mol/cm³ and the lambda transition is reproduced in

Minoru Takahashi

1986-01-01

295

NASA Astrophysics Data System (ADS)

The estimation of groundwater age has received increasing attention due to its applications in assessing the sustainability of water withdrawal from aquifers and evaluating the vulnerability of groundwater resources to near-surface or recharge water contamination. In most of the work done in the past, whether single or multiple tracers are used for groundwater dating, the uncertainties in the observed concentrations of the tracers and their decay rate constants have been neglected. Furthermore, tracers have been assumed to move at the same speed as the groundwater. In reality some of the radio-tracers or anthropogenic chemicals used for groundwater dating might undergo adsorption and desorption and move with a slower velocity than the groundwater. Also there are uncertainties in the decay rates of synthetic chemicals such as CFCs commonly used for groundwater dating. In this presentation the development of a Bayesian modeling approach using the Markov Chain Monte Carlo method for estimation of age distribution is described. The model considers the uncertainties in the measured tracer concentrations as well as the parameters affecting the concentration of tracers in the groundwater, and provides the frequency distributions of the parameters defining the groundwater age distribution. The model also incorporates the effect of the contribution of dissolution of aquifer minerals in diluting the 14C signature and the uncertainties associated with this process on inferred age distribution parameters. The results of application of the method to data collected at the La Selva Biological Station, Costa Rica, will also be presented. In this demonstration application, eight different forms of presumed groundwater age distributions have been tested, including four single-peak forms and four double-peaked forms assuming the groundwater consists of distinct young and old fractions. The performance of these presumed groundwater age forms has been evaluated in terms of their capability to predict tracer concentrations close to the observed values and also the level of certainty they provide in estimating the age-distribution parameters.

Massoudieh, A.; Sharifi, S.; Solomon, K.

2012-12-01

296

NASA Astrophysics Data System (ADS)

Our study aims to design a useful neutron signature characterization device based on 3He detectors, a standard neutron detection methodology used in homeland security applications. The research involved simulation of the generation, transport, and detection of the leakage radiation from Special Nuclear Materials (SNM). To accomplish the research goals, we use a new methodology to fully characterize a standard "1-Ci" Plutonium-Beryllium (Pu-Be) neutron source based on 3-D computational radiation transport methods, employing both deterministic SN and Monte Carlo methodologies. The computational model findings were subsequently validated through experimental measurements. The achieved results allowed us to design, build, and laboratory-test a nickel composite alloy shield that enables the neutron leakage spectrum from a standard Pu-Be source to be transformed, through neutron scattering interactions in the shield, into a very close approximation of the neutron spectrum leaking from a large, subcritical mass of Weapons Grade Plutonium (WGPu) metal. This source will make possible testing with a nearly exact reproduction of the neutron spectrum from a 6.67 kg WGPu mass equivalent, but without the expense or risk of testing detector components with real materials. Moreover, over thirty moderator materials were studied in order to characterize their neutron energy filtering potential. Specific focus was placed on establishing the limits of He-3 spectroscopy using ideal filter materials. To demonstrate our methodology, we present the optimally detected spectral differences between SNM materials (plutonium and uranium), metal and oxide, using ideal filter materials. Finally, using knowledge gained from previous studies, the design of a He-3 spectroscopy system neutron detector, simulated entirely via computational methods, is proposed to resolve the spectra from SNM neutron sources of high interest. This was accomplished by replacing ideal filters with real materials and comparing reaction rates with similar data from the ideal material suite.

Ghita, Gabriel M.

297

The Unified Monte Carlo method (UMC) has been suggested to avoid certain limitations and approximations inherent to the well-known Generalized Least Squares (GLS) method of nuclear data evaluation. This contribution reports on an investigation of the performance of the UMC method in comparison with the GLS method. This is accomplished by applying both methods to simple examples with few input values that were selected to explore various features of the evaluation process that impact upon the quality of an evaluation. Among the issues explored are: i) convergence of UMC results with the number of Monte Carlo histories and the ranges of sampled values; ii) a comparison of Monte Carlo sampling using the Metropolis scheme and a brute force approach; iii) the effects of large data discrepancies; iv) the effects of large data uncertainties; v) the effects of strong or weak model or experimental data correlations; and vi) the impact of ratio data and integral data. Comparisons are also made of the evaluated results for these examples when the input values are first transformed to comparable logarithmic values prior to performing the evaluation. Some general conclusions that are applicable to more realistic evaluation exercises are offered.
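As a sketch of the brute-force flavor of UMC discussed above (all numbers invented for illustration), the example below evaluates a single quantity from two discrepant measurements and compares the Monte Carlo posterior mean with the GLS result, which for uncorrelated data on one parameter reduces to inverse-variance weighting. For this linear Gaussian case the two methods agree.

```python
import random, math
random.seed(1)

# Two hypothetical measurements of the same quantity (value, standard deviation).
data = [(10.0, 1.0), (12.0, 1.0)]

# GLS for uncorrelated data on a single parameter reduces to
# inverse-variance weighting.
w = [1.0 / s**2 for _, s in data]
gls = sum(wi * v for wi, (v, _) in zip(w, data)) / sum(w)

# Brute-force UMC: sample candidate values over a wide range and
# weight each by the joint Gaussian likelihood of the data.
num, den = 0.0, 0.0
for _ in range(200_000):
    x = random.uniform(5.0, 17.0)
    L = math.exp(-0.5 * sum(((v - x) / s) ** 2 for v, s in data))
    num += L * x
    den += L
umc = num / den

print(gls, umc)  # both close to 11.0 for this linear Gaussian case
```

Differences between UMC and GLS only appear once the model is nonlinear or the data are strongly discrepant, which is exactly the regime the abstract's test cases probe.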

Capote, Roberto [Nuclear Data Section, International Atomic Energy Agency, P.O. Box 100, Wagramer Strasse 5, A-1400 Vienna (Austria)], E-mail: Roberto.CapoteNoy@iaea.org; Smith, Donald L. [Argonne National Laboratory, 1710 Avenida del Mundo, Coronado, California 92118-3073 (United States)

2008-12-15

298

Simulating the proton transfer in gramicidin A by a sequential dynamical Monte Carlo method.

The large interest in long-range proton transfer in biomolecules is triggered by its importance for many biochemical processes such as biological energy transduction and drug detoxification. Since long-range proton transfer occurs on a microsecond time scale, simulating this process on a molecular level is still a challenging task and not possible with standard simulation methods. In general, the dynamics of a reactive system can be described by a master equation. A natural way to describe long-range charge transfer in biomolecules is to decompose the process into elementary steps which are transitions between microstates. Each microstate has a defined protonation pattern. Although such a master equation can in principle be solved analytically, it is often too demanding to solve this equation because of the large number of microstates. In this paper, we describe a new method which solves the master equation by a sequential dynamical Monte Carlo algorithm. Starting from one microstate, the evolution of the system is simulated as a stochastic process. The energetic parameters required for these simulations are determined by continuum electrostatic calculations. We apply this method to simulate the proton transfer through gramicidin A, a transmembrane proton channel, as a function of the applied membrane potential and the pH value of the solution. As elementary steps in our reaction, we consider proton uptake and release, proton transfer along a hydrogen bond, and rotations of water molecules that constitute a proton wire through the channel. A simulation of 8 μs took about 5 min on an Intel Pentium 4 CPU at 3.2 GHz. We obtained good agreement with experimental data for the proton flux through gramicidin A over a wide range of pH values and membrane potentials. We find that proton desolvation and water rotations are equally important for the proton transfer through gramicidin A at physiological membrane potentials. 
Our method makes it possible to simulate long-range charge transfer in biological systems on time scales that are not accessible by other methods. PMID:18826179
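A sequential dynamical Monte Carlo simulation of a master equation, as described above, is essentially a Gillespie-type continuous-time algorithm: draw an exponential waiting time from the total rate, then pick a transition. A minimal single-site sketch with hypothetical uptake and release rates (not parameters of the gramicidin A model):

```python
import random, math
random.seed(0)

# Rates for the elementary steps (arbitrary units): proton uptake (k_on)
# and release (k_off) of one protonatable site.
k_on, k_off = 2.0, 1.0

def gillespie(t_end):
    """Continuous-time Monte Carlo for one site; returns the fraction
    of simulated time spent in the protonated state."""
    t, state, t_prot = 0.0, 0, 0.0
    while t < t_end:
        rate = k_off if state else k_on        # one allowed transition per state
        dt = -math.log(random.random()) / rate  # exponential waiting time
        dt = min(dt, t_end - t)
        if state:
            t_prot += dt
        t += dt
        state ^= 1                              # perform the transition
    return t_prot / t_end

occ = gillespie(200_000.0)
print(occ)  # stationary occupancy should approach k_on/(k_on+k_off) = 2/3
```

With many microstates the same loop applies; the only change is that the total rate is a sum over all allowed transitions out of the current protonation pattern, and one of them is chosen with probability proportional to its rate.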

Till, Mirco S; Essigke, Timm; Becker, Torsten; Ullmann, G Matthias

2008-10-23

299

Geometrically-compatible 3-D Monte Carlo and discrete-ordinates methods

This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The purpose of this project was two-fold. The first purpose was to develop a deterministic discrete-ordinates neutral-particle transport scheme for unstructured tetrahedral spatial meshes, and implement it in a computer code. The second purpose was to modify the MCNP Monte Carlo radiation transport code to use adjoint solutions from the tetrahedral-mesh discrete-ordinates code to reduce the statistical variance of Monte Carlo solutions via a weight-window approach. The first task has resulted in a deterministic transport code that is much more efficient for modeling complex 3-D geometries than any previously existing deterministic code. The second task has resulted in a powerful new capability for dramatically reducing the cost of difficult 3-D Monte Carlo calculations.
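The weight-window idea behind the second task can be sketched independently of MCNP: a particle whose weight rises above the window is split into lighter copies, and one whose weight falls below it plays Russian roulette, so the expected total weight is preserved while extreme weights are removed. The thresholds and survival weight below are illustrative assumptions, not values from the project.

```python
import random
random.seed(42)

def apply_weight_window(particles, w_low, w_high):
    """Split particles above the window and roulette those below it,
    preserving total weight in expectation (a sketch of the scheme)."""
    out = []
    w_survive = 0.5 * (w_low + w_high)   # assumed survival weight
    for w in particles:
        if w > w_high:                   # split into n lighter copies
            n = int(w / w_high) + 1
            out.extend([w / n] * n)
        elif w < w_low:                  # Russian roulette
            if random.random() < w / w_survive:
                out.append(w_survive)
        else:
            out.append(w)
    return out

# A population with many negligible-weight and a few heavy particles.
pop = [0.001] * 5000 + [50.0] * 10
new = apply_weight_window(pop, 0.1, 1.0)
# Total weight is preserved in expectation while the surviving
# population is far more uniform in weight.
print(sum(pop), sum(new), len(new))
```

In the adjoint-driven scheme the report describes, the window bounds vary in space and energy, set from the deterministic adjoint solution so that particles headed toward the tally are split and those headed away are rouletted.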

Morel, J.E.; Wareing, T.A.; McGhee, J.M.; Evans, T.M.

1998-12-31

300

Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods

NASA Astrophysics Data System (ADS)

During extreme-Mach-number reentry into Earth's atmosphere, spacecraft experience hypersonic non-equilibrium flow conditions that dissociate molecules and ionize atoms. Such conditions occur behind a shock wave, leading to high temperatures that have an adverse effect on the thermal protection system and radar communications. Since the electronic energy levels of gaseous species are strongly excited under high-Mach-number conditions, the radiative contribution to the total heat load can be significant. In addition, the radiative heat source within the shock layer may affect the internal energy distribution of dissociated and weakly ionized gas species and the number density of ablative species released from the surface of vehicles. Due to the radiation, the total heat load to the heat-shield surface of the vehicle may be altered beyond mission tolerances. Therefore, in the design process of spacecraft the effect of radiation must be considered, and radiation analyses coupled with flow solvers have to be implemented to improve reliability during the vehicle design stage. As a first stage toward radiation analyses coupled with gas dynamics, efficient databasing schemes for emission and absorption coefficients were developed to model radiation from hypersonic, non-equilibrium flows. For bound-bound transitions, spectral information including the line-center wavelength and assembled parameters for efficient calculation of emission and absorption coefficients is stored for typical air plasma species. Since the flow is in non-equilibrium, a rate-equation approach including both collisional and radiatively induced transitions was used to calculate the electronic state populations, assuming quasi-steady state (QSS). The Voigt line shape function was assumed for modeling the line broadening effect. The accuracy and efficiency of the databasing scheme were examined by comparing results of the databasing scheme with those of NEQAIR for the Stardust flowfield. 
An accuracy of approximately 1 % was achieved with an efficiency about three times faster than the NEQAIR code. To perform accurate and efficient analyses of chemically reacting flowfield - radiation interactions, the direct simulation Monte Carlo (DSMC) and the photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield - radiation coupling from transitional to peak heating freestream conditions. The non-catalytic and fully catalytic surface conditions were modeled and good agreement of the stagnation-point convective heating between DSMC and continuum fluid dynamics (CFD) calculation under the assumption of fully catalytic surface was achieved. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC - FV-PMC simulations with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium are presented. It is found that except for the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both heavy particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC - FV-PMC coupled simulations are compared with the previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport. The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rate in the excitation/de-excitation processes and the radiative transport equation (RTE) must be coupled simultaneously to account for non-local effects. 
The QSS model is presented to predict the electronic state populations of radiating gas species taking into account non-local radiation. The definition of the escape factor which is depende
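The Voigt line shape mentioned above is a convolution of Gaussian (Doppler) and Lorentzian (collisional) profiles, and databasing schemes commonly replace the exact convolution with a cheap approximation. As an illustrative sketch (the abstract does not state which approximation this work uses), the Thompson-Cox-Hastings pseudo-Voigt mixes a Gaussian and a Lorentzian of a common effective width:

```python
import math

def pseudo_voigt(x, x0, fg, fl):
    """Thompson-Cox-Hastings pseudo-Voigt: eta * Lorentzian + (1-eta) * Gaussian,
    both unit-area with common FWHM f, for Gaussian/Lorentzian FWHMs fg, fl."""
    f = (fg**5 + 2.69269*fg**4*fl + 2.42843*fg**3*fl**2
         + 4.47163*fg**2*fl**3 + 0.07842*fg*fl**4 + fl**5) ** 0.2
    r = fl / f
    eta = 1.36603*r - 0.47719*r**2 + 0.11116*r**3
    u = (x - x0) / f
    lorentz = (2.0 / (math.pi * f)) / (1.0 + 4.0 * u * u)
    gauss = (2.0 / f) * math.sqrt(math.log(2) / math.pi) \
            * math.exp(-4.0 * math.log(2) * u * u)
    return eta * lorentz + (1.0 - eta) * gauss

# Unit-area check by trapezoidal integration over a wide window;
# the slight deficit comes from the slowly decaying Lorentzian tails.
xs = [i * 0.01 - 300.0 for i in range(60001)]
ys = [pseudo_voigt(x, 0.0, 0.8, 0.4) for x in xs]
area = sum((ys[i] + ys[i + 1]) * 0.005 for i in range(len(ys) - 1))
print(area)
```

The attraction for a databasing scheme is that the mixing parameter and effective width depend only on the two component widths, so they can be precomputed per line and reused at every wavelength sample.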

Sohn, Ilyoup

301

During the past decade, the Monte Carlo method has found wide application in optical imaging for simulating the photon transport process inside tissues. However, this method has not yet been effectively extended to the simulation of free-space photon transport. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, which consists of the simulation of photon transport both in tissues and in free space. Specifically, the simplification theory of the lens system is utilized to model the camera lens equipped in the optical imaging system, and the Monte Carlo method is employed to describe the energy transfer from the tissue surface to the CCD camera. The focusing effect of the camera lens is also considered to establish the correspondence of points between the tissue surface and the CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and efficient. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated with a cylindrical phantom based on real experimental results.

Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Bin; Wang, Lin; Peng, Kuan; Liang, Jimin; Tian, Jie

2010-01-01

302

Neoclassical electron transport calculation by using {delta}f Monte Carlo method

High electron temperature plasmas with a steep temperature gradient in the core are obtained in recent experiments in the Large Helical Device [A. Komori et al., Fusion Sci. Technol. 58, 1 (2010)]. Such plasmas, called core electron-root confinement (CERC), have attracted much attention. In typical CERC plasmas, the radial electric field shows a transition from a small negative value (ion root) to a large positive value (electron root), and the radial electric field in helical plasmas is determined dominantly by the ambipolar condition of the neoclassical particle flux. To investigate the neoclassical transport of such plasmas precisely, the numerical neoclassical transport code FORTEC-3D [S. Satake et al., J. Plasma Fusion Res. 1, 002 (2006)], which solves the drift kinetic equation based on the {delta}f Monte Carlo method and has so far been applied to ion species, is extended to treat electron neoclassical transport. To check the validity of the new FORTEC-3D code, benchmark calculations are carried out against the GSRAKE [C. D. Beidler et al., Plasma Phys. Controlled Fusion 43, 1131 (2001)] and DCOM/NNW [A. Wakasa et al., Jpn. J. Appl. Phys. 46, 1157 (2007)] codes, which calculate neoclassical transport using certain approximations. The benchmark shows good agreement among the FORTEC-3D, GSRAKE, and DCOM/NNW codes for a low-temperature (T{sub e}(0)=1.0 keV) plasma. It is also confirmed that the finite orbit width effect included in FORTEC-3D has little effect on neoclassical transport for low-collisionality plasmas as long as the temperature is low. However, for a higher-temperature (5 keV at the core) plasma, significant differences arise among FORTEC-3D, GSRAKE, and DCOM/NNW. These results show the importance of evaluating electron neoclassical transport by rigorously solving the kinetic equation, including the effect of finite radial drift, for high electron temperature plasmas.

Matsuoka, Seikichi [Graduate University for Advanced Studies (SOKENDAI), Toki 509-5292 (Japan); Satake, Shinsuke; Yokoyama, Masayuki [Graduate University for Advanced Studies (SOKENDAI), Toki 509-5292 (Japan); National Institute for Fusion Science, Toki 509-5292 (Japan); Wakasa, Arimitsu; Murakami, Sadayoshi [Department of Nuclear Engineering, Kyoto University, Kyoto 606-8501 (Japan)

2011-03-15

303

Estimation of gamma- and X-ray photons buildup factor in soft tissue with Monte Carlo method.

Buildup factor of gamma- and X-ray photons in the energy range 0.2-2MeV in water and soft tissue is computed using Monte Carlo method. The results are compared with the existing buildup factor data of pure water. The difference between soft tissue and water buildup factor is studied. Soft tissue is assumed to have a composition as H(63)C(6)O(28)N. The importance of such work arises from the fact that in medical applications of X- and gamma-ray, soft tissue is usually approximated by water. It is shown that the difference between water and soft tissue buildup factor is usually more than 10%. On the other hand, buildup factor in water resulted from Monte Carlo method is compared with the experimental data appearing in references. It seems there is around 10% error in the reference data as well. PMID:19362488
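The buildup factor computed above is the ratio of the total (collided plus uncollided) detector response to the uncollided response alone. A deliberately simplified 1-D analog sketch, with single-group physics and isotropic 1-D scattering as assumptions (the paper's geometry and energy treatment are more detailed):

```python
import math, random
random.seed(7)

# Photons cross a slab; scattered photons that still reach the far side
# contribute to the "with scatter" tally, so B = total / uncollided.
mu, t = 1.0, 2.0       # attenuation coefficient (1/cm), slab thickness (cm)
p_scatter = 0.6        # scatter probability per collision (rest absorbed)
N = 100_000
uncollided = total = 0
for _ in range(N):
    x, dirx, collided = 0.0, 1.0, False
    while True:
        x += dirx * (-math.log(random.random()) / mu)  # free flight
        if x >= t:                      # transmitted through the slab
            total += 1
            if not collided:
                uncollided += 1
            break
        if x < 0 or random.random() > p_scatter:
            break                       # leaked backwards or absorbed
        collided = True
        dirx = random.choice([-1.0, 1.0])   # isotropic in 1-D
buildup = total / uncollided
print(uncollided / N, buildup)  # uncollided fraction near exp(-2) ~ 0.135
```

The uncollided fraction reproduces the analytic exp(-mu*t), and the buildup factor exceeds unity because scattered photons add to the transmitted tally, which is the effect approximating soft tissue by water can misestimate.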

Sardari, Dariush; Abbaspour, Ali; Baradaran, Samaneh; Babapour, Farshid

2009-01-01

304

NASA Astrophysics Data System (ADS)

We study a simulation of spectral reflectance in human skin tissue using ray-tracing software and the Monte Carlo method on the basis of a graphics processing unit (GPU). An analysis of light propagation using ray-tracing software has several advantages in that it can readily reproduce the complex structure of skin tissue, such as grooves of the skin surface or the boundaries of skin tissue layers, and perform optical simulation with optical elements close to those in a real experiment using only the ray-tracing software. Meanwhile, it has a serious disadvantage in that the simulation time is extremely long because the algorithm is CPU-based. To overcome this disadvantage, we propose a simulation method using the ray-tracing software and a GPU-based Monte Carlo simulation (MCS). The results of the simulation are shown and discussed.

Funamizu, Hideki; Maeda, Takaaki; Sasaki, Shoko; Nishidate, Izumi; Aizu, Yoshihisa

2014-05-01

305

The TSUNAMI computational sequences currently in the SCALE 5 code system provide an automated approach to performing sensitivity and uncertainty analysis for eigenvalue responses, using either one-dimensional discrete ordinates or three-dimensional Monte Carlo methods. This capability has recently been expanded to address eigenvalue-difference responses such as reactivity changes. This paper describes the methodology and presents results obtained for an example advanced CANDU reactor design. (authors)

Williams, M. L.; Gehin, J. C.; Clarno, K. T. [Oak Ridge National Laboratory, Bldg. 5700, P.O. Box 2008, Oak Ridge, TN 37831-6170 (United States)

2006-07-01

306

Estimation of gamma- and X-ray photons buildup factor in soft tissue with Monte Carlo method

Buildup factor of gamma- and X-ray photons in the energy range 0.2–2MeV in water and soft tissue is computed using Monte Carlo method. The results are compared with the existing buildup factor data of pure water. The difference between soft tissue and water buildup factor is studied. Soft tissue is assumed to have a composition as H63C6O28N. The importance of

Dariush Sardari; Ali Abbaspour; Samaneh Baradaran; Farshid Babapour

2009-01-01

307

Publication and citation data are used to analyse the dynamics of the theoretical high-energy physics specialty of Monte Carlo methods in lattice field theory. The present study is based on a comprehensive bibliography of the given subject area for the six-year period 1979–1984 and the 1979–1985 citations to these papers. The application of a recently introduced set of scientometric indicators provides

H.-J. Czerwon

1990-01-01

308

Quantum Monte Carlo method applied to non-Markovian barrier transmission

In nuclear fusion and fission, fluctuation and dissipation arise because of the coupling of collective degrees of freedom with internal excitations. Close to the barrier, quantum, statistical, and non-Markovian effects are expected to be important. In this work, a new approach based on quantum Monte Carlo addressing this problem is presented. The exact dynamics of a system coupled to an

Guillaume Hupin; Denis Lacroix

2010-01-01

309

Simulation of excited states and the sign problem in the path integral Monte Carlo method

An approach is presented to compute properties of excited states in path integral Monte Carlo simulations of quantum systems. The approach is based on the introduction of several images of the studied system which have the total wavefunction antisymmetric over permutations of these images, and a simulation of the whole system at low enough temperature. The success of the approach

Alexander P Lyubartsev

2005-01-01

310

Implementation of a Monte Carlo method to model photon conversion for solar cells

A physical model describing different photon conversion mechanisms is presented in the context of photovoltaic applications. To solve the resulting system of equations, a Monte Carlo ray-tracing model is implemented, which takes into account the coupling of the photon transport phenomena to the non-linear rate equations describing luminescence. It also separates the generation of rays from the two very different

C. del Cañizo; I. Tobías; J. Perezbedmar; A. C. Pan; A. Luque

2008-01-01

311

A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems

Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain,

K P Keady; P Brantley

2010-01-01

312

Development of CT scanner models for patient organ dose calculations using Monte Carlo methods

There is a serious and growing concern about the CT dose delivered by diagnostic CT examinations or image-guided radiation therapy imaging procedures. To better understand and to accurately quantify radiation dose due to CT imaging, Monte Carlo based CT scanner models are needed. This dissertation describes the development, validation, and application of detailed CT scanner models including a GE LightSpeed

Jianwei Gu

2010-01-01

313

A Straightforward Approach to Markov Chain Monte Carlo Methods for Item Response Models.

ERIC Educational Resources Information Center

Demonstrates Markov chain Monte Carlo (MCMC) techniques that are well-suited to complex models with Item Response Theory (IRT) assumptions. Develops an MCMC methodology that can be routinely implemented to fit normal IRT models, and compares the approach to approaches based on Gibbs sampling. Contains 64 references. (SLD)
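As an illustration of the kind of MCMC the paper applies to IRT, the sketch below runs a random-walk Metropolis sampler for a single examinee's ability under the Rasch model; the item difficulties and responses are invented, and the symmetric choice makes the true posterior mean exactly zero.

```python
import math, random
random.seed(3)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical data: two items of difficulty -1 and +1, answered
# correctly and incorrectly respectively; standard normal prior on theta.
b = [-1.0, 1.0]
y = [1, 0]

def log_post(theta):
    """Rasch-model log-posterior for one examinee's ability."""
    lp = -0.5 * theta * theta                    # N(0,1) prior
    for bj, yj in zip(b, y):
        p = sigmoid(theta - bj)                  # P(correct | theta, b)
        lp += math.log(p if yj else 1.0 - p)
    return lp

# Random-walk Metropolis sampler.
theta, lp = 0.0, log_post(0.0)
samples = []
for i in range(60_000):
    prop = theta + random.gauss(0.0, 1.0)
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if i >= 10_000:                              # discard burn-in
        samples.append(theta)
mean = sum(samples) / len(samples)
print(mean)  # the symmetric setup makes the true posterior mean 0
```

Fitting a full IRT model replaces this scalar update with alternating updates of all person and item parameters, which is where Gibbs-style and Metropolis-within-Gibbs schemes, as compared in the paper, differ.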

Patz, Richard J.; Junker, Brian W.

1999-01-01

314

Quantum Monte Carlo for transition metal systems: Method developments and applications

Quantum Monte Carlo (QMC) is a powerful computational tool to study correlated systems of electrons, allowing us to explicitly treat many-body interactions with favorable scaling in the number of particles. It has been regarded as a benchmark tool for condensed matter systems containing elements from the first and second row of the periodic table. It holds particular promise for the

Lucas K. Wagner

2006-01-01

315

Applications of Monte Carlo Methods to Simulate Gamma Ray Interactions in Si and Ge

A Monte Carlo code is employed to simulate the electron cascade subsequent to a gamma ray interaction in two common semiconductors, silicon and germanium, over the energy range of 50 eV to 2 MeV. The partitioning of the gamma ray energy into the various loss mechanisms determines the performance of the detector, generally parameterized by the average energy to create

Luke Campbell; Fei Gao; Ram Devanathan; Yulong Xie; Anthony J. Peurrung; William J. Webber

2006-01-01

316

Magnetic interpretation by the Monte Carlo method with application to the intrusion of the Crimea

NASA Astrophysics Data System (ADS)

The study involves the application of geophysical methods to geological mapping. Magnetic and radiometric measurements were used to delineate the intrusive bodies in the Bakhchysarai region of the Crimea. Proton magnetometers were used to measure the total magnetic field in the area and at a variation station, and a scintillation radiometer was used to determine the radiation dose. The magnetic susceptibility of rocks was measured with a susceptibility meter, which was possible because the rock mass crops out at the surface in the research area. Anomalous values of the magnetic intensity were obtained as the difference between the observed measurements and the values at the variation station. From the geophysical data, maps of the anomalous magnetic field, radiation dose, and magnetic susceptibility were produced. The geology of the area consists of magmatic rocks and overlying sedimentary rocks. The main task of the research was to study the geometry and magnetization vector of the igneous rocks. The intrusive body is composed of diabase and shows an average magnetic susceptibility, a weak dose rate, and a negative magnetic field. The sedimentary rocks are represented by clays, with a low magnetic susceptibility and an average dose rate. The map of magnetic susceptibility gave information about the values and distribution of magnetized bodies close to the surface. These data were used to control and refine the magnetic-property data for magnetic modelling. The magnetic anomaly map shows the distribution of magnetization at depth. The interpretation profile was located perpendicular to the strike of the intrusive body, and modelling was performed for the magnetic field along this profile. The geological medium was represented by a filling of rectangular blocks, and the magnetization value and its vector were fitted for every block. The fitting was carried out using the Monte Carlo method, in layers from bottom to top. 
After a pass through all the blocks, the magnetic parameters of the block giving the best fit between the theoretical and observed fields (i.e., the objective function) were fixed; this constituted the first iteration. The next iteration begins from this block. If a further pass through the blocks does not reduce the objective function, the procedure continues from the last block as in the first iteration. This technique worked well on several synthetic models. As a result, the geometric boundaries of the geological objects were obtained. The igneous rocks have a nearly vertical magnetization with respect to the current field. Perhaps this is because the Jurassic diabase acquired its magnetization at a time when the magnetic poles had signs opposite to those of the modern field. The magnetic modelling yielded a geological section consistent with the geological concept.

Gryshchuk, Pavlo

2014-05-01

317

Current impulse response of thin InP p+-i-n+ diodes using full band structure Monte Carlo method

NASA Astrophysics Data System (ADS)

A random response time model to compute the statistics of the avalanche buildup time of double-carrier multiplication in avalanche photodiodes (APDs) using the full band structure Monte Carlo (FBMC) method is discussed. The effects of the feedback impact ionization process and the dead-space effect on the random response time are included in order to simulate the speed of the APD. The time response of InP p+-i-n+ diodes with a multiplication region of 0.2 μm is presented. Finally, the FBMC model is used to calculate the current impulse response of thin InP p+-i-n+ diodes with multiplication lengths of 0.05 and 0.2 μm using Ramo's theorem [Proc. IRE 27, 584 (1939)]. The simulated current impulse response of the FBMC model is compared to the results of a simple Monte Carlo model.

You, A. H.; Cheang, P. L.

2007-02-01

318

Development of CT scanner models for patient organ dose calculations using Monte Carlo methods

NASA Astrophysics Data System (ADS)

There is a serious and growing concern about the CT dose delivered by diagnostic CT examinations or image-guided radiation therapy imaging procedures. To better understand and to accurately quantify radiation dose due to CT imaging, Monte Carlo based CT scanner models are needed. This dissertation describes the development, validation, and application of detailed CT scanner models including a GE LightSpeed 16 MDCT scanner and two image guided radiation therapy (IGRT) cone beam CT (CBCT) scanners, kV CBCT and MV CBCT. The modeling process considered the energy spectrum, beam geometry and movement, and bowtie filter (BTF). The methodology of validating the scanner models using reported CTDI values was also developed and implemented. Finally, the organ doses to different patients undergoing CT scans were obtained by integrating the CT scanner models with anatomically realistic patient phantoms. The tube current modulation (TCM) technique was also investigated for dose reduction. It was found that for RPI-AM, the thyroid, kidneys, and thymus received the largest doses of 13.05, 11.41, and 11.56 mGy/100 mAs from the chest, abdomen-pelvis, and CAP scans, respectively, using 120 kVp protocols. For RPI-AF, the thymus, small intestine, and kidneys received the largest doses of 10.28, 12.08, and 11.35 mGy/100 mAs from the chest, abdomen-pelvis, and CAP scans, respectively, using 120 kVp protocols. The dose to the fetus of the 3-month pregnant patient phantom was 0.13 mGy/100 mAs and 0.57 mGy/100 mAs from the chest and kidney scans, respectively. For the chest scans of the 6-month and 9-month patient phantoms, the fetal doses were 0.21 mGy/100 mAs and 0.26 mGy/100 mAs, respectively. With TCM schemas, the MDCT fetal dose can be reduced by 14%-25%. To demonstrate the applicability of the method proposed in this dissertation for modeling CT scanners, an additional MDCT scanner was modeled and validated using measured CTDI values. 
These results demonstrated that the CT scanner models in this dissertation were versatile and accurate tools for estimating dose to different patient phantoms undergoing various CT procedures. The organ doses from kV and MV CBCT were also calculated. This dissertation finally summarizes areas where future research can be performed including MV CBCT further validation and application, dose reporting software and image and dose correlation study.

Gu, Jianwei

319

Application of polynomial-expansion Monte Carlo method to a spin-ice Kondo lattice model

NASA Astrophysics Data System (ADS)

We present the results of a Monte Carlo simulation for a Kondo lattice model in which itinerant electrons interact with Ising spins with spin-ice-type easy-axis anisotropy on a pyrochlore lattice. We demonstrate the efficiency of the truncated polynomial expansion algorithm, which enables a large-scale simulation, in comparison with a conventional algorithm using exact diagonalization. Computing the sublattice magnetization, we show the convergence of the data with an increasing number of polynomials and increasing truncation distance.

Ishizuka, Hiroaki; Udagawa, Masafumi; Motome, Yukitoshi

2012-12-01

320

Stochastic method for accommodation of equilibrating basins in kinetic Monte Carlo simulations

A computationally simple way to accommodate "basins" of trapping states in standard kinetic Monte Carlo simulations is presented. By assuming the system is effectively equilibrated in the basin, the residence time (time spent in the basin before escape) and the probabilities for transition to states outside the basin may be calculated. This is demonstrated for point defect diffusion over a periodic grid of sites containing a complex basin.
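The basin idea above can be sketched with a two-state trap: assuming the basin is internally equilibrated, the mean residence time is the inverse of the occupation-averaged escape rate, which a direct kinetic Monte Carlo simulation of the full chain reproduces. All rates below are hypothetical.

```python
import math, random
random.seed(11)

# Two trapping states A and B exchange quickly (rate k_ab) and escape
# slowly (rates esc_a, esc_b).  For equal-energy states the equilibrated
# in-basin occupation is 50/50, so tau = 1 / <k_esc>.
k_ab, esc_a, esc_b = 20.0, 0.1, 0.3
tau_theory = 1.0 / (0.5 * esc_a + 0.5 * esc_b)

def kmc_residence_time():
    """Direct KMC of the full chain, stepping until escape."""
    t, state = 0.0, 0                    # 0 = A, 1 = B
    while True:
        esc = esc_b if state else esc_a
        rate = k_ab + esc
        t += -math.log(random.random()) / rate
        if random.random() < esc / rate:
            return t                     # escaped the basin
        state ^= 1                       # in-basin hop

mean_t = sum(kmc_residence_time() for _ in range(10_000)) / 10_000
print(tau_theory, mean_t)  # both near 5 time units
```

The payoff of the equilibrated-basin approximation is visible in the step counts: the direct simulation spends roughly a hundred hops per escape, while the analytic residence time replaces all of them with one draw.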

Van Siclen, Clinton D

2007-02-01

321

A maximum likelihood method for linking particle-in-cell and Monte-Carlo transport simulations

NASA Astrophysics Data System (ADS)

The expectation-maximization (E-M) algorithm [Dempster et al., J. R. Stat. Soc. B 39 (1977) 1-38] is a maximum likelihood technique to estimate the probability density function (PDF) of a set of measurements. A high performance implementation of the E-M algorithm to characterize multidimensional data sets using a PDF parameterized as a Gaussian mixture was developed. The resulting PDFs compare favorably to histogram based techniques—no binning artifacts and less noisy (especially in the tails). The motivation, the mathematical properties and the implementation details will be discussed. The PDF estimator is used extensively in the radiographic chain model [Kwan et al., Comput. Phys. Comm. 142 (2001) 263-269] in simulations which quantify bremsstrahlung X-ray emission from rod-pinch diodes and other devices. In these devices, electrons hit an anode and produce X-ray photons. The PIC code MERLIN [Kwan and Snell, in: Lecture Notes in Physics, Springer, 1985] is used to model the dynamics of a low-energy (up to ˜2.25 MeV) radiographic electron source. The photon production is modeled with the Monte-Carlo transport code MCNP [Briesmeister, ed., MCNP—A General Monte Carlo N-Particle Transport Code, 2000]. The estimator is used to upsample and uniformly weight the PIC electrons to provide a suitable population for the Monte-Carlo calculation that would be computationally prohibitive to generate directly.
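The E-M fit of a Gaussian mixture alternates computing component responsibilities (E step) and weighted maximum-likelihood parameter updates (M step). A one-dimensional, two-component sketch on synthetic data is shown below; the implementation the abstract describes is multidimensional and high-performance, so this is only the core recursion.

```python
import math, random
random.seed(5)

def gauss(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

# Synthetic sample from two well-separated Gaussians.
data = ([random.gauss(-2.0, 0.5) for _ in range(2000)]
        + [random.gauss(3.0, 1.0) for _ in range(2000)])

# Initial guesses for weights, means, variances.
w, mu, var = [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0]
for _ in range(50):
    # E step: responsibility of each component for each point.
    resp = []
    for x in data:
        p = [w[k] * gauss(x, mu[k], var[k]) for k in range(2)]
        s = p[0] + p[1]
        resp.append((p[0] / s, p[1] / s))
    # M step: responsibility-weighted maximum-likelihood updates.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        w[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk

print(sorted(mu))  # means recovered near -2 and 3
```

Because the fitted mixture is a smooth closed-form density, it can be sampled or reweighted at will, which is what makes it suitable for upsampling PIC electrons into a Monte Carlo source population.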

Bowers, Kevin J.; Devolder, Barbara G.; Yin, Lin; Kwan, Thomas J. T.

2004-12-01

322

A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison

NASA Astrophysics Data System (ADS)

Physical analyses of the LWR potential performances with regard to fuel utilization require that an important part of the work be dedicated to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology give the opportunity to perform the validation of these models on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we used the Monte Carlo transport code TRIPOLI-4® to describe a whole 3D large-scale and highly heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4® in a relevant PWR core configuration. To that end, a 3D pin-by-pin model with a consistent number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at the equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile (depleted uranium) zones. Furthermore, a tight-pitch lattice is selected (to increase conversion of 238U into 239Pu), which leads to a harder neutron spectrum compared to a standard PWR assembly. Under these conditions, two main subjects are discussed: the Monte Carlo variance calculation and the assessment of the two-energy-group diffusion operator for the core calculation.

Jaboulay, J.-C.; Damian, F.; Douce, S.; Lopez, F.; Guenaut, C.; Aggery, A.; Poinot-Salanon, C.

2014-06-01

323

We study the efficiency of quantum Monte Carlo (QMC) methods in computing space-localized ground state properties (properties which do not depend on distant degrees of freedom) as a function of the system size N. We prove that for the commonly used correlated sampling with reweighting method, the statistical fluctuations σ²(N) do not obey the locality property: σ²(N) grows at least linearly with N, with a slope that is related to the fluctuations of the reweighting factors. We provide numerical illustrations of these tendencies in the form of QMC calculations on linear chains of hydrogen atoms. PMID:24730964

Assaraf, Roland; Domin, Dominik

2014-03-01

324

In the present paper we identify a rigorous property of a number of tempering-based Monte Carlo sampling methods, including parallel tempering as well as partial and infinite swapping. Based on this property we develop a variety of performance measures for such rare-event sampling methods that are broadly applicable, informative, and straightforward to implement. We illustrate the use of these performance measures with a series of applications involving the equilibrium properties of simple Lennard-Jones clusters, applications for which the performance levels of partial and infinite swapping approaches are found to be higher than those of conventional parallel tempering. PMID:23205986

Doll, J D; Plattner, Nuria; Freeman, David L; Liu, Yufei; Dupuis, Paul

2012-11-28

325

NASA Astrophysics Data System (ADS)

The Monte Carlo Ray Tracing (MCRT) method is a versatile tool for simulating the radiative transfer regime of the Solar - Atmosphere - Landscape system. Moreover, it can be used to compute the radiation distribution over a complex landscape configuration, such as a forest area. Owing to its robustness to the complexity of the 3-D scene, the MCRT method is also employed to simulate the canopy radiative transfer regime as a validation source for other radiative transfer models. In MCRT modeling within vegetation, one basic step is setting up the canopy scene. 3-D scanning can represent canopy structure as accurately as possible, but it is time-consuming. A botanical growth function can model the growth of a single tree, but cannot express the interaction among trees. The L-System is also a functionally controlled tree growth simulation model, but it requires a large amount of computing memory. Additionally, it only models the current tree pattern rather than tree growth while we simulate the radiative transfer regime. Therefore, it is much more practical to use regular solids such as ellipsoids, cones and cylinders to represent individual canopies. Considering the allelopathy phenomenon visible in some open forest optical images, each tree repels other trees within its own `domain'. Based on this assumption, a stochastic circle packing algorithm is developed in this study to generate the 3-D canopy scene. The canopy coverage (%) and the number of trees (N) of the 3-D scene are declared first, matching the random open forest image. Accordingly, we randomly generate each canopy radius (rc). We then set the circle center coordinates on the XY-plane and keep the circles separate from each other using the circle packing algorithm. To model the individual trees, we employ Ishikawa's tree growth regression model to set the tree parameters, including DBH (dt) and tree height (H). 
However, the relationship between canopy height (Hc) and trunk height (Ht) is unclear to us, so we assume the ratio of Hc to Ht to be a random number in the interval from 2.0 to 3.0. De Wit's spherical leaf angle distribution function is used within the canopy for acceleration. Finally, we simulate the open forest albedo using the MCRT method. The MCRT algorithm of this study is summarized as follows: (1) initialize a photon with a position (r0), source direction (Ω0) and intensity (I0); (2) simulate the free path (s) of the photon under the condition (r', Ω', I') in the canopy; (3) calculate the new position of the photon, r = r' + sΩ'; (4) determine the new scattering direction (Ω) after the collision at r, and then calculate the new intensity I = ΓL(θL, Ω'→Ω)I'; (5) accumulate the intensity I of a photon escaping from the top boundary of the 3-D scene, otherwise repeat from step (2) until I is smaller than a threshold; (6) repeat from step (1) for each photon. We tested the model on four different simulated open forests, and the effectiveness of the model is demonstrated in detail.
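The photon loop in steps (1)-(6) can be sketched in a much simpler setting, a 1-D plane-parallel slab with a uniform single-scattering albedo instead of the 3-D forest scene; all parameter values below are illustrative assumptions, not the authors' model:

```python
import math
import random

def open_scene_albedo(n_photons=20000, tau_top=1.0, omega=0.9, seed=1):
    """1-D slab version of steps (1)-(6): launch a photon downward from the
    top, sample exponential free paths, attenuate the photon weight by the
    single-scattering albedo omega at each collision, and accumulate the
    weight escaping through the top. tau is optical depth measured from the
    ground (tau_top at the top)."""
    rng = random.Random(seed)
    escaped = 0.0
    for _ in range(n_photons):
        tau, mu, w = tau_top, -1.0, 1.0    # position, direction cosine, weight
        while w > 1e-4:                    # step (6) threshold on intensity
            s = -math.log(1.0 - rng.random())  # step (2): free path
            tau += mu * s                  # step (3): move the photon
            if tau >= tau_top:             # step (5): escaped through the top
                escaped += w
                break
            if tau <= 0.0:                 # reached the ground: absorbed
                break
            w *= omega                     # step (4): collision, apply albedo
            mu = 2.0 * rng.random() - 1.0  # isotropic rescattering direction
    return escaped / n_photons

albedo = open_scene_albedo()
```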

Jin, Shengye; Tamura, Masayuki

2013-10-01

326

NASA Astrophysics Data System (ADS)

In subcritical systems driven by an external neutron source, experimental methods based on a pulsed neutron source (PNS) and statistical techniques play an important role in reactivity measurement. Simulating these methods is a very time-consuming procedure. Several improvements to neutronic calculations have been made for simulations in Monte Carlo programs. This paper introduces a new method for simulating PNS and statistical measurements. In this method, all events occurring in the detector during the simulation are stored in a file using the PTRAC feature of MCNP. A special post-processing code can then simulate the PNS and statistical methods. Additionally, different neutron pulse shapes and lengths, as well as detector dead time, can be included in the simulation. The method described above was tested on the subcritical assembly Yalina-Thermal, located at the Joint Institute for Power and Nuclear Research SOSNY, Minsk, Belarus. Good agreement between experimental and simulated results was shown.
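As one concrete example of the statistical post-processing the authors describe, a Feynman-Y (variance-to-mean) statistic can be computed from a stored list of detector event timestamps. The function below is a generic sketch of that idea, not the authors' code; the gate width and event rate are illustrative.

```python
import random

def feynman_y(event_times, gate_width):
    """Feynman-Y (variance-to-mean minus one) statistic from a list of
    detector event timestamps, the kind of post-processing described for
    PTRAC output: bin events into fixed time gates, then compare the
    variance of the gate counts to their mean."""
    t_max = max(event_times)
    n_gates = int(t_max // gate_width) + 1
    counts = [0] * n_gates
    for t in event_times:
        counts[int(t // gate_width)] += 1
    mean = sum(counts) / n_gates
    var = sum((c - mean) ** 2 for c in counts) / n_gates
    return var / mean - 1.0

# sanity check: an uncorrelated (Poisson) event train gives Y close to 0
rng = random.Random(2)
t, events = 0.0, []
for _ in range(50000):
    t += rng.expovariate(100.0)  # mean rate: 100 events per unit time
    events.append(t)
y_poisson = feynman_y(events, gate_width=0.1)
```

In a multiplying subcritical assembly the fission chains correlate the counts, so Y rises above zero with gate width; fitting that rise yields the prompt decay constant.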

Sadovich, Sergey; Talamo, A.; Burnos, V.; Kiyavitskaya, H.; Fokov, Yu.

2014-06-01

327

This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
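The random-sampling-plus-tally workflow these notes cover can be illustrated by the smallest possible transport problem: uncollided transmission through a purely absorbing slab, where each history samples one exponential free path. This is a generic teaching sketch, not the RACER code.

```python
import math
import random

def slab_transmission(tau, n_histories=100000, seed=3):
    """Analog Monte Carlo estimate of uncollided transmission through a
    purely absorbing slab of optical thickness tau: each history samples an
    exponential free path by inverting the CDF, and the tally scores whether
    the path crosses the slab. The exact answer is exp(-tau)."""
    rng = random.Random(seed)
    crossed = sum(
        1 for _ in range(n_histories)
        if -math.log(1.0 - rng.random()) > tau  # inverse-CDF free-path sample
    )
    return crossed / n_histories

t_mc = slab_transmission(2.0)   # analytic value: exp(-2) ≈ 0.1353
```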

Brown, F.B.; Sutton, T.M.

1996-02-01

328

NASA Astrophysics Data System (ADS)

The cavity method is a well-established technique for solving classical spin models on sparse random graphs (mean-field models with finite connectivity). Laumann [Phys. Rev. B 78, 134424 (2008)] recently proposed an extension of this method to quantum spin-1/2 models in a transverse field, using a discretized Suzuki-Trotter imaginary-time formalism. Here we show how to take the continuous imaginary-time limit analytically. Our main technical contribution is an explicit procedure to generate the spin trajectories in a path-integral representation of the imaginary-time dynamics. As a side result we also show how this procedure can be used in simple heat bath Monte Carlo simulations of generic quantum spin models. The replica symmetric continuous-time quantum cavity method is formulated for a wide class of models and applied, as a simple example, to the Bethe lattice ferromagnet in a transverse field. The results of the method are confronted with various approximation schemes in this particular case. On this system we performed quantum Monte Carlo simulations that confirm the exactness of the cavity method in the thermodynamic limit.

Krzakala, Florent; Rosso, Alberto; Semerjian, Guilhem; Zamponi, Francesco

2008-10-01

329

Loops in proteins are flexible regions connecting regular secondary structures. They are often involved in protein functions through interacting with other molecules. The irregularity and flexibility of loops make their structures difficult to determine experimentally and challenging to model computationally. Conformation sampling and energy evaluation are the two key components in loop modeling. We have developed a new method for loop conformation sampling and prediction based on a chain growth sequential Monte Carlo sampling strategy, called Distance-guided Sequential chain-Growth Monte Carlo (DISGRO). With an energy function designed specifically for loops, our method can efficiently generate high quality loop conformations with low energy that are enriched with near-native loop structures. The average minimum global backbone RMSD for 1,000 conformations of 12-residue loops is 1.53 Å, with a lowest energy RMSD of 2.99 Å, and an average ensemble RMSD of 5.23 Å. A novel geometric criterion is applied to speed up calculations. The computational cost of generating 1,000 conformations for each of the x loops in a benchmark dataset is only about 10 cpu minutes for 12-residue loops, compared to ca. 180 cpu minutes using the FALCm method. Test results on benchmark datasets show that DISGRO performs comparably or better than previous successful methods, while requiring far less computing time. DISGRO is especially effective in modeling longer loops (10-17 residues). PMID:24763317

Tang, Ke; Zhang, Jinfeng; Liang, Jie

2014-04-01

330

Loops in proteins are flexible regions connecting regular secondary structures. They are often involved in protein functions through interacting with other molecules. The irregularity and flexibility of loops make their structures difficult to determine experimentally and challenging to model computationally. Conformation sampling and energy evaluation are the two key components in loop modeling. We have developed a new method for loop conformation sampling and prediction based on a chain growth sequential Monte Carlo sampling strategy, called Distance-guided Sequential chain-Growth Monte Carlo (DiSGro). With an energy function designed specifically for loops, our method can efficiently generate high quality loop conformations with low energy that are enriched with near-native loop structures. The average minimum global backbone RMSD for 1,000 conformations of 12-residue loops is Å, with a lowest energy RMSD of Å, and an average ensemble RMSD of Å. A novel geometric criterion is applied to speed up calculations. The computational cost of generating 1,000 conformations for each of the x loops in a benchmark dataset is only about cpu minutes for 12-residue loops, compared to ca cpu minutes using the FALCm method. Test results on benchmark datasets show that DiSGro performs comparably or better than previous successful methods, while requiring far less computing time. DiSGro is especially effective in modeling longer loops (– residues).

Tang, Ke; Zhang, Jinfeng; Liang, Jie

2014-01-01

331

Direct simulation Monte Carlo method for the Uehling-Uhlenbeck-Boltzmann equation

NASA Astrophysics Data System (ADS)

In this paper we describe a direct simulation Monte Carlo algorithm for the Uehling-Uhlenbeck-Boltzmann equation in terms of Markov processes. This provides a unifying framework for both the classical Boltzmann case as well as the Fermi-Dirac and Bose-Einstein cases. We establish the foundation of the algorithm by demonstrating its link to the kinetic equation. By numerical experiments we study its sensitivity to the number of simulation particles and to the discretization of the velocity space, when approximating the steady-state distribution.

Garcia, Alejandro L.; Wagner, Wolfgang

2003-11-01

332

Building on the Markov chain formalism for scalar (intensity only) radiative transfer, this paper formulates the solution to polarized diffuse reflection from and transmission through a vertically inhomogeneous atmosphere. For verification, numerical results are compared to those obtained by the Monte Carlo method, showing deviations less than 1% when 90 streams are used to compute the radiation from two types of atmospheres, pure Rayleigh and Rayleigh plus aerosol, when they are divided into sublayers of optical thicknesses of less than 0.03. PMID:21263634

Xu, Feng; Davis, Anthony B; West, Robert A; Esposito, Larry W

2011-01-17

333

NASA Astrophysics Data System (ADS)

The talk examines a system of pairwise interacting particles, which models a rarefied gas in accordance with the nonlinear Boltzmann equation, the master equations of the Markov evolution of this system, and the corresponding numerical Monte Carlo methods. The selection of an optimal method for simulating rarefied gas dynamics depends on the spatial size of the gas flow domain. For problems with a Knudsen number Kn of order unity, "imitation", or "continuous time", Monte Carlo methods ([2]) are quite adequate and competitive. However, if Kn <= 0.1 (large sizes), excessive punctuality, namely the need to examine all pairs of particles, leads to a significant increase in computational cost (complexity). We are interested in constructing optimal methods for Boltzmann equation problems with sufficiently large spatial flow sizes. By optimal we mean algorithms for parallel computation to be implemented on high-performance multi-processor computers. The characteristic property of large systems is the weak dependence of sub-parts on each other over sufficiently small time intervals. This property is exploited by approximate methods using various splittings of the operator of the corresponding master equations. In this paper, we develop an approximate method based on splitting the operator of the master equation system "over groups of particles" ([7]). The essence of the method is that the system of particles is divided into spatial sub-parts which are modeled independently over small time intervals, using the precise "imitation" method. This type of splitting is different from the well-known splitting "over collisions and displacements", which is an attribute of the known Direct Simulation Monte Carlo methods. The second attribute of the latter is the grid of "interaction cells", which is completely absent in the imitation methods. 
The main subject of the talk is the parallelization of the imitation algorithms with splitting using the MPI library. The newly constructed algorithms are applied to two problems: the propagation of a temperature discontinuity, and plane Poiseuille flow in a field of external forces. In particular, on the basis of the numerical solutions, comparative estimates of the computational cost are given for all algorithms under consideration.

Khisamutdinov, A. I.; Velker, N. N.

2014-05-01

334

Many experimental systems consist of large ensembles of uncoupled or weakly interacting elements operating as a single whole; this is particularly the case for applications in nano-optics and plasmonics, including colloidal solutions, plasmonic or dielectric nanoparticles on a substrate, antenna arrays, and others. In such experiments, measurements of the optical spectra of ensembles will differ from measurements of the independent elements as a result of small variations from element to element (also known as polydispersity) even if these elements are designed to be identical. In particular, sharp spectral features arising from narrow-band resonances will tend to appear broader and can even be washed out completely. Here, we explore this effect of inhomogeneous broadening as it occurs in colloidal nanopolymers comprising self-assembled nanorod chains in solution. Using a technique combining finite-difference time-domain simulations and Monte Carlo sampling, we predict the inhomogeneously broadened optical spectra of these colloidal nanopolymers and observe significant qualitative differences compared with the unbroadened spectra. The approach combining an electromagnetic simulation technique with Monte Carlo sampling is widely applicable for quantifying the effects of inhomogeneous broadening in a variety of physical systems, including those with many degrees of freedom that are otherwise computationally intractable. PMID:24469797
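The Monte Carlo half of this approach can be sketched without the electromagnetic part by replacing each element's simulated spectrum with an analytic Lorentzian line and drawing the resonance frequency of each ensemble member from a normal distribution; all line parameters below are illustrative assumptions, not values from the paper.

```python
import random

def lorentzian(f, f0, gamma):
    """Unit-peak Lorentzian line centered at f0 with half-width gamma."""
    return gamma * gamma / ((f - f0) ** 2 + gamma * gamma)

def ensemble_spectrum(freqs, f0=1.0, gamma=0.01, spread=0.05, n=2000, seed=4):
    """Monte Carlo ensemble spectrum: draw each element's resonance
    frequency from a normal distribution around the design value f0
    (polydispersity) and average the resulting lines. The Lorentzian is a
    stand-in for the FDTD-computed single-element spectrum."""
    rng = random.Random(seed)
    centers = [rng.gauss(f0, spread) for _ in range(n)]
    return [sum(lorentzian(f, c, gamma) for c in centers) / n for f in freqs]

freqs = [0.8 + 0.001 * i for i in range(401)]       # scan 0.8 .. 1.2
broadened = ensemble_spectrum(freqs)
single = [lorentzian(f, 1.0, 0.01) for f in freqs]  # one ideal element
# the ensemble peak is much lower and wider than the single-element line
```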

Gudjonson, Herman; Kats, Mikhail A; Liu, Kun; Nie, Zhihong; Kumacheva, Eugenia; Capasso, Federico

2014-02-11

335

This paper presents a new bronchoscope motion tracking method that utilizes manifold modeling and sequential Monte Carlo (SMC) sampler to boost navigated bronchoscopy. Our strategy to estimate the bronchoscope motions comprises two main stages: (1) bronchoscopic scene identification and (2) SMC sampling. We extend a spatial local and global regressive mapping (LGRM) method to Spatial-LGRM to learn bronchoscopic video sequences and construct their manifolds. By these manifolds, we can classify bronchoscopic scenes to bronchial branches where a bronchoscope is located. Next, we employ a SMC sampler based on a selective image similarity measure to integrate estimates of stage (1) to refine positions and orientations of a bronchoscope. Our proposed method was validated on patient datasets. Experimental results demonstrate the effectiveness and robustness of our method for bronchoscopic navigation without an additional position sensor. PMID:22003706

Luo, Xiongbiao; Kitasaka, Takayuki; Mori, Kensaku

2011-01-01

336

Monte Carlo method was used to simulate the correction factors for electron loss and scattered photons for two improved cylindrical free-air ionization chambers (FACs) constructed at the Institute of Nuclear Energy Research (INER, Taiwan). The method is based on weighting correction factors for mono-energetic photons with X-ray spectra. The newly obtained correction factors for the medium-energy free-air chamber were compared with the current values, which were based on a least-squares fit to experimental data published in the NBS Handbook 64 [Wyckoff, H.O., Attix, F.H., 1969. Design of free-air ionization chambers. National Bureau Standards Handbook, No. 64. US Government Printing Office, Washington, DC, pp. 1-16; Chen, W.L., Su, S.H., Su, L.L., Hwang, W.S., 1999. Improved free-air ionization chamber for the measurement of X-rays. Metrologia 36, 19-24]. The comparison results showed the agreement between the Monte Carlo method and experimental data is within 0.22%. In addition, mono-energetic correction factors for the low-energy free-air chamber were calculated. Average correction factors were then derived for measured and theoretical X-ray spectra at 30-50 kVp. Although the measured and calculated spectra differ slightly, the resulting differences in the derived correction factors are less than 0.02%. PMID:16427292

Lin, Uei-Tyng; Chu, Chien-Hau

2006-05-01

337

This document describes progress on five efforts for improving the effectiveness of computational methods for particle diffusion and transport problems in nuclear engineering: (1) Multigrid methods for obtaining rapidly converging solutions of nodal diffusion problems. An alternative line relaxation scheme is being implemented into a nodal diffusion code. Simplified P2 has been implemented into this code. (2) Local Exponential Transform method for variance reduction in Monte Carlo neutron transport calculations. This work yielded predictions for both 1-D and 2-D x-y geometry that were better than conventional Monte Carlo with splitting and Russian Roulette. (3) Asymptotic Diffusion Synthetic Acceleration methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems. New transport differencing schemes have been obtained that allow solution by the conjugate gradient method, and the convergence of this approach is rapid. (4) Quasidiffusion (QD) methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems on irregular spatial grids. A symmetrized QD method has been developed in a form that results in a system of two self-adjoint equations that are readily discretized and efficiently solved. (5) Response history method for speeding up the Monte Carlo calculation of electron transport problems. This method was implemented into the MCNP Monte Carlo code. In addition, we have developed and implemented a parallel time-dependent Monte Carlo code on two massively parallel processors.

Martin, W.R.

1993-01-01

338

For the evaluation of gamma-ray dose rates around duct penetrations after shutdown of a nuclear fusion reactor, a calculation method is proposed applying Monte Carlo neutron and decay gamma-ray transport calculations. For the radioisotope production rates during operation, the Monte Carlo calculation is conducted by modifying the nuclear data library, replacing a prompt gamma-ray

Satoshi SATO; Hiromasa IIDA; Takeo NISHITANI

2002-01-01

339

Exact ground state Monte Carlo method for Bosons without importance sampling.

Generally "exact" quantum Monte Carlo computations for the ground state of many bosons make use of importance sampling. The importance sampling is based either on a guiding function or on an initial variational wave function. Here we investigate the need of importance sampling in the case of path integral ground state (PIGS) Monte Carlo. PIGS is based on a discrete imaginary time evolution of an initial wave function with a nonzero overlap with the ground state, which gives rise to a discrete path which is sampled via a Metropolis-like algorithm. In principle the exact ground state is reached in the limit of an infinite imaginary time evolution, but actual computations are based on finite time evolutions and the question is whether such computations give unbiased exact results. We have studied bulk liquid and solid (4)He with PIGS by considering as initial wave function a constant, i.e., the ground state of an ideal Bose gas. This implies that the evolution toward the ground state is driven only by the imaginary time propagator, i.e., there is no importance sampling. For both phases we obtain results converging to those obtained by considering the best available variational wave function (the shadow wave function) as initial wave function. Moreover we obtain the same results even by considering wave functions with the wrong correlations, for instance, a wave function of a strongly localized Einstein crystal for the liquid phase. This convergence is true not only for diagonal properties such as the energy, the radial distribution function, and the static structure factor, but also for off-diagonal ones, such as the one-body density matrix. This robustness of PIGS can be traced back to the fact that the chosen initial wave function acts only at the beginning of the path without affecting the imaginary time propagator. From this analysis we conclude that zero temperature PIGS calculations can be as unbiased as those of finite temperature path integral Monte Carlo. 
On the other hand, a judicious choice of the initial wave function greatly improves the rate of convergence to the exact results. PMID:20568848

Rossi, M; Nava, M; Reatto, L; Galli, D E

2009-10-21

340

Monte Carlo radiative transfer

NASA Astrophysics Data System (ADS)

I outline methods for calculating the solution of Monte Carlo Radiative Transfer (MCRT) in scattering, absorption and emission processes of dust and gas, including polarization. I provide a bibliography of relevant papers on methods with astrophysical applications.

Whitney, B. A.

2011-03-01

341

Monte Carlo radiative transfer

NASA Astrophysics Data System (ADS)

I outline methods for calculating the solution of Monte Carlo Radiative Transfer (MCRT) in scattering, absorption and emission processes of dust and gas, including polarization. I provide a bibliography of relevant papers on methods with astrophysical applications.

Whitney, Barbara A.

2011-12-01

342

The Acceptance Probability of the Hybrid Monte Carlo Method in High-Dimensional Problems

NASA Astrophysics Data System (ADS)

We investigate the properties of the Hybrid Monte Carlo algorithm in high dimensions. In the simplified scenario of independent, identically distributed components, we prove that, to obtain an O(1) acceptance probability as the dimension d of the state space tends to infinity, the Verlet/leap-frog step-size h should be scaled as h = l × d^(-1/4). We also identify analytically the asymptotically optimal acceptance probability, which turns out to be 0.651 (to three decimal places); this is the choice that optimally balances the cost of generating a proposal, which decreases as l increases, against the cost related to the average number of proposals required to obtain acceptance, which increases as l increases.
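A minimal sketch of this scaling, assuming the simplest i.i.d. standard Gaussian target (my construction, not the paper's code): run HMC with leapfrog step size h = l × d^(-1/4) and measure the average acceptance rate, which stays order one for moderate l and drops as l grows.

```python
import math
import random

def hmc_acceptance(d, ell=1.0, n_iter=2000, n_steps=10, seed=5):
    """Average Metropolis acceptance rate of HMC targeting d i.i.d. standard
    Gaussians (U(x) = x^2/2 per component), with leapfrog step size
    h = ell * d**-0.25, the scaling quoted in the abstract."""
    rng = random.Random(seed)
    h = ell * d ** -0.25
    x = [rng.gauss(0.0, 1.0) for _ in range(d)]  # start in stationarity
    accepted = 0
    for _ in range(n_iter):
        p = [rng.gauss(0.0, 1.0) for _ in range(d)]
        h_old = 0.5 * (sum(v * v for v in p) + sum(v * v for v in x))
        xn, pn = x[:], p[:]
        # leapfrog: half kick, alternating full drifts and kicks, half kick
        pn = [pv - 0.5 * h * xv for pv, xv in zip(pn, xn)]
        for _ in range(n_steps):
            xn = [xv + h * pv for xv, pv in zip(xn, pn)]
            pn = [pv - h * xv for pv, xv in zip(pn, xn)]
        pn = [pv + 0.5 * h * xv for pv, xv in zip(pn, xn)]  # undo extra half kick
        h_new = 0.5 * (sum(v * v for v in pn) + sum(v * v for v in xn))
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            x = xn
            accepted += 1
    return accepted / n_iter

a_small = hmc_acceptance(100, ell=1.0)  # O(1) acceptance under the scaling
a_large = hmc_acceptance(100, ell=3.0)  # larger step size: lower acceptance
```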

Beskos, A.; Pillai, N. S.; Roberts, G. O.; Sanz-Serna, J. M.; Stuart, A. M.

2010-09-01

343

Comparing methods and Monte Carlo algorithms at phase transition regimes: A general overview

NASA Astrophysics Data System (ADS)

Although numerical simulations constitute one of the most important tools in statistical mechanics, in practice things are not so simple. Standard commonly used algorithms lead to well-known difficulties at phase transition regimes, preventing the achievement of precise thermodynamic quantities. In recent years, several approaches have been proposed in order to circumvent such difficulties. With these concepts in mind, here we present a comparison among distinct Monte Carlo algorithms, analyzing their efficiency and reliability. We show that their difficulties are substantially reduced when proper approaches for phase transitions are used. We illustrate the main concepts and ideas in the Blume-Emery-Griffiths (BEG) model, which displays strong first-order and second-order transitions at low and high temperatures, respectively.

Fiore, Carlos E.

2014-03-01

344

Whole-field strain uncertainty evaluation by a Monte Carlo method

NASA Astrophysics Data System (ADS)

In this work, we used a Monte Carlo computer simulation to evaluate the uncertainties of the strains obtained from a displacement field measured by moiré interferometry. The displacements were induced by applying tensile load to a metallic sheet sample. At each point of the illuminated area, the strain standard uncertainty was taken as the standard deviation of the series of outcomes obtained by a large number of strain evaluations. These strain evaluations were performed by differentiating the surfaces that were fitted to the sets of points in the space formed by the two spatial coordinates of the field and the corresponding local displacement. These sets of points were generated according to the probability density functions that we considered appropriate. The reported procedure to evaluate the strain uncertainty is valid independently of the interferometric technique used to measure the displacement field.
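A 1-D sketch of this procedure, assuming a uniform strain so that the strain is simply the slope of a fitted line: perturb the measured displacements with the assumed noise distribution many times, refit, and take the standard deviation of the resulting slopes. All numbers are hypothetical, not the paper's moiré data.

```python
import random

def strain_std_uncertainty(xs, us, sigma_u, n_trials=2000, seed=6):
    """Monte Carlo standard uncertainty of a strain value obtained by
    least-squares fitting a line to a displacement field (strain = slope).
    Each trial perturbs the displacements with the assumed measurement
    noise sigma_u and refits; the spread of the fitted slopes is the
    reported uncertainty."""
    rng = random.Random(seed)
    n = len(xs)
    xbar = sum(xs) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    slopes = []
    for _ in range(n_trials):
        noisy = [u + rng.gauss(0.0, sigma_u) for u in us]
        ubar = sum(noisy) / n
        slopes.append(sum((x - xbar) * (u - ubar)
                          for x, u in zip(xs, noisy)) / sxx)
    m = sum(slopes) / n_trials
    return (sum((s - m) ** 2 for s in slopes) / (n_trials - 1)) ** 0.5

# hypothetical example: uniform strain of 1e-3 over a 10 mm field,
# with displacement noise of 1e-4 mm at each measured point
xs = [0.1 * i for i in range(101)]   # positions in mm
us = [1e-3 * x for x in xs]          # displacements in mm
u_strain = strain_std_uncertainty(xs, us, sigma_u=1e-4)
```

For this linear case the Monte Carlo result can be checked against the analytic slope uncertainty sigma_u / sqrt(Sxx).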

Cordero, Raul R.; Roth, Pedro

2004-09-01

345

Modeling of radiation-induced bystander effect using Monte Carlo methods

NASA Astrophysics Data System (ADS)

Experiments have shown that the radiation-induced bystander effect exists in cells, tissues, or even biological organisms when irradiated with energetic ions or X-rays. In this paper, a Monte Carlo model is developed to study the mechanisms of the bystander effect under sparsely populated cell conditions. This model, based on our previous experiment in which cells were sparsely seeded in a round dish, focuses mainly on the spatial characteristics. The simulation results agree well with the experimental data. Moreover, another bystander effect experiment was also computed with this model, which succeeded in predicting the results. The comparison of simulations with the experimental results indicates the feasibility of the model and the validity of some of the vital mechanisms assumed.

Xia, Junchao; Liu, Liteng; Xue, Jianming; Wang, Yugang; Wu, Lijun

2009-03-01

346

NASA Astrophysics Data System (ADS)

A new technique for evaluating the absolute free energy of large molecules is presented. Quantum-mechanical contributions to the intramolecular torsions are included via the torsional path integral Monte Carlo (TPIMC) technique. Importance sampling schemes based on uncoupled free rotors and harmonic oscillators facilitate the use of the TPIMC technique for the direct evaluation of quantum partition functions. Absolute free energies are calculated for the molecules ethane, n-butane, n-octane, and enkephalin, and quantum contributions are found to be significant. Comparison of the TPIMC technique with the harmonic oscillator approximation and a variational technique is performed for the ethane molecule. For all molecules, the quantum contributions to free energy are found to be significant but slightly smaller than the quantum contributions to internal energy.

Miller, Thomas F.; Clary, David C.

2003-07-01

347

Torsional path integral Monte Carlo method for the quantum simulation of large molecules

NASA Astrophysics Data System (ADS)

A molecular application is introduced for calculating quantum statistical mechanical expectation values of large molecules at nonzero temperatures. The Torsional Path Integral Monte Carlo (TPIMC) technique applies an uncoupled winding number formalism to the torsional degrees of freedom in molecular systems. The internal energy of the molecules ethane, n-butane, n-octane, and enkephalin are calculated at standard temperature using the TPIMC technique and compared to the expectation values obtained using the harmonic oscillator approximation and a variational technique. All studied molecules exhibited significant quantum mechanical contributions to their internal energy expectation values according to the TPIMC technique. The harmonic oscillator approximation approach to calculating the internal energy performs well for the molecules presented in this study but is limited by its neglect of both anharmonicity effects and the potential coupling of intramolecular torsions.

Miller, Thomas F.; Clary, David C.

2002-05-01

348

Study of CANDU thorium-based fuel cycles by deterministic and Monte Carlo methods

In the framework of the Generation IV forum, there is a renewal of interest in self-sustainable thorium fuel cycles applied to various concepts such as Molten Salt Reactors [1, 2] or High Temperature Reactors [3, 4]. Precise evaluations of the U-233 production potential relying on existing reactors such as PWRs [5] or CANDUs [6] are hence necessary. As a consequence of its design (online refueling and D₂O moderator in a thermal spectrum), the CANDU reactor moreover has an excellent neutron economy and consequently a high fissile conversion ratio [7]. For these reasons, we try here, with a shorter-term view, to re-evaluate the economic competitiveness of once-through thorium-based fuel cycles in CANDU [8]. Two simulation tools are used: the deterministic Canadian cell code DRAGON [9] and MURE [10], a C++ tool for reactor evolution calculations based on the Monte Carlo code MCNP [11]. (authors)

Nuttin, A.; Guillemin, P. [LPSC Grenoble ENSPG (France); Courau, T. [EDF R and D Clamart (France); Marleau, G. [Ecole Polytechnique de Montreal (Canada); Meplan, O. [LPSC Grenoble UJF (France); David, S.; Michel-Sendis, F.; Wilson, J. N. [IPN Orsay CNRS (France)

2006-07-01

349

Multi-dimensional impurity transport code by Monte Carlo method including gyro-orbit effects

NASA Astrophysics Data System (ADS)

We are developing a new 3D Monte Carlo transport code, 'IMPGYRO', for the analysis of heavy impurities in fusion edge plasmas. The code directly solves the 3D equations of motion for the test impurity ions to take their gyro motion into account. Most of the important processes, such as multi-step ionization and Coulomb scattering, are also included in the model. The results for the prompt redeposition rate of tungsten ions agree well with the analytic results. In addition, 2D density profiles for tungsten ions of each charge state in a simple slab geometry have been calculated for given background plasma profiles typical of a detached plasma state. Although the code is still under development, these initial results show that it has potential as a useful tool, not only for the analysis of the prompt redeposition very close to the wall, but also for the analysis of larger-scale impurity transport processes.

Hyodo, I.; Hirano, M.; Miyamoto, K.; Hoshino, K.; Hatayama, A.

2003-03-01

350

NASA Astrophysics Data System (ADS)

Using the Field II program and the Monte Carlo method, we developed new design tools for medical ultrasound transducers with an automated set of imaging simulations. This simulation environment is used to create and assess the parametric specification of design factors affecting the quality of medical ultrasound imaging. In order to obtain accurate and realistic results, non-ideal transducers whose transfer functions vary either across the transducer or within the transducer are considered. These variations include, but are not limited to: center frequency, bandwidth, sensitivity, ringdown, angular response, time-of-flight, and lateral focus. By applying random numbers within the tolerance range to the variations in input parameters, the automated set of simulations is performed. First, critical input parameters for components of the transfer function are identified by sensitivity analysis. Next, the statistical range of parameter values that yield a transducer model with a certain performance level is determined, and the limit of variations in each factor for acceptable degradation of images is set. Finally, the creation of many "what if" cases to predict the yield and statistical performance of a transducer and the imaging simulation are performed based on the Monte Carlo method. [Work supported by Sound Technology Inc.]

Lee, Hotaik; Smith, Nadine B.; Kling, Terry A.

2005-04-01

351

In the present study, NIRS was applied to the nondestructive and rapid measurement of the firmness and surface color of pear. In order to improve the prediction precision and eliminate the influence of uninformative variables on model robustness, Monte Carlo uninformative variables elimination (MC-UVE) and Monte Carlo uninformative variables elimination based on wavelet transform (WT-MC-UVE) methods were proposed for variable selection in firmness and surface color NIR spectral modeling. Results show that WT-MC-UVE can reduce the modeling variables from 1451 to 210 and obtain similar prediction results for firmness. WT-MC-UVE also improved the prediction precision for surface color: the root mean square error of prediction (RMSEP) and the number of calibration variables were reduced from 1.06 and 1451 to 0.90 and 220, respectively, and the correlation coefficient (r) was improved from 0.975 to 0.981. The proposed method is able to select important wavelengths from the NIR spectra and makes the prediction more robust and accurate in quantitative analysis of firmness and surface color. PMID:21800570
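The MC-UVE idea (score each spectral variable by the stability of its regression coefficient over many randomly drawn calibration subsets) can be illustrated with a minimal numpy sketch; plain least squares stands in for the PLS models typically used in NIR calibration, and the synthetic data are not from the study.

```python
import numpy as np

def mc_uve_stability(X, y, n_runs=200, frac=0.8, seed=0):
    """MC-UVE sketch: fit a linear model on many random calibration subsets
    and score each variable by |mean(coef)| / std(coef); low-stability
    variables are candidates for elimination."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    coefs = np.empty((n_runs, p))
    for i in range(n_runs):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        coefs[i], *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    return np.abs(coefs.mean(axis=0)) / coefs.std(axis=0)

# Synthetic calibration set: two informative variables, three pure noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=120)
stability = mc_uve_stability(X, y)
```

In the WT-MC-UVE variant of the paper, the same stability score is computed on wavelet-transformed spectra before the uninformative variables are discarded.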

Hao, Yong; Sun, Xu-dong; Pan, Yuan-yuan; Gao, Rong-jie; Liu, Yan-de

2011-05-01

352

NASA Astrophysics Data System (ADS)

We propose a new variational Monte Carlo (VMC) approach based on the Krylov subspace for large-scale shell-model calculations. A random walker in the VMC is formulated with the M-scheme representation, and samples a small number of configurations from a whole Hilbert space stochastically. This VMC framework is demonstrated in the shell-model calculations of 48Cr and 60Zn, and we discuss its relation to a small number of Lanczos iterations. By utilizing the wave function obtained by the conventional particle-hole-excitation truncation as an initial state, this VMC approach provides us with a sequence of systematically improved results.

Shimizu, Noritaka; Mizusaki, Takahiro; Kaneko, Kazunari

2013-06-01

353

A set of multi-group eigenvalue (Keff) benchmark problems in three-dimensional homogenised reactor core configurations has been solved using the deterministic finite element transport theory code EVENT and the Monte Carlo code MCNP4C. The principal aim of this work is to qualify the numerical methods and algorithms implemented in EVENT. The benchmark problems were compiled and published by the Nuclear Data Agency

A. K. Ziver; M. S. Shahdatullah; M. D. Eaton; C. R. E. de Oliveira; A. P. Umpleby; C. C. Pain; A. J. H. Goddard

2005-01-01

354

For several years, Monte Carlo burnup/depletion codes have been available that couple a Monte Carlo code, which simulates the neutron transport, to a deterministic method that computes the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in this way makes it possible to track fine three-dimensional effects and to avoid the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the expensive Monte Carlo solver called at each time step. Therefore, great improvements in terms of calculation time could be expected if one could avoid the Monte Carlo transport sequences. For example, it may seem interesting to run an initial Monte Carlo simulation only once, for the first time/burnup step, and then to use the concentration perturbation capability of the Monte Carlo code for the other time/burnup steps (the subsequent burnup steps are treated as perturbations of the concentrations of the initial burnup step). This paper presents some advantages and limitations of this technique and preliminary results in terms of speed-up and figure of merit. Finally, we detail different possible calculation schemes based on this method. (authors)

Dieudonne, C.; Dumonteil, E.; Malvagi, F.; Diop, C. M. [Commissariat a l'Energie Atomique et aux Energies Alternatives CEA, Service d'Etude des Reacteurs et de Mathematiques Appliquees, DEN/DANS/DM2S/SERMA/LTSD, F91191 Gif-sur-Yvette cedex (France)]

2013-07-01

355

NASA Astrophysics Data System (ADS)

The development of a spectral domain method of moments code for the modeling of single-layer microstrip patch antennas is presented in this thesis. The mixed potential integral equation formulation of Maxwell's equations is used as the theoretical basis for the work and is solved via the method of moments. General-purpose graphics processing units are used for the computation of the impedance matrix by incorporation of quasi-Monte Carlo integration. The development of the various components of the code, including the Green's function, impedance matrix, and excitation vector modules, is discussed with individual test cases for the major code modules. The integrated code was tested by modeling a suite of four coaxially probe-fed, circularly polarized, single-layer microstrip patch antennas, and the computed results are compared to those obtained by measurement. Finally, a study examining the relationship between design parameters and S11 performance was undertaken using the code.

Cerjanic, Alexander M.

356

NASA Astrophysics Data System (ADS)

We present a valence force field (VFF)-based Monte Carlo (MC) bond-rotation method capable of identifying stable sp2-bonded carbon configurations. The VFF contains four parameters that are adjusted to fit density functional theory (DFT) calculations for both planar and non-planar model structures; the simple VFF model is shown to reliably reproduce the DFT energetics of disordered sp2-bonded carbon with various topologies and sizes. The MC bond-rotation method combined with the VFF is demonstrated to be effective in determining minimum-energy sp2-bonded carbon structures, such as topological defects and fullerenes with different sizes. The computational approach is also applied to investigate possible configurations of multi-vacancy defects (V2n, 2 <= n <= 8) and their relative stability.

Lee, Sangheon; Hwang, Gyeong S.

2011-11-01

357

Source anisotropy is a very important factor in the brachytherapy quality assurance of high-dose-rate (HDR) 192Ir afterloading stepping sources. If anisotropy is not taken into account, then doses received by a brachytherapy patient in certain directions can be in error by a clinically significant amount. Experimental measurements of anisotropy are very labour intensive. We have shown that, within acceptable limits of accuracy, Monte Carlo integration (MCI) of a modified Sievert integral (a 3D generalization) can provide the necessary data within a much shorter time scale than experiments can. Hence MCI can be used for routine quality assurance schedules whenever a new design of HDR or PDR 192Ir source is used for brachytherapy afterloading. Our MCI calculation results are compared with published experimental data and Monte Carlo simulation data for microSelectron and VariSource 192Ir sources. We have shown not only that MCI offers advantages over alternative numerical integration methods, but also that treating filtration coefficients as radial distance-dependent functions improves the Sievert integral's accuracy at low energies. This paper also provides anisotropy data for three new 192Ir sources, one for the microSelectron-HDR and two for the microSelectron-PDR, for which data are currently not available. The information we have obtained in this study can be incorporated into clinical practice. PMID:9651040
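As a rough illustration of the MCI approach, the classical (unfiltered-geometry) Sievert integral can be estimated by averaging the integrand at uniformly sampled angles; the paper's 3D generalization and distance-dependent filtration coefficients are beyond this sketch, and the numbers below are illustrative only.

```python
import math
import random

def sievert_mc(theta, mu_t, n=200000, seed=0):
    """Monte Carlo estimate of the classical Sievert integral
    S(theta) = integral_0^theta exp(-mu_t / cos(x)) dx,
    computed by averaging the integrand at uniform random angles."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = rng.uniform(0.0, theta)
        acc += math.exp(-mu_t / math.cos(x))
    return theta * acc / n

# Illustrative filtration thickness in mean free paths, angle in radians.
estimate = sievert_mc(1.0, 0.1)
```

The same averaging strategy extends directly to the 3D generalization: one samples points over the active source volume instead of a single angle.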

Baltas, D; Giannouli, S; Garbi, A; Diakonos, F; Geramani, K; Ioannidis, G T; Tsalpatouros, A; Uzunoglu, N; Kolotas, C; Zamboglou, N

1998-06-01

358

DSMC calculations for the delta wing. [Direct Simulation Monte Carlo method

NASA Technical Reports Server (NTRS)

Results are reported from three-dimensional direct simulation Monte Carlo (DSMC) computations, using a variable-hard-sphere molecular model, of hypersonic flow on a delta wing. The body-fitted grid is made up of deformed hexahedral cells divided into six tetrahedral subcells with well defined triangular faces; the simulation is carried out for 9000 time steps using 150,000 molecules. The uniform freestream conditions include M = 20.2, T = 13.32 K, rho = 0.00001729 kg/cu m, and T(wall) = 620 K, corresponding to lambda = 0.00153 m and Re = 14,000. The results are presented in graphs and briefly discussed. It is found that, as the flow expands supersonically around the leading edge, an attached leeside flow develops around the wing, and the near-surface density distribution has a maximum downstream from the stagnation point. Coefficients calculated include C(H) = 0.067, C(DP) = 0.178, C(DF) = 0.110, C(L) = 0.714, and C(D) = 1.089. The calculations required 56 h of CPU time on the NASA Langley Voyager CRAY-2 supercomputer.

Celenligil, M. Cevdet; Moss, James N.

1990-01-01

359

Atmospheric correction of Earth-observation remote sensing images by Monte Carlo method

NASA Astrophysics Data System (ADS)

In Earth observation, atmospheric particles severely contaminate, through absorption and scattering, the electromagnetic signal reflected from the Earth's surface. It would be greatly beneficial for land surface characterization if we could remove these atmospheric effects from imagery and retrieve the surface reflectance that characterizes the surface properties; this is the purpose of atmospheric correction. Given the geometric parameters of the studied image and an assessment of the parameters describing the state of the atmosphere, it is possible to evaluate the atmospheric reflectance and the upward and downward transmittances that contribute to the garbling of the data obtained from the image. To that end, an atmospheric correction algorithm for high-spectral-resolution data over land surfaces has been developed. It is designed to obtain the main atmospheric parameters needed in the image correction and the interpretation of optical observations. It also estimates the optical characteristics of Earth-observation imagery (LANDSAT and SPOT). The physics underlying the problem of solar radiation propagation, which takes into account multiple scattering and the sphericity of the atmosphere, has been treated using Monte Carlo techniques.

Hadjit, Hanane; Oukebdane, Abdelaziz; Belbachir, Ahmad Hafid

2013-10-01

360

NASA Astrophysics Data System (ADS)

Interface roughness strongly influences the performance of germanium metal-oxide-semiconductor field effect transistors (MOSFETs). In this paper, a 2D full-band Monte Carlo simulator is used to study the impact of interface roughness scattering on electron and hole transport properties in the inversion layers of long- and short-channel Ge MOSFETs. The carrier effective mobility in the channel of Ge MOSFETs and the non-equilibrium transport properties are investigated. Results show that both electron and hole mobility are strongly influenced by interface roughness scattering. The output curves for 50 nm channel-length double-gate n- and p-type Ge MOSFETs show that their drive currents are significantly improved compared with those of Si n- and p-MOSFETs with a smooth interface between the channel and the gate dielectric. Drive current enhancements of 82% and 96% are obtained for the n- and p-MOSFETs with a completely smooth interface. However, the enhancement decreases sharply as the interface roughness increases. With a very rough interface, the drive currents of Ge MOSFETs are even lower than those of Si MOSFETs. Moreover, significant velocity overshoot has also been found in Ge MOSFETs.

Du, Gang; Liu, Xiao-Yan; Xia, Zhi-Liang; Yang, Jing-Feng; Han, Ru-Qi

2010-05-01

361

Efficient 3D kinetic Monte Carlo method for modeling of molecular structure and dynamics.

Self-assembly of molecular systems is an important and general problem that intertwines physics, chemistry, biology, and materials science. Through understanding of the physical principles of self-organization, it often becomes feasible to control the process and to obtain complex structures with tailored properties, for example, bacterial cell colonies or nanodevices with desired properties. Theoretical studies and simulations provide an important tool for unraveling the principles of self-organization and, therefore, have recently gained increasing interest. The present article features an extension of the popular code MBN Explorer (MesoBioNano Explorer) aiming to provide a universal approach to studying self-assembly phenomena in biology and nanoscience. In particular, this extension involves a highly parallelized module of MBN Explorer that allows simulating stochastic processes using the kinetic Monte Carlo approach in three-dimensional space. We describe the computational side of the developed code, discuss its efficiency, and apply it to study an exemplary system. © 2014 Wiley Periodicals, Inc. PMID:24752427
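The kinetic Monte Carlo approach mentioned above can be illustrated by the standard residence-time (Gillespie / n-fold way) step: pick an event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time. This is a generic sketch, not MBN Explorer's implementation, and the two event rates below are hypothetical.

```python
import math
import random

def kmc_step(rates, rng):
    """One kinetic Monte Carlo (residence-time) step: select event i with
    probability rates[i]/sum(rates) and draw an exponential time increment."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    chosen = len(rates) - 1          # fallback for r at the upper edge
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total
    return chosen, dt

# Hypothetical two-event system: adsorption (rate 1.0) vs. diffusion (rate 3.0).
rng = random.Random(42)
counts = [0, 0]
t = 0.0
for _ in range(20000):
    event, dt = kmc_step([1.0, 3.0], rng)
    counts[event] += 1
    t += dt
```

A production 3D code replaces the linear rate search with tree or binning structures so that each step costs far less than O(number of events).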

Panshenskov, Mikhail; Solov'yov, Ilia A; Solov'yov, Andrey V

2014-06-30

362

Based on digital image analysis and the inverse Monte Carlo method, a proximate analysis method is developed, and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominant type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)

Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V [Department of Optics and Biomedical Physics, N.G.Chernyshevskii Saratov State University (Russian Federation)

2006-12-31

363

NASA Astrophysics Data System (ADS)

We analyzed light intensity distributions in a subwavelength fluorescent film excited by a focused electron beam. We have developed an analysis method using Monte Carlo simulation and the finite-difference time-domain (FDTD) method. Electron scattering and trajectories were calculated by Monte Carlo simulation. Propagation and scattering of the light excited by the electrons were calculated by the FDTD method. A nanometric light spot was formed on the fluorescent film surface, and its light intensity and full width at half maximum (FWHM) were evaluated. We discuss the dependence of the intensity and the FWHM on the thickness of the fluorescent thin film and the acceleration voltage of the incident electron beam.

Inami, Wataru; Fujiwara, Jun; Fukuta, Masahiro; Ono, Atsushi; Kawata, Yoshimasa

2012-10-01

364

TRIPOLI-4.3 Monte Carlo transport code has been used to evaluate the QUADOS (Quality Assurance of Computational Tools for Dosimetry) problem P4, neutron and photon response of an albedo-type thermoluminescence personal dosemeter (TLD) located on an ISO slab phantom. Two enriched 6LiF and two 7LiF TLD chips were used and they were protected, in front or behind, with a boron-loaded dosemeter-holder. Neutron response of the four chips was determined by counting 6Li(n,t)4He events using ENDF/B-VI.4 library and photon response by estimating absorbed dose (MeV g(-1)). Ten neutron energies from thermal to 20 MeV and six photon energies from 33 keV to 1.25 MeV were used to study the energy dependence. The fraction of the neutron and photon response owing to phantom backscatter has also been investigated. Detailed TRIPOLI-4.3 solutions are presented and compared with MCNP-4C calculations. PMID:16381740

Lee, Y K

2005-01-01

365

NASA Astrophysics Data System (ADS)

Spectra of calibration sources and X-ray beams were measured with a cadmium telluride (CdTe) detector. The response function of the detector was simulated using the GEANT4 Monte Carlo toolkit. Trapping of charge carriers was taken into account using the Hecht equation in the active zone of the CdTe crystal, combined with a continuous function to reproduce the drop in charge collection efficiency near the metallic contacts and borders. The rise-time discrimination is approximated by a cut on the depth of the interaction relative to the cathode and corrections that depend on the pulse amplitude. The least-squares method with truncation was employed to unfold X-ray spectra typically used in medical diagnostics, and the results were compared with reference data.

Moralles, M.; Bonifácio, D. A. B.; Bottaro, M.; Pereira, M. A. G.

2007-09-01

366

The phase transition of a single flexible homopolymer chain in the limiting condition of dilute solution is systematically investigated using a coarse-grained model. The replica exchange Monte Carlo method is used to enhance the exploration of conformation space, and thus a detailed investigation of the phase behavior of the system can be provided. With the designed potentials, the coil-globule transition and the liquid-solid-like transition are identified, and the transition temperatures are measured with conformational and thermodynamic analyses. Additionally, by extrapolating the coil-globule transition temperature, Tθ, and the liquid-solid-like transition temperature to the thermodynamic limit, N → ∞, we found no "tri-critical" point in the current model. PMID:24961896
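Replica exchange Monte Carlo enhances sampling by occasionally attempting to swap configurations between replicas held at different temperatures; a swap between inverse temperatures beta_i and beta_j with energies E_i and E_j is accepted with probability min(1, exp((beta_i - beta_j)(E_i - E_j))). A minimal sketch of that criterion (not the paper's coarse-grained polymer model):

```python
import math
import random

def swap_accepted(E_i, E_j, beta_i, beta_j, rng):
    """Parallel-tempering swap criterion: exchange replicas at inverse
    temperatures beta_i, beta_j with probability
    min(1, exp((beta_i - beta_j) * (E_i - E_j)))."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0.0 or rng.random() < math.exp(delta)

rng = random.Random(7)
# A favourable pair (delta = +0.5) is always accepted ...
always = swap_accepted(1.0, 2.0, 0.5, 1.0, rng)
# ... while an unfavourable pair (delta = -0.5) is accepted ~exp(-0.5) of the time.
freq = sum(swap_accepted(2.0, 1.0, 0.5, 1.0, rng) for _ in range(20000)) / 20000
```

Because hot replicas cross free-energy barriers easily and then exchange down the temperature ladder, cold replicas escape the traps that make single-temperature sampling of coil-globule and freezing transitions slow.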

Wang, Lei; Li, Ningning; Xiao, Shiyan; Liang, Haojun

2014-07-01

367

NASA Astrophysics Data System (ADS)

We revisit the accuracy of the variational Monte Carlo (VMC) method by taking as an example the ground state properties of the one-dimensional Hubbard model. We start from the variational wave functions with the Gutzwiller and long-range Jastrow factors introduced by Capello et al. [Phys. Rev. B 72, 085121 (2005)] and further improve them by considering several quantum-number projections and a generalized one-body wave function. We find that the quantum spin projection and total momentum projection greatly improve the accuracy of the ground state energy, to within 0.5% error, for both small and large systems at half filling. Besides, the momentum distribution function n(k) at quarter filling, calculated for up to 196 sites, allows us to directly estimate the critical exponents of the charge correlations from the power-law behavior of n(k) near the Fermi wave vector. The estimated critical exponents reproduce well those predicted by the Tomonaga-Luttinger theory.

Kaneko, Ryui; Morita, Satoshi; Imada, Masatoshi

2013-08-01

368

NASA Astrophysics Data System (ADS)

A two-dimensional multiple-histogram method for the isothermal-isobaric ensemble is discussed in detail, implemented for isothermal-isobaric Monte Carlo simulations of molecular clusters, and employed in a case study on phase changes in pure water clusters containing 15 through 21 water molecules. Full phase diagrams of these clusters are reported in the temperature-pressure plane over a broad range of temperatures (T = 30-800 K) and pressures (P = 10^3-10^9 Pa). The main focus of the work is on the structural transformation occurring in the solid phase of these clusters, leading from cluster structures with all molecules on the cluster surface to cage-like structures with one molecule inside, and on how the transformation is influenced by increased pressure and temperature.
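The isothermal-isobaric (NPT) Monte Carlo sampling that feeds such a multiple-histogram analysis uses the standard volume-move acceptance rule min(1, exp(-beta*(dU + P*dV) + N*ln(V_new/V_old))). A minimal sketch of that rule follows, with toy numbers rather than the water-cluster model:

```python
import math
import random

def npt_volume_accept(dU, pressure, V_old, V_new, N, beta, rng):
    """NPT Metropolis rule for a trial volume change: accept with probability
    min(1, exp(-beta*(dU + pressure*(V_new - V_old)) + N*log(V_new/V_old)))."""
    arg = -beta * (dU + pressure * (V_new - V_old)) + N * math.log(V_new / V_old)
    return arg >= 0.0 or rng.random() < math.exp(arg)

rng = random.Random(3)
# An energy-lowering move at unchanged volume is always accepted.
downhill = npt_volume_accept(-1.0, 1.0, 1.0, 1.0, 10, 1.0, rng)
# A strongly uphill move (exponent argument = -5) is accepted ~exp(-5) of the time.
freq = sum(npt_volume_accept(5.0, 1.0, 1.0, 1.0, 10, 1.0, rng)
           for _ in range(20000)) / 20000
```

Histograms of (U, V) collected from such runs at several (T, P) state points are what the two-dimensional multiple-histogram reweighting combines into a continuous phase diagram.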

Vítek, Aleš; Kalus, René

2014-06-01

369

NASA Astrophysics Data System (ADS)

The direct simulation Monte Carlo method is used to study the gas slider bearing problem in a magnetic recording storage system. The flow field of a micro-channel between an inclined slider and a rotating disk is calculated under a variety of slider postures and radial positions relative to the spinning disk. The effects of the windward angle of the flying slider and the lower plate velocity of the channel on two-dimensional pressure distributions and velocity profiles are presented. The relationship of load capacity and the windward angle, and the position of the resultant force of the loading capacity, are also obtained, which can be used to optimize the design of a floating head mechanism that is composed of the slider bearing and its suspension system. The slip velocities and the shear stress are further investigated in order to determine the near-wall feature of micro-channel flow in the transition regime.

Liu, Ningyu; Yin-Kwee Ng, Eddie

2001-09-01

370

NASA Astrophysics Data System (ADS)

The author of this paper recently proposed a Monte Carlo calculation algorithm to solve a complex transport equation with complex-valued weights. The algorithm enables one to generate neutron leakage-corrected group constants and anisotropic diffusion coefficients for a unit fuel pin cell or assembly. The group constants are subsequently used for multi-group deterministic core calculations. The technique, however, had some limitations when applied to general problems. Some improvements have been made in this paper. The reflective boundary condition has newly become available. It has been found that a cumbersome weight cancellation of fission sources with positive and negative weights can be omitted in general fuel assembly geometries. A homogenization method of diffusion coefficients for a fuel assembly has been proposed.

Yamamoto, Toshihiro

2014-06-01

371

NASA Astrophysics Data System (ADS)

Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review on recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited for the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential to fit MaxEnt spatio-temporal models to large neural ensembles.

Nasser, Hassan; Marre, Olivier; Cessac, Bruno

2013-03-01

372

In the first part, an accurate and fast computational method is presented as an alternative to the Monte Carlo or deterministic transport theory codes currently used to determine the subcriticality of spent fuel storage lattices. The method is capable of analyzing storage configurations with simple or complex lattice cell geometry. It is developed based on two-group nodal diffusion theory, with

Germina Ilas

2002-01-01

373

The interatomic distance distribution, P(r), is a valuable tool for evaluating the structure of a molecule in solution and represents the maximum structural information that can be derived from solution scattering data without further assumptions. Most current instrumentation for scattering experiments (typically CCD detectors) generates a finely pixelated two-dimensional image. In continuation of the standard practice with earlier one-dimensional detectors, these images are typically reduced to a one-dimensional profile of scattering intensities, I(q), by circular averaging of the two-dimensional image. Indirect Fourier transformation methods are then used to reconstruct P(r) from I(q). Substantial advantages in data analysis, however, could be achieved by directly estimating the P(r) curve from the two-dimensional images. This article describes a Bayesian framework, using a Markov chain Monte Carlo method, for estimating the parameters of the indirect transform, and thus P(r), directly from the two-dimensional images. Using simulated detector images, it is demonstrated that this method yields P(r) curves nearly identical to the reference P(r). Furthermore, an approach for evaluating spatially correlated errors (such as those that arise from a detector point spread function) is evaluated. Accounting for these errors further improves the precision of the P(r) estimation. Experimental scattering data, where no ground truth reference P(r) is available, are used to demonstrate that this method yields a scattering and detector model that more closely reflects the two-dimensional data, as judged by smaller residuals in cross-validation, than P(r) obtained by indirect transformation of a one-dimensional profile. 
Finally, the method allows concurrent estimation of the beam center and Dmax, the longest interatomic distance in P(r), as part of the Bayesian Markov chain Monte Carlo method, reducing experimental effort and providing a well-defined protocol for these parameters while also allowing estimation of the covariance among all parameters. This method provides parameter estimates of greater precision from the experimental data. The observed improvement in precision for the traditionally problematic Dmax is particularly noticeable.
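The Markov chain Monte Carlo estimation described above can be sketched with a generic random-walk Metropolis sampler; the one-dimensional Gaussian target below is a toy stand-in for the paper's posterior over indirect-transform parameters such as the beam center and Dmax.

```python
import math
import random

def metropolis(log_post, x0, step, n, seed=0):
    """Random-walk Metropolis: Gaussian proposal, accept with
    probability min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if lpp >= lp or rng.random() < math.exp(lpp - lp):
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy posterior: Gaussian with mean 2.0 and unit variance (illustration only).
chain = metropolis(lambda x: -0.5 * (x - 2.0) ** 2, 0.0, 1.0, 50000)
samples = chain[1000:]                      # discard burn-in
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The chain's empirical moments directly supply the parameter estimates and the covariance among parameters that the abstract highlights; the real application replaces the toy log-posterior with the likelihood of the 2D detector image.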

Paul, Sudeshna; Friedman, Alan M.; Bailey-Kellogg, Chris; Craig, Bruce A.

2013-01-01

374

The interatomic distance distribution, P(r), is a valuable tool for evaluating the structure of a molecule in solution and represents the maximum structural information that can be derived from solution scattering data without further assumptions. Most current instrumentation for scattering experiments (typically CCD detectors) generates a finely pixelated two-dimensional image. In continuation of the standard practice with earlier one-dimensional detectors, these images are typically reduced to a one-dimensional profile of scattering intensities, I(q), by circular averaging of the two-dimensional image. Indirect Fourier transformation methods are then used to reconstruct P(r) from I(q). Substantial advantages in data analysis, however, could be achieved by directly estimating the P(r) curve from the two-dimensional images. This article describes a Bayesian framework, using a Markov chain Monte Carlo method, for estimating the parameters of the indirect transform, and thus P(r), directly from the two-dimensional images. Using simulated detector images, it is demonstrated that this method yields P(r) curves nearly identical to the reference P(r). Furthermore, an approach for evaluating spatially correlated errors (such as those that arise from a detector point spread function) is evaluated. Accounting for these errors further improves the precision of the P(r) estimation. Experimental scattering data, where no ground truth reference P(r) is available, are used to demonstrate that this method yields a scattering and detector model that more closely reflects the two-dimensional data, as judged by smaller residuals in cross-validation, than P(r) obtained by indirect transformation of a one-dimensional profile. 
Finally, the method allows concurrent estimation of the beam center and Dmax, the longest interatomic distance in P(r), as part of the Bayesian Markov chain Monte Carlo method, reducing experimental effort and providing a well-defined protocol for these parameters while also allowing estimation of the covariance among all parameters. This method provides parameter estimates of greater precision from the experimental data. The observed improvement in precision for the traditionally problematic Dmax is particularly noticeable. PMID:23596342

Paul, Sudeshna; Friedman, Alan M; Bailey-Kellogg, Chris; Craig, Bruce A

2013-04-01

375

NASA Astrophysics Data System (ADS)

Damage curves are the most significant component of flood loss estimation models. Their development is quite complex. Two types of damage curves exist: historical and synthetic curves. Historical curves are developed from historical loss data from actual flood events. However, due to the scarcity of historical data, synthetic damage curves can be developed as an alternative. Synthetic curves rely on the analysis of expected damage under certain hypothetical flooding conditions. A synthetic approach was developed and presented in this work for the development of damage curves, which are subsequently used as the basic input to a flood loss estimation model. A questionnaire-based survey took place among practicing and research agronomists, in order to generate rural loss data based on the respondents' loss estimates for several flood condition scenarios. In addition, a similar questionnaire-based survey took place among building experts, i.e. civil engineers and architects, in order to generate loss data for the urban sector. By answering the questionnaire, the experts were in essence expressing their opinion on how damage to various crop types or building types is related to a range of values of flood inundation parameters, such as floodwater depth and velocity. However, the loss data compiled from the completed questionnaires were not sufficient for the construction of workable damage curves; to overcome this problem, a Weighted Monte Carlo method was implemented, in order to generate extra synthetic datasets with statistical properties identical to those of the questionnaire-based data. The data generated by the Weighted Monte Carlo method were processed via Logistic Regression techniques in order to develop accurate logistic damage curves for the rural and the urban sectors. A Python-based code was developed, which combines the Weighted Monte Carlo method and the Logistic Regression analysis into a single code (WMCLR Python code). 
Each WMCLR code execution provided a flow velocity-depth damage curve for a specific land use. More specifically, each WMCLR code execution for the agricultural sector generated a damage curve for a specific crop and for every month of the year, thus relating the damage to any crop with floodwater depth, flow velocity and the growth phase of the crop at the time of flooding. Respectively, each WMCLR code execution for the urban sector developed a damage curve for a specific building type, relating structural damage with floodwater depth and velocity. Furthermore, two techno-economic models were developed in Python programming language, in order to estimate monetary values of flood damages to the rural and the urban sector, respectively. A new Monte Carlo simulation was performed, consisting of multiple executions of the techno-economic code, which generated multiple damage cost estimates. Each execution used the proper WMCLR simulated damage curve. The uncertainty analysis of the damage estimates established the accuracy and reliability of the proposed methodology for the synthetic damage curves' development.
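The Weighted Monte Carlo plus Logistic Regression (WMCLR) pipeline can be caricatured in a few lines: resample synthetic observations in proportion to their weights, then fit a logistic damage curve by maximum likelihood. The depths, weights, and damage threshold below are invented for illustration, and the gradient-ascent fit is a minimal stand-in for the paper's WMCLR code.

```python
import numpy as np

def weighted_mc_samples(values, weights, n, seed=0):
    """Draw synthetic observations with probability proportional to
    expert-assigned weights (the 'Weighted Monte Carlo' resampling step)."""
    rng = np.random.default_rng(seed)
    p = np.asarray(weights, dtype=float)
    idx = rng.choice(len(values), size=n, p=p / p.sum())
    return np.asarray(values, dtype=float)[idx]

def fit_logistic(x, y, lr=0.1, n_iter=3000):
    """Fit damage probability = sigmoid(a + b*x) by gradient ascent on the
    Bernoulli log-likelihood (minimal stand-in for a logistic regression)."""
    a = b = 0.0
    for _ in range(n_iter):
        z = 1.0 / (1.0 + np.exp(-(a + b * x)))
        a += lr * np.mean(y - z)
        b += lr * np.mean((y - z) * x)
    return a, b

# Invented elicited floodwater depths (m) and questionnaire weights.
depths = [0.2, 0.5, 1.0, 1.5, 2.0, 2.5]
weights = [1.0, 2.0, 2.0, 2.0, 2.0, 1.0]
x = weighted_mc_samples(depths, weights, 400)
y = (x >= 1.25).astype(float)      # toy rule: damage occurs above ~1.25 m
a, b = fit_logistic(x, y)
p_shallow = 1.0 / (1.0 + np.exp(-(a + b * 0.2)))
p_deep = 1.0 / (1.0 + np.exp(-(a + b * 2.5)))
```

The paper's curves additionally condition on flow velocity and, for crops, the month of flooding; those enter as extra covariates in the same logistic fit.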

Vozinaki, Anthi Eirini K.; Karatzas, George P.; Sibetheros, Ioannis A.; Varouchakis, Emmanouil A.

2014-05-01

376

Noninvasive diagnosis in medicine has attracted considerable attention in recent years. Several methods are already available for imaging biological tissue, such as X-ray computerized tomography, magnetic resonance imaging and ultrasound imaging, etc. But each of these methods has its own disadvantages. Optical tomography, which uses NIR light, is one of the emerging methods in the field of medical imaging

Ashwani Aggarwal; Ram M. Vasu

2003-01-01

377

NASA Technical Reports Server (NTRS)

The 2013-2022 Decadal Survey for planetary exploration has identified probe missions to Uranus and Saturn as high priorities. This work endeavors to examine the uncertainty in determining aeroheating in such entry environments. Representative entry trajectories are constructed using the TRAJ software. Flowfields at selected points on the trajectories are then computed using the Data Parallel Line Relaxation (DPLR) computational fluid dynamics code. A Monte Carlo study is performed on the DPLR input parameters to determine the uncertainty in the predicted aeroheating, and correlation coefficients are examined to identify which input parameters have the most influence on the uncertainty. A review of the present best practices for input parameters (e.g. transport coefficients and vibrational relaxation times) is also conducted. It is found that the 2σ uncertainty for heating on Uranus entry is no more than 2.1%, assuming an equilibrium catalytic wall, with the uncertainty being determined primarily by diffusion and the H2 recombination rate within the boundary layer. However, if the wall is assumed to be partially or non-catalytic, this uncertainty may increase to as much as 18%. The catalytic wall model can contribute over a 3x change in heat flux and a 20% variation in film coefficient. Therefore, coupled material response/fluid dynamic models are recommended for this problem. It was also found that much of this variability is artificially suppressed when a constant Schmidt number approach is implemented. Because the boundary layer is reacting, it is necessary to employ self-consistent effective binary diffusion to obtain a correct thermal transport solution. For Saturn entries, the 2σ uncertainty for convective heating was less than 3.7%. The major uncertainty driver depended on shock temperature/velocity, changing from boundary-layer thermal conductivity to diffusivity and then to the shock-layer ionization rate as velocity increases. 
While radiative heating for Uranus entry was negligible, the nominal solution for Saturn computed up to 20% radiative heating at the highest velocity examined. The radiative heating followed a non-normal distribution, with up to a 3x variation in magnitude. This uncertainty is driven by the H(sub 2) dissociation rate, as H(sub 2) that persists in the hot non-equilibrium zone contributes significantly to radiation.
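The Monte Carlo uncertainty study described above can be sketched in a few lines: sample the uncertain inputs, push them through the model, and read off the 2-sigma spread and the input-output correlation coefficients. The surrogate heat-flux model and the input distributions below are purely illustrative stand-ins, not the DPLR parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Hypothetical input uncertainties (illustrative, not the DPLR values):
# multipliers on a diffusion coefficient and an H2 recombination rate.
diff_mult = rng.normal(1.0, 0.05, n)      # 5% 1-sigma on diffusion
rate_mult = rng.lognormal(0.0, 0.10, n)   # ~10% 1-sigma on the rate

# Toy surrogate for convective heat flux, sensitive to both inputs.
q = 100.0 * diff_mult**0.8 * rate_mult**0.3  # arbitrary units

rel_2sigma = 2.0 * q.std() / q.mean()
corr_diff = np.corrcoef(diff_mult, q)[0, 1]
corr_rate = np.corrcoef(rate_mult, q)[0, 1]
print(f"2-sigma uncertainty: {100 * rel_2sigma:.1f}%")
print(f"correlation with diffusion: {corr_diff:.2f}, with rate: {corr_rate:.2f}")
```

Ranking the inputs by their correlation with the output is exactly how the dominant uncertainty drivers (diffusion, recombination rate) are identified in a study like this.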

Palmer, Grant; Prabhu, Dinesh; Cruden, Brett A.

2013-01-01

378

The purpose of this work was to extend the verification of Monte Carlo based methods for estimating radiation dose in computed tomography (CT) exams beyond a single CT scanner to a multidetector CT (MDCT) scanner, and from cylindrical CTDI phantom measurements to both cylindrical and physical anthropomorphic phantoms. Both cylindrical and physical anthropomorphic phantoms were scanned on an MDCT under the specified conditions. A pencil ionization chamber was used to record exposure for the cylindrical phantom, while MOSFET (metal oxide semiconductor field effect transistor) detectors were used to record exposure at the surface of the anthropomorphic phantom. Reference measurements were made in air at isocentre using the pencil ionization chamber under the specified conditions. Detailed Monte Carlo models were developed for the MDCT scanner to describe the x-ray source (spectra, bowtie filter, etc) and geometry factors (distance from focal spot to isocentre, source movement due to axial or helical scanning, etc). Models for the cylindrical (CTDI) phantoms were available from the previous work. For the anthropomorphic phantom, CT image data were used to create a detailed voxelized model of the phantom's geometry. Anthropomorphic phantom material compositions were provided by the manufacturer. A simulation of the physical scan was performed using the mathematical models of the scanner, phantom and specified scan parameters. Tallies were recorded at specific voxel locations corresponding to the MOSFET physical measurements. Simulations of air scans were performed to obtain normalization factors to convert results to absolute dose values. For the CTDI body (32 cm) phantom, measurements and simulation results agreed to within 3.5% across all conditions. For the anthropomorphic phantom, measured surface dose values from a contiguous axial scan showed significant variation and ranged from 8 mGy/100 mAs to 16 mGy/100 mAs. 
Results from helical scans of overlapping pitch (0.9375) and extended pitch (1.375) were also obtained. Comparisons between the MOSFET measurements and the absolute dose value derived from the Monte Carlo simulations demonstrate agreement in terms of absolute dose values as well as the spatially varying characteristics. This work demonstrates the ability to extend models from a single detector scanner using cylindrical phantoms to an MDCT scanner using both cylindrical and anthropomorphic phantoms. Future work will be extended to voxelized patient models of different sizes and to other MDCT scanners. PMID:16177525
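The air-scan normalization step described above amounts to a simple ratio that converts a Monte Carlo tally (dose per source particle) into absolute dose via a measured in-air reference. The numbers below are hypothetical, chosen only to show the unit conversion.

```python
# Hypothetical values (not the paper's): measured in-air reference and
# Monte Carlo tallies in dose per simulated source particle.
measured_air_dose = 20.0        # mGy/100 mAs at isocentre, measured
simulated_air_tally = 4.0e-15   # MC in-air tally, dose per source particle
simulated_voxel_tally = 2.4e-15 # MC tally at a MOSFET voxel location

# Normalization factor maps "per source particle" to "per 100 mAs".
norm = measured_air_dose / simulated_air_tally
absolute_voxel_dose = simulated_voxel_tally * norm
print(f"{absolute_voxel_dose:.1f} mGy/100 mAs")  # 12.0
```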

DeMarco, J J; Cagnon, C H; Cody, D D; Stevens, D M; McCollough, C H; O'Daniel, J; McNitt-Gray, M F

2005-09-01

380

NASA Astrophysics Data System (ADS)

In performing a Bayesian analysis of astronomical data, two difficult problems often emerge. First, in estimating the parameters of some model for the data, the resulting posterior distribution may be multimodal or exhibit pronounced (curving) degeneracies, which can cause problems for traditional Markov Chain Monte Carlo (MCMC) sampling methods. Second, in selecting between a set of competing models, calculation of the Bayesian evidence for each model is computationally expensive using existing methods such as thermodynamic integration. The nested sampling method introduced by Skilling has greatly reduced the computational expense of calculating evidence and also produces posterior inferences as a by-product. This method has been applied successfully in cosmological applications by Mukherjee, Parkinson & Liddle, but their implementation was efficient only for unimodal distributions without pronounced degeneracies. Shaw, Bridges & Hobson recently introduced a clustered nested sampling method which is significantly more efficient in sampling from multimodal posteriors and also determines the expectation and variance of the final evidence from a single run of the algorithm, hence providing a further increase in efficiency. In this paper, we build on the work of Shaw et al. and present three new methods for sampling and evidence evaluation from distributions that may contain multiple modes and significant degeneracies in very high dimensions; we also present an even more efficient technique for estimating the uncertainty on the evaluated evidence. These methods lead to a further substantial improvement in sampling efficiency and robustness, and are applied to two toy problems to demonstrate the accuracy and economy of the evidence calculation and parameter estimation. Finally, we discuss the use of these methods in performing Bayesian object detection in astronomical data sets, and show that they significantly outperform existing MCMC techniques.
An implementation of our methods will be publicly released shortly.
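As a rough illustration of the nested sampling idea the paper builds on, a minimal single-mode sketch can estimate the evidence: a uniform prior on [-5, 5] with a standard-normal likelihood, so the analytic evidence is about 0.1. The constrained prior draws use naive rejection sampling, which only works in this toy setting; real implementations replace it with clustered or ellipsoidal sampling as discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: uniform prior on [-5, 5], standard-normal likelihood.
# Analytic evidence Z = (1/10) * integral of N(0,1) ~= 0.1.
def loglike(x):
    return -0.5 * x * x - 0.5 * np.log(2.0 * np.pi)

n_live, n_iter = 500, 2500
live = rng.uniform(-5.0, 5.0, n_live)
live_logl = loglike(live)

z, x_prev = 0.0, 1.0  # accumulated evidence; remaining prior volume
for i in range(1, n_iter + 1):
    worst = int(np.argmin(live_logl))
    x_i = np.exp(-i / n_live)  # expected geometric prior-volume shrinkage
    z += np.exp(live_logl[worst]) * (x_prev - x_i)
    x_prev = x_i
    # Replace the worst point with a prior draw above the likelihood floor
    # (simple rejection sampling; real codes use smarter constrained draws).
    floor = live_logl[worst]
    while True:
        cand = rng.uniform(-5.0, 5.0)
        if loglike(cand) > floor:
            live[worst], live_logl[worst] = cand, loglike(cand)
            break

z += np.exp(live_logl).mean() * x_prev  # leftover live-point contribution
print(f"evidence ~= {z:.4f} (analytic ~0.1)")
```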

Feroz, F.; Hobson, M. P.

2008-02-01

381

A voxel-based mouse for internal dose calculations using Monte Carlo simulations (MCNP)

NASA Astrophysics Data System (ADS)

Murine models are useful for targeted radiotherapy pre-clinical experiments. These models can help to assess the potential interest of new radiopharmaceuticals. In this study, we developed a voxel-based mouse for dosimetric estimates. A female nude mouse (30 g) was frozen and cut into slices. High-resolution digital photographs were taken directly on the frozen block after each section. Images were segmented manually. Monoenergetic photon or electron sources were simulated using the MCNP4c2 Monte Carlo code for each source organ, in order to give tables of S-factors (in Gy Bq-1 s-1) for all target organs. Results obtained from monoenergetic particles were then used to generate S-factors for several radionuclides of potential interest in targeted radiotherapy. Thirteen source and 25 target regions were considered in this study. For each source region, 16 photon and 16 electron energies were simulated. Absorbed fractions, specific absorbed fractions and S-factors were calculated for 16 radionuclides of interest for targeted radiotherapy. The results obtained generally agree well with data published previously. For electron energies ranging from 0.1 to 2.5 MeV, the self-absorbed fraction varies from 0.98 to 0.376 for the liver, and from 0.89 to 0.04 for the thyroid. Electrons cannot be considered as 'non-penetrating' radiation for energies above 0.5 MeV for mouse organs. This observation can be generalized to radionuclides: for example, the beta self-absorbed fraction for the thyroid was 0.616 for I-131; absorbed fractions for Y-90 for left kidney-to-left kidney and for left kidney-to-spleen were 0.486 and 0.058, respectively. Our voxel-based mouse allowed us to generate a dosimetric database for use in preclinical targeted radiotherapy experiments.
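The S-factor tabulation described above follows the MIRD formalism, S = sum_i E_i y_i phi_i / m_target. A sketch using the thyroid beta self-absorbed fraction quoted in the abstract, together with an assumed mean beta energy and organ mass (the latter two are illustrative, not the paper's values):

```python
# MIRD-style S-factor (Gy Bq-1 s-1) from an absorbed fraction.
MEV_TO_J = 1.602e-13

# One beta-like emission per decay; ~0.192 MeV is roughly the mean
# beta energy of I-131 (assumed here for illustration).
emissions = [(0.192, 1.0)]      # (mean energy in MeV, yield per decay)
absorbed_fraction = 0.616       # thyroid beta self-absorbed fraction (from text)
target_mass_kg = 3.0e-6         # mouse thyroid mass, assumed ~3 mg

s_factor = sum(e * y * MEV_TO_J * absorbed_fraction
               for e, y in emissions) / target_mass_kg
print(f"S(thyroid<-thyroid) ~= {s_factor:.2e} Gy/(Bq s)")
```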

Bitar, A.; Lisbona, A.; Thedrez, P.; Sai Maurel, C.; LeForestier, D.; Barbet, J.; Bardies, M.

2007-02-01

382

NASA Astrophysics Data System (ADS)

An accurate knowledge of the photon spectra emitted by X-ray tubes in diagnostic radiology is essential to better estimate the dose imparted to patients and to improve the image quality obtained with these devices. In this work, we propose using a flat panel detector together with a PMMA wedge to estimate the actual X-ray spectrum using the Monte Carlo method and unfolding techniques. The MCNP5 code has been used to model different flat panels (based on indirect and direct methods of producing charge carriers from absorbed X-rays) and to obtain the dose curves and system response functions. Most current flat panel devices use scintillator materials that present K-edge discontinuities in the mass energy-absorption coefficient, which strongly affect the response matrix. In this paper, the applicability of different flat panels for reconstructing X-ray spectra is studied. The effect of the mass energy-absorption coefficient of the scintillator material on the response matrix and, consequently, on the reconstructed spectra has been studied. Different unfolding methods are tested to reconstruct the actual X-ray spectrum knowing the dose curve and the response function. It is concluded that the MTSVD regularization method is appropriate for unfolding X-ray spectra for all the scintillators studied.
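A minimal sketch of the unfolding step, using plain truncated SVD rather than the modified TSVD (MTSVD) variant the abstract recommends; the response matrix, true spectrum, and noise level are toy stand-ins for the simulated flat-panel response functions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40

# Toy response matrix standing in for the flat-panel response functions:
# each dose reading integrates the spectrum through a smooth kernel.
E = np.linspace(20.0, 100.0, n)  # energy bins in keV (illustrative)
R = np.exp(-0.5 * ((E[:, None] - E[None, :]) / 10.0) ** 2)

true_spec = np.exp(-0.5 * ((E - 60.0) / 10.0) ** 2)  # toy X-ray spectrum
dose = R @ true_spec + rng.normal(0.0, 1e-4, n)      # noisy dose curve

# Truncated SVD regularization: keep only the k largest singular values
# when inverting the ill-conditioned response.
U, s, Vt = np.linalg.svd(R)
k = 8
recon = Vt[:k].T @ (np.diag(1.0 / s[:k]) @ (U[:, :k].T @ dose))

err = np.linalg.norm(recon - true_spec) / np.linalg.norm(true_spec)
print(f"relative reconstruction error: {err:.3f}")
```

The truncation level k plays the role of the regularization parameter: too few modes bias the spectrum, too many amplify the measurement noise.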

Gallardo, Sergio; Pozuelo, Fausto; Querol, Andrea; Ródenas, José; Verdú, Gumersindo

2014-06-01

383

The computational modeling of medical imaging systems often requires obtaining a large number of simulated images with low statistical uncertainty which translates into prohibitive computing times. We describe a novel hybrid approach for Monte Carlo simulations that maximizes utilization of CPUs and GPUs in modern workstations. We apply the method to the modeling of indirect x-ray detectors using a new and improved version of the code MANTIS, an open source software tool used for the Monte Carlo simulations of indirect x-ray imagers. We first describe a GPU implementation of the physics and geometry models in fastDETECT2 (the optical transport model) and a serial CPU version of the same code. We discuss its new features like on-the-fly column geometry and columnar crosstalk in relation to the MANTIS code, and point out areas where our model provides more flexibility for the modeling of realistic columnar structures in large area detectors. Second, we modify PENELOPE (the open source software package that handles the x-ray and electron transport in MANTIS) to allow direct output of location and energy deposited during x-ray and electron interactions occurring within the scintillator. This information is then handled by optical transport routines in fastDETECT2. A load balancer dynamically allocates optical transport showers to the GPU and CPU computing cores. Our hybridMANTIS approach achieves a significant speed-up factor of 627 when compared to MANTIS and of 35 when compared to the same code running only in a CPU instead of a GPU. Using hybridMANTIS, we successfully hide hours of optical transport time by running it in parallel with the x-ray and electron transport, thus shifting the computational bottleneck from optical to x-ray transport. The new code requires much less memory than MANTIS and, as a result, allows us to efficiently simulate large area detectors. PMID:22469917

Sharma, Diksha; Badal, Andreu; Badano, Aldo

2012-04-21

384

NASA Astrophysics Data System (ADS)

Parametric uncertainty in groundwater modeling is commonly assessed using the first-order second-moment method, which yields linear confidence/prediction intervals. More advanced techniques are able to produce nonlinear confidence/prediction intervals that are more accurate than the linear intervals for nonlinear models. However, both methods are restricted to certain assumptions, such as normality of the model parameters. We developed a Markov Chain Monte Carlo (MCMC) method to directly investigate the parametric distributions and confidence/prediction intervals. The MCMC results are used to evaluate the accuracy of the linear and nonlinear confidence/prediction intervals. The MCMC method is applied to nonlinear surface complexation models developed by Kohler et al. (1996) to simulate reactive transport of uranium (VI). The breakthrough data of Kohler et al. (1996), obtained from a series of column experiments, are used as the basis of the investigation. The calibrated parameters of the models are the equilibrium constants of the surface complexation reactions and the fractions of functional groups. The Morris method sensitivity analysis shows that all of the parameters exhibit highly nonlinear effects on the simulation. The MCMC method is combined with a traditional optimization method to improve computational efficiency. The parameters of the surface complexation models are first calibrated using a global optimization technique, multi-start quasi-Newton BFGS, which employs an approximation to the Hessian. Parameter correlation is measured by the covariance matrix computed via the Fisher information matrix. Parameter ranges are necessary to improve convergence of the MCMC simulation, even when the adaptive Metropolis method is used. The MCMC results indicate that the parameters do not necessarily follow a normal distribution and that the nonlinear intervals are more accurate than the linear intervals for the nonlinear surface complexation models. 
In comparison with the linear and nonlinear prediction intervals, the MCMC prediction intervals are more robust for simulating breakthrough curves that were not used for parameter calibration and for estimating parameter distributions.
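The adaptive Metropolis idea mentioned above can be illustrated with a random-walk sampler whose step size crudely adapts toward a balanced acceptance rate (full adaptive Metropolis adapts the proposal covariance from the chain history). The one-parameter "posterior" below is a stand-in, not the surface complexation model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in log-posterior: N(3, 2^2). In the real application this would be
# the likelihood of the breakthrough data under the surface complexation model.
def logpost(theta):
    return -0.5 * ((theta - 3.0) / 2.0) ** 2

theta, step, chain = 0.0, 1.0, []
lp = logpost(theta)
for i in range(20000):
    prop = theta + step * rng.normal()
    lp_prop = logpost(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept
        theta, lp = prop, lp_prop
        step *= 1.01                           # accepted: grow the step
    else:
        step *= 0.99                           # rejected: shrink the step
    chain.append(theta)

burned = np.array(chain[5000:])  # discard burn-in
print(f"posterior mean ~ {burned.mean():.2f}, sd ~ {burned.std():.2f}")
```

The post burn-in samples directly give the nonlinear credible/prediction intervals the abstract compares against the linear ones, with no normality assumption.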

Miller, G. L.; Lu, D.; Ye, M.; Curtis, G. P.; Mendes, B. S.; Draper, D.

2010-12-01

385

Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes

The calculations presented compare the performance of three Monte Carlo codes, PENELOPE-1999, MCNP-4C and PITS, for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. All three codes model the cell as an equivalent water cylinder, but with different internal scoring geometries: hollow cylinders for PENELOPE and MCNP, and spheres for PITS. The cylindrical cell geometry with hollow-cylinder scoring volumes was initially selected for PENELOPE and MCNP because it better reproduces the actual shape and dimensions of a cell and because of its improved computer-time efficiency compared with spherical internal volumes. Some of the energy-transfer points that constitute a radiation track may fall in the space between spheres, outside any spherical scoring volume. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time compared with event-by-event Monte Carlo codes like PITS. This preliminary work has been important for addressing dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation, even with such small geometries and low energies, far below the normal use for which the code was created. Further work (initiated in Summer 2002) is still needed, however, to create a user-code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes and also microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.

Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.

2002-09-11

386

On the Relationship Between Markov chain Monte Carlo Methods for Model Uncertainty

This article considers Markov chain computational methods for incorporating uncertainty about the dimension of a parameter when performing inference within a Bayesian setting. A general class of methods is proposed for performing such computations, based upon a product space representation of the problem which is similar to that of Carlin and Chib. It is shown that all of the

Simon J. GODSILL

2001-01-01

387

Efficient Markov Chain Monte Carlo Methods for Decoding Neural Spike Trains

Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed into the spike trains of a group of neurons. The form of the GLM likelihood ensures that the posterior

Yashar Ahmadian; Jonathan W. Pillow; Liam Paninski

2011-01-01

388

NASA Astrophysics Data System (ADS)

Particle filters (PFs) have become popular for assimilation of a wide range of hydrologic variables in recent years. With this increased use, it has become necessary to extend the applicability of this technique to complex hydrologic/land surface models and to make these methods more viable for operational probabilistic prediction. To make the PF a more suitable option in these scenarios, its reliability must be improved. This is achieved in this work through an improved parameter search, with the use of variable variance multipliers and Markov Chain Monte Carlo methods. Application of these methods to the PF allows for a more thorough search of the posterior distribution, leading to more complete characterization of the posterior and reducing the risk of sample impoverishment. The result is a PF that is more efficient and provides more reliable predictions. This study introduces the theory behind the proposed algorithm, with application to a hydrologic model. Results from both real and synthetic studies suggest that the proposed filter significantly increases the effectiveness of the PF, with a marginal increase in computational demand for hydrologic prediction.
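A bare-bones sequential importance resampling (SIR) particle filter conveys the baseline such work improves on. The linear-Gaussian toy model and noise levels below are assumptions, and the variable variance multipliers and MCMC move step are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(4)

n_p, n_t = 2000, 50
a, q, r = 0.9, 0.5, 0.5  # state transition, process/obs noise (assumed)

# Simulate a toy AR(1) "storage" state and noisy observations of it.
truth = np.zeros(n_t)
for t in range(1, n_t):
    truth[t] = a * truth[t - 1] + rng.normal(0, np.sqrt(q))
obs = truth + rng.normal(0, np.sqrt(r), n_t)

particles = rng.normal(0, 1, n_p)
est = np.zeros(n_t)
for t in range(n_t):
    particles = a * particles + rng.normal(0, np.sqrt(q), n_p)  # propagate
    logw = -0.5 * (obs[t] - particles) ** 2 / r                 # weight
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = np.sum(w * particles)
    # Systematic resampling to fight weight degeneracy (the step that,
    # repeated naively, leads to the sample impoverishment discussed above).
    idx = np.searchsorted(np.cumsum(w), (rng.uniform() + np.arange(n_p)) / n_p)
    particles = particles[idx]

rmse = np.sqrt(np.mean((est - truth) ** 2))
print(f"filter RMSE: {rmse:.3f}")
```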

Moradkhani, Hamid; Dechant, Caleb M.; Sorooshian, Soroosh

2012-12-01

389

Self-Learning Off-Lattice Kinetic Monte Carlo method as applied to growth on metal surfaces

NASA Astrophysics Data System (ADS)

We propose a new development of the Self-Learning Kinetic Monte Carlo (SLKMC) method with the goal of improving the accuracy with which the atomic mechanisms controlling diffusive processes on metal surfaces may be identified. This is important for the diffusion of small clusters (2-20 atoms), in which atoms may occupy off-lattice positions. Such a procedure is also necessary for the consideration of heteroepitaxial growth. The new technique combines an earlier version of SLKMC [1] with the inclusion of off-lattice occupancy. This allows us to include arbitrary adatom positions in the modeling and makes the simulations more realistic and reliable. We have tested this new approach for the diffusion of small 2D Cu clusters on Cu(111) and found good performance and satisfactory agreement with results obtained from the previous version of SLKMC. The new method also helped reveal a novel atomic mechanism contributing to cluster migration. We have also applied this method to study the diffusion of Cu clusters on Ag(111), and find that Cu atoms generally prefer to occupy off-lattice sites. [1] O. Trushin, A. Kara, A. Karim, T. S. Rahman, Phys. Rev. B 2005
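Underlying any SLKMC scheme is the rejection-free (BKL-style) kinetic Monte Carlo step: pick a process with probability proportional to its rate, then advance the clock by an exponential residence time. The barriers and prefactor below are generic illustrative Arrhenius values, not the event database an SLKMC run would learn on the fly:

```python
import numpy as np

rng = np.random.default_rng(5)

kT = 0.025            # eV, roughly room temperature
prefactor = 1.0e12    # s^-1, a typical attempt frequency (assumed)
barriers = np.array([0.30, 0.45, 0.45, 0.60])  # eV, hypothetical hop barriers

rates = prefactor * np.exp(-barriers / kT)  # Arrhenius rates per process
total = rates.sum()

# Choose an event with probability rate_i / total (inverse-CDF lookup),
# then advance the simulation clock by an exponential residence time.
event = int(np.searchsorted(np.cumsum(rates) / total, rng.uniform()))
dt = -np.log(rng.uniform()) / total

print(f"picked event {event}, advanced clock by dt = {dt:.2e} s")
```

Note how the lowest-barrier process dominates the rate catalogue; this is why identifying every relevant low-barrier mechanism (the point of the self-learning database) matters so much.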

Trushin, Oleg; Kara, Abdelkader; Rahman, Talat

2007-03-01

390

Random Finite Sets and Sequential Monte Carlo Methods in Multi-Target Tracking.

National Technical Information Service (NTIS)

Random finite sets provide a rigorous foundation for optimal Bayes multi-target filtering. The major hurdle faced in Bayes multi-target filtering is the inherent computational intractability of the method. Even the Probability Hypothesis Density (PHD) fil...

B. Vo S. Singh A. Doucet

2005-01-01

391

Phase space modulation method for EPID-based Monte Carlo dosimetry of IMRT and RapidArc plans

NASA Astrophysics Data System (ADS)

Quality assurance for IMRT and VMAT requires 3D evaluation of the dose distributions from the treatment planning system compared with the distributions reconstructed from signals acquired during plan delivery. This study presents the results of dose reconstruction based on a novel method of Monte Carlo (MC) phase space modulation. Typically, in MC dose calculations the linear accelerator (linac) is modelled for each field in the plan and a phase space file (PSF) containing all relevant particle information is written for each field. Particles from the PSFs are then used in the dose calculation. This study investigates a method of omitting the modelling of the linac in cases where the treatment has been measured by an electronic portal imaging device. In this method each portal image is deconvolved using an empirically fitted scatter kernel to obtain the primary photon fluence. The phase space modulation (PSM) method consists of simulating the linac just once to create a large PSF for an open field and then modulating it using the delivered primary particle fluence. Reconstructed dose distributions in phantoms were produced using MC and the modulated PSFs. The kernel derived for this method accurately reproduced the dose distributions for 3×3, 10×10, and 15×15 cm2 field sizes (mean relative dose difference along the beam central axis under 1%). The method has been applied to IMRT pre-treatment verification of 10 patients (including one RapidArc case); the mean dose in the structures of interest agreed with that calculated directly by MC within 1%, and 95% of the voxels passed the 2%/2 mm criteria.
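The modulation itself reduces to per-bin fluence ratios applied as particle weights: each open-field phase-space particle is re-weighted by the delivered primary fluence (deconvolved from the EPID image) over the open-field fluence at its crossing point. The arrays below are illustrative, not EPID data:

```python
import numpy as np

# Illustrative fluence maps on a coarse 4x4 grid (arbitrary units).
open_fluence = np.full((4, 4), 100.0)                   # open-field fluence
delivered = np.array([[100.0, 60.0, 60.0, 100.0]] * 4)  # MLC-modulated fluence

# Per-bin modulation weight applied to phase-space particles.
weights = delivered / open_fluence

# A particle crossing bin (1, 2) keeps weight 0.6 in the dose calculation.
print(weights[1, 2])  # 0.6
```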

Berman, Avery; Townson, Reid; Bush, Karl; Zavgorodni, Sergei

2010-11-01

392

The purpose of this study was to present a method for generating x-ray source models for performing Monte Carlo (MC) radiation dosimetry simulations of multidetector row CT (MDCT) scanners. These so-called "equivalent" source models consist of an energy spectrum and filtration description that are generated based wholly on the measured values and can be used in place of proprietary manufacturer's data for scanner-specific MDCT MC simulations. Required measurements include the half value layers (HVL1 and HVL2) and the bowtie profile (exposure values across the fan beam) for the MDCT scanner of interest. Using these measured values, a method was described (a) to numerically construct a spectrum with the calculated HVLs approximately equal to those measured (equivalent spectrum) and then (b) to determine a filtration scheme (equivalent filter) that attenuates the equivalent spectrum in a similar fashion as the actual filtration attenuates the actual x-ray beam, as measured by the bowtie profile measurements. Using this method, two types of equivalent source models were generated: One using a spectrum based on both HVL1 and HVL2 measurements and its corresponding filtration scheme and the second consisting of a spectrum based only on the measured HVL1 and its corresponding filtration scheme. Finally, a third type of source model was built based on the spectrum and filtration data provided by the scanner's manufacturer. MC simulations using each of these three source model types were evaluated by comparing the accuracy of multiple CT dose index (CTDI) simulations to measured CTDI values for 64-slice scanners from the four major MDCT manufacturers. Comprehensive evaluations were carried out for each scanner using each kVp and bowtie filter combination available. CTDI experiments were performed for both head (16 cm in diameter) and body (32 cm in diameter) CTDI phantoms using both central and peripheral measurement positions. 
Both equivalent source model types result in simulations with an average root mean square (RMS) error between the measured and simulated values of approximately 5% across all scanner and bowtie filter combinations, all kVps, both phantom sizes, and both measurement positions, while data provided from the manufacturers gave an average RMS error of approximately 12% pooled across all conditions. While there was no statistically significant difference between the two types of equivalent source models, both of these model types were shown to be statistically significantly different from the source model based on manufacturer's data. These results demonstrate that an equivalent source model based only on measured values can be used in place of manufacturer's data for Monte Carlo simulations for MDCT dosimetry.
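The HVL matching at the heart of the equivalent-spectrum construction can be illustrated by numerically solving for the aluminium thickness that halves the transmitted signal of a toy two-line spectrum. The attenuation coefficients below are illustrative, and real HVL definitions weight by exposure rather than raw fluence:

```python
import numpy as np

# Toy two-line spectrum with made-up aluminium attenuation coefficients.
energies_kev = np.array([40.0, 70.0])
fluence = np.array([1.0, 1.0])
mu_al = np.array([0.15, 0.05])  # 1/mm, illustrative values

def transmission(t_mm):
    # Fluence-weighted transmission through t_mm of Al.
    return np.sum(fluence * np.exp(-mu_al * t_mm)) / np.sum(fluence)

# Bisection for the thickness that halves the transmitted signal (HVL1).
lo, hi = 0.0, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if transmission(mid) > 0.5:
        lo = mid
    else:
        hi = mid
hvl1 = 0.5 * (lo + hi)
print(f"HVL1 ~= {hvl1:.2f} mm Al")
```

Constructing an equivalent spectrum runs this logic in reverse: candidate spectra are adjusted until their computed HVL1 (and HVL2) match the measured values.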

Turner, Adam C.; Zhang Di; Kim, Hyun J.; DeMarco, John J.; Cagnon, Chris H.; Angel, Erin; Cody, Dianna D.; Stevens, Donna M.; Primak, Andrew N.; McCollough, Cynthia H.; McNitt-Gray, Michael F. [Department of Biomedical Physics and Department of Radiology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90024 (United States); Department of Biostatistics and Department of Radiology, University of California, Los Angeles, Los Angeles, California 90024 (United States); Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States); Department of Biomedical Physics and Department of Radiology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90024 (United States); Division of Diagnostic Imaging, University of Texas M. D. Anderson Cancer Center, Houston, Texas 77030 (United States); Department of Radiology, Mayo Clinic College of Medicine, Rochester, Minnesota 55901 (United States); Department of Biomedical Physics and Department of Radiology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90024 (United States)

2009-06-15

393

The purpose of this study was to present a method for generating x-ray source models for performing Monte Carlo (MC) radiation dosimetry simulations of multidetector row CT (MDCT) scanners. These so-called "equivalent" source models consist of an energy spectrum and filtration description that are generated based wholly on the measured values and can be used in place of proprietary manufacturer's data for scanner-specific MDCT MC simulations. Required measurements include the half value layers (HVL1 and HVL2) and the bowtie profile (exposure values across the fan beam) for the MDCT scanner of interest. Using these measured values, a method was described (a) to numerically construct a spectrum with the calculated HVLs approximately equal to those measured (equivalent spectrum) and then (b) to determine a filtration scheme (equivalent filter) that attenuates the equivalent spectrum in a similar fashion as the actual filtration attenuates the actual x-ray beam, as measured by the bowtie profile measurements. Using this method, two types of equivalent source models were generated: One using a spectrum based on both HVL1 and HVL2 measurements and its corresponding filtration scheme and the second consisting of a spectrum based only on the measured HVL1 and its corresponding filtration scheme. Finally, a third type of source model was built based on the spectrum and filtration data provided by the scanner's manufacturer. MC simulations using each of these three source model types were evaluated by comparing the accuracy of multiple CT dose index (CTDI) simulations to measured CTDI values for 64-slice scanners from the four major MDCT manufacturers. Comprehensive evaluations were carried out for each scanner using each kVp and bowtie filter combination available. CTDI experiments were performed for both head (16 cm in diameter) and body (32 cm in diameter) CTDI phantoms using both central and peripheral measurement positions. 
Both equivalent source model types result in simulations with an average root mean square (RMS) error between the measured and simulated values of approximately 5% across all scanner and bowtie filter combinations, all kVps, both phantom sizes, and both measurement positions, while data provided from the manufacturers gave an average RMS error of approximately 12% pooled across all conditions. While there was no statistically significant difference between the two types of equivalent source models, both of these model types were shown to be statistically significantly different from the source model based on manufacturer's data. These results demonstrate that an equivalent source model based only on measured values can be used in place of manufacturer's data for Monte Carlo simulations for MDCT dosimetry. PMID:19610304

Turner, Adam C; Zhang, Di; Kim, Hyun J; DeMarco, John J; Cagnon, Chris H; Angel, Erin; Cody, Dianna D; Stevens, Donna M; Primak, Andrew N; McCollough, Cynthia H; McNitt-Gray, Michael F

2009-06-01

394

The purpose of this study was to present a method for generating x-ray source models for performing Monte Carlo (MC) radiation dosimetry simulations of multidetector row CT (MDCT) scanners. These so-called “equivalent” source models consist of an energy spectrum and filtration description that are generated based wholly on the measured values and can be used in place of proprietary manufacturer’s data for scanner-specific MDCT MC simulations. Required measurements include the half value layers (HVL1 and HVL2) and the bowtie profile (exposure values across the fan beam) for the MDCT scanner of interest. Using these measured values, a method was described (a) to numerically construct a spectrum with the calculated HVLs approximately equal to those measured (equivalent spectrum) and then (b) to determine a filtration scheme (equivalent filter) that attenuates the equivalent spectrum in a similar fashion as the actual filtration attenuates the actual x-ray beam, as measured by the bowtie profile measurements. Using this method, two types of equivalent source models were generated: One using a spectrum based on both HVL1 and HVL2 measurements and its corresponding filtration scheme and the second consisting of a spectrum based only on the measured HVL1 and its corresponding filtration scheme. Finally, a third type of source model was built based on the spectrum and filtration data provided by the scanner’s manufacturer. MC simulations using each of these three source model types were evaluated by comparing the accuracy of multiple CT dose index (CTDI) simulations to measured CTDI values for 64-slice scanners from the four major MDCT manufacturers. Comprehensive evaluations were carried out for each scanner using each kVp and bowtie filter combination available. CTDI experiments were performed for both head (16 cm in diameter) and body (32 cm in diameter) CTDI phantoms using both central and peripheral measurement positions. 
Both equivalent source model types result in simulations with an average root mean square (RMS) error between the measured and simulated values of approximately 5% across all scanner and bowtie filter combinations, all kVps, both phantom sizes, and both measurement positions, while data provided from the manufacturers gave an average RMS error of approximately 12% pooled across all conditions. While there was no statistically significant difference between the two types of equivalent source models, both of these model types were shown to be statistically significantly different from the source model based on manufacturer’s data. These results demonstrate that an equivalent source model based only on measured values can be used in place of manufacturer’s data for Monte Carlo simulations for MDCT dosimetry.
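The half-value-layer measurements at the heart of this method can be mimicked numerically: under the Beer-Lambert law, the first and second HVLs of a polyenergetic spectrum are the filter thicknesses that halve and quarter the transmitted signal. A minimal sketch, using a hypothetical two-bin spectrum and made-up attenuation coefficients (none of the values come from the paper):

```python
import math

# Illustrative two-bin "spectrum": photon fluence per energy bin (arbitrary
# units) and attenuation coefficients (1/mm) -- hypothetical values only.
fluence = [1.0, 0.8]
mu = [0.12, 0.05]

def transmitted(thickness_mm):
    """Exposure-like signal behind a filter of given thickness (Beer-Lambert)."""
    return sum(f * math.exp(-m * thickness_mm) for f, m in zip(fluence, mu))

def thickness_for_fraction(fraction, lo=0.0, hi=200.0, tol=1e-9):
    """Bisect for the filter thickness that reduces the signal to `fraction`."""
    target = fraction * transmitted(0.0)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if transmitted(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

hvl1 = thickness_for_fraction(0.5)          # first half-value layer
hvl2 = thickness_for_fraction(0.25) - hvl1  # second half-value layer
```

Because the softer component is attenuated first, the beam hardens and HVL2 exceeds HVL1; this is the behaviour an equivalent spectrum must reproduce when matched to measured HVL1 and HVL2.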

Turner, Adam C.; Zhang, Di; Kim, Hyun J.; DeMarco, John J.; Cagnon, Chris H.; Angel, Erin; Cody, Dianna D.; Stevens, Donna M.; Primak, Andrew N.; McCollough, Cynthia H.; McNitt-Gray, Michael F.

2009-01-01

395

NASA Astrophysics Data System (ADS)

An essential component of a quantitative landslide hazard assessment is establishing the extent of the endangered area. This task requires accurate prediction of the run-out behaviour of a landslide, which includes the estimation of the run-out distance, run-out width, velocities, pressures, and depth of the moving mass and the final configuration of the deposits. One approach to run-out modelling is to reproduce accurately the dynamics of the propagation processes. A number of dynamic numerical models are able to compute the movement of the flow over irregular topographic terrains (3-D) controlled by a complex interaction between mechanical properties that may vary in space and time. Given the number of unknown parameters and the fact that most of the rheological parameters cannot be measured in the laboratory or field, the parametrization of run-out models is very difficult in practice. For this reason, run-out models are mostly applied to back-analysis of past events, and very few studies have attempted forward predictions. Consequently, all models are based on simplified descriptions that attempt to reproduce the general features of the failed mass motion through the use of parameters (mostly controlling shear stresses at the base of the moving mass) which account for aspects not explicitly described or oversimplified. The uncertainties involved in the run-out process therefore have to be approached in a stochastic manner. It is of significant importance to develop methods for quantifying and properly handling the uncertainties in dynamic run-out models, in order to allow a more comprehensive approach to quantitative risk assessment. A method was developed to compute the variation in run-out intensities by using a dynamic run-out model (MassMov2D) and a probabilistic framework based on a Monte Carlo simulation in order to analyze the effect of the uncertainty of the input parameters.
The probability density functions of the rheological parameters were generated and sampled, leading to a large number of run-out scenarios. In the application of the Monte Carlo method, random samples were generated from the input probability distributions, which were fitted with a Gaussian copula distribution. Each set of samples was used as input to a model simulation, and the resulting outcome was a spatially displayed intensity map. These maps were created from the results of the probability density functions at each point of the flow track and the deposition zone, yielding as output a confidence probability map for the various intensity measures. The goal of this methodology is that the results (in terms of intensity characteristics) can be linked directly to vulnerability curves associated with the elements at risk.
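The Gaussian-copula sampling step described above can be sketched in a few lines: draw correlated standard normals, map them to uniforms with the normal CDF, and push the uniforms through the marginal distributions of the rheological parameters. The correlation value and parameter ranges below are illustrative, not taken from the study:

```python
import math
import random

def std_normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sample_copula(n, rho, seed=0):
    """Draw n pairs of correlated uniforms from a bivariate Gaussian copula.

    rho is the correlation of the underlying Gaussian; each uniform marginal
    can then be mapped to any parameter distribution via its inverse CDF.
    """
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        out.append((std_normal_cdf(z1), std_normal_cdf(z2)))
    return out

# Map the uniforms onto hypothetical friction / viscosity ranges
samples = [(0.05 + 0.15 * u, 100.0 + 900.0 * v)
           for u, v in sample_copula(1000, 0.7)]
```

Each sampled pair feeds one run of the dynamic model; the ensemble of outputs then yields the intensity and confidence maps.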

Cepeda, Jose; Luna, Byron Quan; Nadim, Farrokh

2013-04-01

396

In this letter we evaluate the accuracy of the first reaction method (FRM) as commonly used to reduce the computational complexity of mesoscale Monte Carlo simulations of geminate recombination and the performance of organic photovoltaic devices. A wide range of carrier mobilities, degrees of energetic disorder, and applied electric field are considered. For the ranges of energetic disorder relevant for

Chris Groves; Robin G. E. Kimber; Alison B. Walker

2010-01-01

397

The purpose of this work is to develop and test a method to estimate the relative and absolute absorbed radiation dose from axial and spiral CT scans using a Monte Carlo approach. Initial testing was done in phantoms and preliminary results were obtained from a standard mathematical anthropomorphic model (MIRD V) and voxelized patient data. To accomplish this we have

G. Jarry; J. J. DeMarco; U. Beifuss; C. H. Cagnon; M. F. McNitt-Gray

2003-01-01

398

Accurate prediction of complex phenomena can be greatly enhanced through the use of data and observations to update simulations. The ability to create these data-driven simulations is limited by error and uncertainty in both the data and the simulation. The stochastic engine project addressed this problem through the development and application of a family of Markov Chain Monte Carlo methods

R E Glaser; G Johannesson; S Sengupta; B Kosovic; S Carle; G A Franz; R D Aines; J J Nitao; W G Hanley; A L Ramirez; R L Newmark; V M Johnson; K M Dyer; K A Henderson; G A Sugiyama; T L Hickling; M E Pasyanos; D A Jones; R J Grimm; R A Levine

2004-01-01

399

A modified phenomenological model is constructed for the simulation of rarefied flows of polyatomic non-polar gas molecules by the direct simulation Monte Carlo (DSMC) method. This variable hard sphere-based model employs a constant rotational collision number, but all its collisions are inelastic in nature and at the same time the correct macroscopic relaxation rate is maintained. In equilibrium conditions, there

P S Prasanth; Jose K Kakkassery; R Vijayakumar

2012-01-01

400

Apart from a few specific cases (finite state space models, linear Gaussian state-space models), maximum likelihood parameter estimation in hidden Markov models is a difficult task. Using sequential Monte Carlo (or particle filtering) techniques for this task sounds appealing but is faced with some difficulties. In this contribution, we show that several recently proposed methods share the common feature of

Olivier Cappe; Eric Moulines

401

Constrained bearings-only target motion analysis via Markov chain Monte Carlo methods

The aim of this paper is to develop methods for estimating the range of a moving target from bearings-only observations and for weakly observable scenarios, by including general constraints about the target trajectories. Throughout this manuscript, it is assumed that the target motion is conditionally deterministic, which leads us to focus on batch algorithms. Another common assumption is poor observability,

Frederic Bavencoff; Jean-Michel Vanpeperstraete; J.-PIERRE LE CADRE

2006-01-01

402

Study of reliability of grid connected photovoltaic power based on Monte Carlo method

This paper studies the influence on reliability after a photovoltaic power station is connected to the distribution network. First, the basic methods and analysis indices for the reliability analysis of a distribution network with photovoltaic power are introduced. Then, an example model is established in which the PV power is connected to a node at the downstream end of the trunk line of the distribution system. The

Guo-hua Yang; Yi Li; Qi YONG; Rong Yong

2011-01-01

403

Bridge Reliability Analysis Based on the FEM and Monte-Carlo Method

The paper combines the Finite Element Method and cumulative fatigue damage theory to perform a finite element analysis of the repeated stresses caused by random vehicle dynamic loading on a concrete girder bridge, as well as of the cumulative fatigue damage. The paper also uses finite element software to analyze the dynamic fatigue reliability of existing road bridges from the perspective of random

Wang Xiangyang; Zhao Guanghui

2010-01-01

404

Quasi-Monte Carlo methods for the numerical integration of multivariate walsh series

In [1], a method for the numerical integration of multivariate Walsh series, based on low-discrepancy point sets, was developed. In the present paper, we improve and generalize error estimates given in [1] and disprove a conjecture stated in [1,2].
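For readers unfamiliar with the approach, quasi-Monte Carlo quadrature replaces random samples with a low-discrepancy point set such as the van der Corput sequence. A minimal one-dimensional sketch (the paper itself treats multivariate Walsh series and sharper error estimates):

```python
def van_der_corput(n, base=2):
    """n-th element of the base-b van der Corput low-discrepancy sequence,
    obtained by mirroring the digits of n about the radix point."""
    q, denom = 0.0, 1.0
    while n:
        n, r = divmod(n, base)
        denom *= base
        q += r / denom
    return q

def qmc_integrate(f, n):
    """Quasi-Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(van_der_corput(i)) for i in range(1, n + 1)) / n

# Example: the integral of x^2 over [0, 1] is 1/3
estimate = qmc_integrate(lambda x: x * x, 4096)
```

The deterministic point set achieves an error of order (log n)/n for smooth integrands, compared with the n^(-1/2) rate of plain Monte Carlo.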

G. Larcher; W. Ch. Schmid; R. Wolf

1996-01-01

405

NASA Astrophysics Data System (ADS)

The development of tools for nuclear data uncertainty propagation in lattice calculations is presented. The Total Monte Carlo method and the Generalized Perturbation Theory method are used with the code DRAGON to allow propagation of nuclear data uncertainties in transport calculations. Both methods begin the propagation of uncertainties at the most elementary level of the transport calculation - the Evaluated Nuclear Data File. The developed tools are applied to provide estimates for response uncertainties of a PWR cell as a function of burnup.
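The Total Monte Carlo idea can be illustrated with a toy model: rather than propagating uncertainty analytically, the nuclear data themselves are sampled and the full calculation is repeated, so that the spread of the outputs estimates the response uncertainty. The cross sections, their 5% uncertainties, and the infinite-medium multiplication formula below are all hypothetical stand-ins for an actual ENDF-based transport calculation:

```python
import random
import statistics

def toy_response(sigma_a, sigma_f):
    """Toy stand-in for a transport calculation: an infinite-medium
    multiplication factor k = nu * sigma_f / sigma_a, with nu fixed at 2.4."""
    return 2.4 * sigma_f / sigma_a

def total_monte_carlo(n, seed=0):
    """Sample the nuclear data, rerun the calculation, and report the
    mean and spread of the response (illustrative 5% uncertainties)."""
    rng = random.Random(seed)
    responses = []
    for _ in range(n):
        sigma_a = rng.gauss(0.30, 0.015)   # absorption, barns (illustrative)
        sigma_f = rng.gauss(0.12, 0.006)   # fission, barns (illustrative)
        responses.append(toy_response(sigma_a, sigma_f))
    return statistics.mean(responses), statistics.stdev(responses)

mean_k, sd_k = total_monte_carlo(5000)
```

In the real method each sample is a full random file drawn at the Evaluated Nuclear Data File level, and the "calculation" is a complete lattice transport run.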

Sabouri, P.; Bidaud, A.; Dabiran, S.; Lecarpentier, D.; Ferragut, F.

2014-04-01

406

Phase-sensitive X-ray imaging shows a high sensitivity towards electron density variations, making it well suited for imaging of soft tissue matter. However, there are still open questions about the details of the image formation process. Here, a framework for numerical simulations of phase-sensitive X-ray imaging is presented, which takes both particle- and wave-like properties of X-rays into consideration. A split approach is presented where we combine a Monte Carlo method (MC) based sample part with a wave optics simulation based propagation part, leading to a framework that takes both particle- and wave-like properties into account. The framework can be adapted to different phase-sensitive imaging methods and has been validated through comparisons with experiments for grating interferometry and propagation-based imaging. The validation of the framework shows that the combination of wave optics and MC has been successfully implemented and yields good agreement between measurements and simulations. This demonstrates that the physical processes relevant for developing a deeper understanding of scattering in the context of phase-sensitive imaging are modelled in a sufficiently accurate manner. The framework can be used for the simulation of phase-sensitive X-ray imaging, for instance for the simulation of grating interferometry or propagation-based imaging. PMID:24763652

Peter, Silvia; Modregger, Peter; Fix, Michael K; Volken, Werner; Frei, Daniel; Manser, Peter; Stampanoni, Marco

2014-05-01

407

Phase-sensitive X-ray imaging shows a high sensitivity towards electron density variations, making it well suited for imaging of soft tissue matter. However, there are still open questions about the details of the image formation process. Here, a framework for numerical simulations of phase-sensitive X-ray imaging is presented, which takes both particle- and wave-like properties of X-rays into consideration. A split approach is presented where we combine a Monte Carlo method (MC) based sample part with a wave optics simulation based propagation part, leading to a framework that takes both particle- and wave-like properties into account. The framework can be adapted to different phase-sensitive imaging methods and has been validated through comparisons with experiments for grating interferometry and propagation-based imaging. The validation of the framework shows that the combination of wave optics and MC has been successfully implemented and yields good agreement between measurements and simulations. This demonstrates that the physical processes relevant for developing a deeper understanding of scattering in the context of phase-sensitive imaging are modelled in a sufficiently accurate manner. The framework can be used for the simulation of phase-sensitive X-ray imaging, for instance for the simulation of grating interferometry or propagation-based imaging.

Peter, Silvia; Modregger, Peter; Fix, Michael K.; Volken, Werner; Frei, Daniel; Manser, Peter; Stampanoni, Marco

2014-01-01

408

Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.
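The flavour of a Newton-type update driven by Monte Carlo estimates can be conveyed with a deliberately simplified example: a single variance parameter whose score and observed information are averaged over random subsamples of the data. This is only a sketch of the idea, not the mixed-model MC REML algorithms of the paper:

```python
import random

# Toy setting: estimate the variance s2 of i.i.d. N(0, s2) observations by
# Newton-Raphson on the log-likelihood, with the score and information
# approximated by Monte Carlo averages over random subsamples.
rng = random.Random(42)
data = [rng.gauss(0.0, 2.0) for _ in range(10000)]  # true variance 4.0

def mc_score_and_info(s2, n_mc=2000):
    """Monte Carlo estimates of d logL / d s2 and the observed information
    (per observation), averaged over a random subsample of the data."""
    sub = rng.sample(data, n_mc)
    score = sum(-0.5 / s2 + 0.5 * x * x / s2 ** 2 for x in sub) / n_mc
    info = sum(-0.5 / s2 ** 2 + x * x / s2 ** 3 for x in sub) / n_mc
    return score, info

s2 = 1.0  # starting value
for _ in range(20):
    score, info = mc_score_and_info(s2)
    s2 += score / info  # Newton step
```

As the abstract notes, the information estimate that makes the Newton step possible also delivers standard errors as a by-product, something plain EM iteration does not.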

Matilainen, Kaarina; Mantysaari, Esa A.; Lidauer, Martin H.; Stranden, Ismo; Thompson, Robin

2013-01-01

409

Assessment of Numerical Accuracy of PDF/Monte Carlo Methods for Turbulent Reacting Flows

This study explores the numerical features of a particle-mesh algorithm developed for a stand-alone joint velocity-frequency-composition PDF method for turbulent reactive flows. Numerical experiments are performed on a piloted-jet nonpremixed turbulent flame of methane to characterize and quantify various numerical errors in terms of the numerical parameters: number of particles per cell Npc, number of cells M2, and time step

J. Xu; S. B. Pope

1999-01-01

410

A Maximum Likelihood Method for Linking of Particle-in-Cell and Monte Carlo Simulations

To support the design and analysis of x-ray radiographic facilities and experiments at Los Alamos, we have developed an integrated chain model, which is a set of linked physics simulation codes to generate self-consistent synthetic radiographs of experiments (1). The expectation-maximization (EM) algorithm (2) has recently been used to great advantage to link particle- in-cell (PIC) methods, which model electron

Kevin J. Bowers; Barbara G. DeVolder; Thomas J. T. Kwan; Lin Yin

2001-01-01

411

NASA Astrophysics Data System (ADS)

Diffusion tensor tractography (DTT) allows one to explore axonal connectivity patterns in neuronal tissue by linking local predominant diffusion directions determined by diffusion tensor imaging (DTI). The majority of existing tractography approaches use continuous coordinates for calculating single trajectories through the diffusion tensor field. The tractography algorithm we propose is characterized by (1) a trajectory propagation rule that uses voxel centres as vertices and (2) orientation probabilities for the calculated steps in a trajectory that are obtained from the diffusion tensors of either two or three voxels. These voxels include the last voxel of each previous step and one or two candidate successor voxels. The precision and the accuracy of the suggested method are explored with synthetic data. Results clearly favour probabilities based on two consecutive successor voxels. Evidence is also provided that in any voxel-centre-based tractography approach, there is a need for a probability correction that takes into account the geometry of the acquisition grid. Finally, we provide examples in which the proposed fibre-tracking method is applied to the human optical radiation, the cortico-spinal tracts and to connections between Broca's and Wernicke's area to demonstrate the performance of the proposed method on measured data.

Bodammer, N. C.; Kaufmann, J.; Kanowski, M.; Tempelmann, C.

2009-02-01

412

On multitarget track extraction and maintenance using sequential Monte Carlo methods

NASA Astrophysics Data System (ADS)

This contribution addresses the problem of tracking multiple moving objects simultaneously over time given measurements with false alarms and missing detections. Such a task becomes particularly intricate if the initial target number and target states are unknown and if the individual targets are not separable. We employ a sequential Bayesian framework based on the Finite Set Statistics approach in order to estimate the number of targets and the target states simultaneously. The iterative filtering equations are solved numerically using Particle Filter techniques. We describe a method for sequential track extraction of multiple targets without relying on external information and present results for small groups of ground moving targets.

Ulmke, Martin

2005-09-01

413

A fast method to gather neighbors in vectorized Monte Carlo simulations

NASA Astrophysics Data System (ADS)

An algorithm is presented, which gathers neighbors for n-dimensional lattices with periodic boundary conditions. The method used does not need the usual GATHER instructions. With the exception of a few bit-vectors no additional storage is needed. The algorithm can be easily extended to more than nearest neighbor interactions. In a simulation of SU(3) lattice gauge theory on a 2-pipe CDC CYBER 205, using 32-bit arithmetic, the time spent on gathering neighboring links is reduced from 5.7 µs to 1.3 µs per link per update, and an update time of 17 µs per link is obtained.
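The core task (finding, for each lattice site, the flat indices of its nearest neighbours under periodic boundary conditions) can be sketched as a plain index computation; this conveys the setting of the paper, not the bit-vector vectorization it describes:

```python
def neighbor_indices(dims):
    """For every site of an n-dimensional periodic lattice with shape `dims`,
    compute the flat indices of its 2n nearest neighbours (one step forward
    and backward along each axis). Sites are flattened in row-major order."""
    n_sites = 1
    for d in dims:
        n_sites *= d
    strides, s = [], 1
    for d in reversed(dims):
        strides.append(s)
        s *= d
    strides.reverse()  # row-major strides
    neighbors = []
    for flat in range(n_sites):
        coords, rem = [], flat  # recover the site's coordinates
        for st in strides:
            coords.append(rem // st)
            rem %= st
        nbrs = []
        for axis, d in enumerate(dims):
            for step in (+1, -1):
                c = coords[:]
                c[axis] = (c[axis] + step) % d  # periodic wrap-around
                nbrs.append(sum(ci * st for ci, st in zip(c, strides)))
        neighbors.append(nbrs)
    return neighbors

nbrs = neighbor_indices((4, 4))  # 2-D 4x4 periodic lattice
```

On a vector machine the expensive part is fetching data at these scattered indices; the paper's contribution is arranging the update so that explicit GATHER operations over such tables become unnecessary.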

Vohwinkel, Claus

1988-11-01

414

NASA Astrophysics Data System (ADS)

Gravitational waves are on the verge of opening a brand new window on the Universe. However, gravitational wave astronomy comes with very unique challenges in data analysis and signal processing in order to lead to new discoveries in astrophysics. Among the sources of gravitational waves, inspiraling binary systems of compact objects, neutron stars and/or black holes in the mass range 1Msun--100Msun stand out as likely to be detected and relatively easy to model. The detection of a gravitational wave event is challenging and will be a rewarding achievement by itself. After such a detection, measurement of source properties holds major promise for improving our astrophysical understanding and requires reliable methods for parameter estimation and model selection. This is a complicated problem, because of the large number of parameters (15 for spinning compact objects in a quasi-circular orbit) and the degeneracies between them, the significant amount of structure in the parameter space, and the particularities of the detector noise. This work presents the development of a parameter-estimation and model-selection algorithm, based on Bayesian statistical theory and using Markov chain Monte Carlo methods for ground-based gravitational-wave detectors (LIGO and Virgo). This method started from existing non-spinning and single spin stand-alone analysis codes and was developed into a method able to tackle the complexity of fully spinning systems, and infer all spinning parameters of a compact binary. Not only are spinning parameters believed to be astrophysically significant, but this work has shown that not including them in the analysis can lead to biases in parameter recovery. This work made it possible to answer several scientific questions involving parameter estimation of inspiraling spinning compact objects, which are addressed in the chapters of this dissertation.
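At the heart of such an analysis is a Markov chain Monte Carlo sampler. A minimal random-walk Metropolis sketch on a toy one-dimensional posterior conveys the accept/reject mechanic; real analyses involve roughly 15 correlated parameters, realistic detector noise, and far more elaborate proposal schemes:

```python
import math
import random

def metropolis(log_post, x0, n_steps, step_size, seed=0):
    """Random-walk Metropolis sampler for a 1-D log-posterior: propose a
    Gaussian jump and accept with probability min(1, posterior ratio)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step_size)
        lp_prop = log_post(prop)
        if math.log(rng.random() + 1e-300) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Toy posterior: a standard normal, standing in for the 15-dimensional
# posterior over masses, spins, sky location, etc.
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20000, 1.0)
```

The chain's empirical distribution approximates the posterior, from which parameter credible intervals and model-selection evidence ratios are derived.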

Raymond, Vivien

415

Classification using standard statistical methods such as linear discriminant analysis (LDA) or logistic regression (LR) presumes knowledge of group membership prior to the development of an algorithm for prediction. However, in many real-world applications members of the same nominal group might in fact come from different subpopulations on the underlying construct. For example, individuals diagnosed with depression will not all have the same levels of this disorder, though for the purposes of LDA or LR they will be treated in the same manner. The goal of this simulation study was to examine the performance of several methods for group classification in the case where within-group membership was not homogeneous. For example, suppose there are 3 known groups but within each group two unknown classes. Several approaches were compared, including LDA, LR, classification and regression trees (CART), generalized additive models (GAM), and mixture discriminant analysis (MIXDA). Results of the study indicated that CART and mixture discriminant analysis were the most effective tools for situations in which known groups were not homogeneous, whereas LDA, LR, and GAM had the highest rates of misclassification. Implications of these results for theory and practice are discussed.

Finch, W. Holmes; Bolin, Jocelyn H.; Kelley, Ken

2014-01-01

416

Classification using standard statistical methods such as linear discriminant analysis (LDA) or logistic regression (LR) presumes knowledge of group membership prior to the development of an algorithm for prediction. However, in many real-world applications members of the same nominal group might in fact come from different subpopulations on the underlying construct. For example, individuals diagnosed with depression will not all have the same levels of this disorder, though for the purposes of LDA or LR they will be treated in the same manner. The goal of this simulation study was to examine the performance of several methods for group classification in the case where within-group membership was not homogeneous. For example, suppose there are 3 known groups but within each group two unknown classes. Several approaches were compared, including LDA, LR, classification and regression trees (CART), generalized additive models (GAM), and mixture discriminant analysis (MIXDA). Results of the study indicated that CART and mixture discriminant analysis were the most effective tools for situations in which known groups were not homogeneous, whereas LDA, LR, and GAM had the highest rates of misclassification. Implications of these results for theory and practice are discussed. PMID:24904445

Finch, W Holmes; Bolin, Jocelyn H; Kelley, Ken

2014-01-01

417

NASA Astrophysics Data System (ADS)

Health prognosis of equipment is considered as a key process of the condition based mainte