While these samples are representative of the content of Science.gov,

they are not comprehensive nor are they the most current set.

We encourage you to perform a real-time search of Science.gov

to obtain the most current and comprehensive results.

Last update: November 12, 2013.

1

NASA Astrophysics Data System (ADS)

Comparing different Monte Carlo codes is essential for understanding their limitations, avoiding systematic errors in simulations, and suggesting further improvements to the codes. MCNP4C and EGSnrc, two Monte Carlo codes commonly used in medical physics, were compared and evaluated against electron depth-dose data and experimental results obtained using clinical radiotherapy beams. The different physical models and algorithms used in the codes give significantly different depth-dose curves. The default version of MCNP4C calculates electron depth-dose curves that are too penetrating. The MCNP4C results agree better with experiment if the Integrated Tiger Series-style energy-indexing algorithm is used. EGSnrc uses a class II condensed-history (CH) scheme for the simulation of electron transport. To conclude the comparison, a timing study was performed. It showed that EGSnrc is generally faster than MCNP4C and that the use of a large number of scoring voxels dramatically slows down the MCNP4C calculation, whereas a large number of geometry voxels only slightly affects the speed of the calculation.

Jabbari, N.; Hashemi-Malayeri, B.; Farajollahi, A. R.; Kazemnejad, A.; Shafaei, A.; Jabbari, S.

2007-08-01

2

Comparing different Monte Carlo codes is essential for understanding their limitations, avoiding systematic errors in simulations, and suggesting further improvements to the codes. MCNP4C and EGSnrc, two Monte Carlo codes commonly used in medical physics, were compared and evaluated against electron depth-dose data and experimental results obtained using clinical radiotherapy beams. Different physical models and algorithms

N. Jabbari; B. Hashemi-Malayeri; A. R. Farajollahi; A. Kazemnejad; A. Shafaei; S. Jabbari

2007-01-01

3

NEPHTIS: 2D/3D validation elements using MCNP4c and TRIPOLI4 Monte-Carlo codes

High Temperature Reactors (HTRs) appear as a promising concept for the next generation of nuclear power applications. The CEA, in collaboration with AREVA-NP and EDF, is developing a core modeling tool dedicated to the prismatic block-type reactor. NEPHTIS (Neutronics Process for HTR Innovating System) is a deterministic code system based on a standard two-step Transport-Diffusion approach (APOLLO2/CRONOS2). Validation of such deterministic schemes usually relies on Monte-Carlo (MC) codes used as a reference. However, when dealing with large HTR cores the fission source stabilization is rather poor with MC codes. In spite of this, it is shown in this paper that MC simulations may be used as a reference for a wide range of configurations. The first part of the paper is devoted to 2D and 3D MC calculations of a HTR core with control devices. Comparisons between MCNP4c and TRIPOLI4 MC codes are performed and show very consistent results. Finally, the last part of the paper is devoted to the code-to-code validation of the NEPHTIS deterministic scheme. (authors)

Courau, T.; Girardi, E. [EDF R and D/SINETICS, 1av du General de Gaulle, F92141 Clamart CEDEX (France); Damian, F.; Moiron-Groizard, M. [DEN/DM2S/SERMA/LCA, CEA Saclay, F91191 Gif-sur-Yvette CEDEX (France)

2006-07-01

4

NASA Astrophysics Data System (ADS)

The energy dependence of the radiochromic film (RCF) response to beta-emitting sources was studied by theoretical dose calculations, employing the MCNP4C and EGSnrc/BEAMnrc Monte Carlo codes. Irradiations with virtual monochromatic electron sources, electron and photon clinical beams, a 32P intravascular brachytherapy (IVB) source and other beta-emitting radioisotopes (188Re, 90Y, 90Sr/90Y, 32P) were simulated. The MD-55-2 and HS radiochromic films (RCFs) were considered, in a planar or cylindrical irradiation geometry, with water or polystyrene as the surrounding medium. For virtual monochromatic sources, a monotonic decrease with energy of the dose absorbed in the film, relative to that absorbed in the surrounding medium, was evidenced. Considering the IVB 32P source and the MD-55-2 in a cylindrical geometry, calibration with a 6 MeV electron beam would yield dose underestimations from 14 to 23% as the source-to-film radial distance increases from 1 to 6 mm. For the planar beta-emitting sources in water, calibrations with photon or electron clinical beams would yield dose underestimations between 5 and 12%. Calibrating the RCF with 90Sr/90Y, the MD-55-2 would yield dose underestimations between 3 and 5% for 32P and discrepancies within ±2% for 188Re and 90Y, whereas for the HS the dose underestimation would reach 4% with 188Re and 6% with 32P.

Pacilio, M.; Aragno, D.; Rauco, R.; D'Onofrio, S.; Pressello, M. C.; Bianciardi, L.; Santini, E.

2007-07-01

5

The condensed-history electron transport algorithms in the Monte Carlo code MCNP4C are derived from ITS 3.0, which is a well-validated code for coupled electron-photon simulations. This, combined with its user-friendliness and versatility, makes MCNP4C a promising code for medical physics applications. Such applications, however, require a high degree of accuracy. In this work, MCNP4C electron depth-dose distributions in water are

Dennis R Schaart; Jan Th M Jansen; Johannes Zoetelief; Piet F A de Leege

2002-01-01

6

Calculation of the store house worker dose in a lost wax foundry using MCNP-4C.

Lost wax casting is an industrial process that permits the reproduction in metal of models made in wax. The wax model is covered with a siliceous shell of the required thickness; once this shell is built, the set is heated and the wax melted. Liquid metal is then cast into the shell, replacing the wax. When the metal is cool, the shell is broken away in order to recover the metallic piece. In this process zircon sands are used for the preparation of the siliceous shell. These sands have varying concentrations of the natural radionuclides 238U, 232Th and 235U together with their progeny. The zircon sand is distributed in bags of 50 kg, with 30 bags on a pallet weighing 1,500 kg. The pallets with the bags have dimensions 80 cm x 120 cm x 80 cm, and constitute the radiation source in this case. The only pathway of exposure to workers in the store house is external radiation: there is no dust because the bags are closed and covered by plastic, the store house has a good ventilation rate so radon cannot accumulate, and the workers do not touch the bags with their hands, so skin contamination cannot take place. In this study all situations of external irradiation of the workers have been considered: transportation of the pallets from the vehicle to the store house, lifting the pallets onto the shelf, resting of the stock on the shelf, taking the pallets down, and carrying the pallets to the production area. These exposure situations have been simulated using MCNP-4C, considering that the source has a homogeneous composition, that the minimum stock in the store house consists of 7 pallets, and the various distances between the pallets and the workers during these tasks. The photon flux obtained from MCNP-4C is multiplied by the flux-to-air-kerma conversion factor, by the kerma-to-effective-dose conversion factor, and by the number of emitted photons; these conversion factors are taken from ICRP 74, Tables 1 and 17, respectively.
In this way a function giving the dose rate around the source is obtained. PMID:16604600

Alegría, Natalia; Legarda, Fernando; Herranz, Margarita; Idoeta, Raquel

2005-01-01
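The dose-rate recipe described in the abstract above (simulated photon flux multiplied by a flux-to-air-kerma factor, a kerma-to-effective-dose factor, and the photon emission rate) can be sketched as follows. All numerical values below are hypothetical placeholders for illustration, not data from the study or from the ICRP 74 tables.

```python
def dose_rate(flux_per_source_photon, flux_to_kerma, kerma_to_dose,
              photons_per_second):
    """Effective dose rate (Sv/s) at a point near the source, following the
    chain: flux -> air kerma -> effective dose, scaled by the emission rate."""
    return (flux_per_source_photon * flux_to_kerma
            * kerma_to_dose * photons_per_second)

# Hypothetical numbers for a single photon energy line (assumed values):
flux = 2.0e-6      # photons/cm^2 per source photon (an MCNP-style tally)
f_kerma = 3.0e-12  # Gy*cm^2 per photon (flux-to-air-kerma factor)
f_dose = 1.0       # Sv/Gy (kerma-to-effective-dose factor)
emission = 5.0e7   # photons/s emitted by the stored pallets

print(dose_rate(flux, f_kerma, f_dose, emission))  # about 3e-10 Sv/s
```

In practice this product would be evaluated per photon energy line and summed, and repeated for each worker position tallied in the simulation.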

7

Monte Carlo simulations were performed to characterize various shielding configurations for an iodine-123 (I-123) imaging system. The system comprises two small (10×10 cm) field-of-view (FOV) gamma cameras to be used for conjugate imaging of I-123 labeled brain agents. Of all I-123 decays, 83% result in 159 keV gamma emissions, which can be readily imaged, while 3% result in emissions with energies

D. N. Jangha; Robert A. Mintzer; John D. Valentine; John N. Aarsvold

2001-01-01

8

This paper is intended to be a tutorial on multigrid Monte Carlo techniques, illustrated with two examples. Path-integral quantum Monte Carlo is seen to take only a finite amount of computer time even as the paths are discretized on infinitesimally small scales. A method for eliminating critical slowing down completely (even for models with discrete degrees of freedom, as in Potts models, or discrete excitations, such as isolated vortices in the XY model) is presented. 11 refs., 1 fig.

Loh, E. Jr.

1988-01-01

9

Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

Zimmerman, G.B.

1997-06-24

10

Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

Zimmerman, George B. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)

1997-04-15

11

The MCNPX Monte Carlo Radiation Transport Code

MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4c and has been upgraded to most MCNP5 capabilities. MCNP is a highly

Laurie S. Waters; Gregg W. McKinney; Joe W. Durkee; Michael L. Fensin; John S. Hendricks; Michael R. James; Russell C. Johns; Denise B. Pelowitz

2007-01-01

12

Lattice-switch Monte Carlo method

We present a Monte Carlo method for the direct evaluation of the difference between the free energies of two crystal structures. The method is built on a lattice-switch transformation that maps a configuration of one structure onto a candidate configuration of the other by ``switching'' one set of lattice vectors for the other, while keeping the displacements with respect to

A. D. Bruce; A. N. Jackson; G. J. Ackland; N. B. Wilding

2000-01-01

13

The use of low-energy photon emitters for brachytherapy applications, as in the treatment of the prostate or of eye tumours, has drastically increased in the last few years. New seed models for 103Pd and 125I have recently been introduced. The American Association of Physicists in Medicine recommends that measurements are made to obtain the dose rate constant, the radial dose

B. Reniers; F. Verhaegen; S. Vynckier

2004-01-01

14

Monte Carlo methods for high Tc superconductors

NASA Astrophysics Data System (ADS)

We explore in depth two zero-temperature Monte Carlo methods and apply the techniques to models of high temperature superconductors. Variational Monte Carlo provides a basis for comparing t - J model states from the literature to states we develop that capture striped phenomena seen in density matrix renormalization group (DMRG) studies. Green's function Monte Carlo (GFMC) is discussed in detail with special attention to the sources of error in the method that are not statistical in nature: finite numbers of walkers and nodal structure approximations. We find that approximating the nodes can prevent the GFMC from reaching an unbiased ground state. Two signals of this bias, the hole density and spin-spin correlations, are presented from simulations on a small cluster and represent conflicting "ground state" properties found by supplying the GFMC with varying nodal structures. Therefore we cannot confirm or refute the existence of stripes in the t - J model from a Monte Carlo standpoint within the parameter ranges relevant to high temperature superconductivity. We find that the controversy concerning the nature of the t - J model ground state can be attributed to this bias in the GFMC method.

Amadon, Jerry Christopher

15

Lattice-switch Monte Carlo method

We present a Monte Carlo method for the direct evaluation of the difference between the free energies of two crystal structures. The method is built on a lattice-switch transformation that maps a configuration of one structure onto a candidate configuration of the other by `switching' one set of lattice vectors for the other, while keeping the displacements with respect to

A. D. Bruce; A. N. Jackson; G. J. Ackland; N. B. Wilding

2000-01-01

16

Benchmark analysis of the TRIGA MARK II research reactor using Monte Carlo techniques

This study deals with the neutronic analysis of the current core configuration of a 3-MW TRIGA MARK II research reactor at Atomic Energy Research Establishment (AERE), Savar, Dhaka, Bangladesh and validation of the results by benchmarking with the experimental, operational and available Final Safety Analysis Report (FSAR) values. The 3-D continuous-energy Monte Carlo code MCNP4C was used to develop a

M. Q. Huda; M. Rahman; M. M. Sarker; S. I. Bhuiyan

2004-01-01

17

Gamma-ray spectrometry analysis of pebble bed reactor fuel using Monte Carlo simulations

Monte Carlo simulations were used to study the gamma-ray spectra of pebble bed reactor fuel at various levels of burnup. A fuel depletion calculation was performed using the ORIGEN2.1 code, which yielded the gamma-ray source term that was introduced into the input of an MCNP4C simulation. The simulation assumed the use of a 100% efficient high-purity coaxial germanium (HPGe) detector,

Jianwei Chen; Ayman I. Hawari; Zhongxiang Zhao; Bingjing Su

2003-01-01

18

Lattice-switch Monte Carlo method

NASA Astrophysics Data System (ADS)

We present a Monte Carlo method for the direct evaluation of the difference between the free energies of two crystal structures. The method is built on a lattice-switch transformation that maps a configuration of one structure onto a candidate configuration of the other by ``switching'' one set of lattice vectors for the other, while keeping the displacements with respect to the lattice sites constant. The sampling of the displacement configurations is biased, multicanonically, to favor paths leading to gateway arrangements for which the Monte Carlo switch to the candidate configuration will be accepted. The configurations of both structures can then be efficiently sampled in a single process, and the difference between their free energies evaluated from their measured probabilities. We explore and exploit the method in the context of extensive studies of systems of hard spheres. We show that the efficiency of the method is controlled by the extent to which the switch conserves correlated microstructure. We also show how, microscopically, the procedure works: the system finds gateway arrangements which fulfill the sampling bias intelligently. We establish, with high precision, the differences between the free energies of the two close packed structures (fcc and hcp) in both the constant density and the constant pressure ensembles.

Bruce, A. D.; Jackson, A. N.; Ackland, G. J.; Wilding, N. B.

2000-01-01

19

Calculating Pi Using the Monte Carlo Method

NASA Astrophysics Data System (ADS)

During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the center for sustainable energy at Notre Dame University (RET @ cSEND) working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation, a reactor may produce 10^21 antineutrinos per second with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and how total antineutrino flux could be obtained from such a small sample, I read about a simulation program called Monte Carlo.1 Further investigation led me to the Monte Carlo method page of Wikipedia2 where I saw an example of approximating pi using this simulation. Other examples where this method was applied were typically done with computer simulations2 or purely mathematical.3 It is my belief that this method may be easily related to the students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.

Williamson, Timothy

2013-11-01
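The pi-estimation simulation mentioned in the abstract above (the computational analogue of sprinkling rice on a quarter circle drawn in a square) can be sketched in a few lines:

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling points uniformly in the unit square and
    counting the fraction that lands inside the quarter circle x^2+y^2 <= 1.
    That fraction approximates (area of quarter circle)/(area of square) = pi/4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # close to 3.14159 for large n_samples
```

The statistical error shrinks like 1/sqrt(n_samples), which is why the classroom version needs a generous handful of rice to get a recognizable value of pi.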

20

Monte Carlo Methods in Transition Metal Alloys

NASA Astrophysics Data System (ADS)

Giant moments of several Bohr magnetons are formed in transition metal alloys where the matrix is palladium or platinum. The interaction between these giant moments produces a phase transition from paramagnetism to ferromagnetism, when the alloy is below the Curie temperature. These giant moments can be measured mainly by neutron diffraction, although several characteristics can be determined by magnetization measurements. In this work, several magnetic properties of these alloys are presented, based on calculations made mainly by Monte Carlo simulation of these properties. A localized moment model is used to simulate the formation of magnetization clouds and their transformation as the temperature is raised. The simulation allows the calculation of the critical temperatures of ferromagnetism, which are then compared with experimental measurements. In several of these alloys, unpolarized diffuse neutron scattering measurements show large forward peaks that would indicate giant moments larger than those obtained by magnetization measurements. We calculated, using Monte Carlo simulation, the diffuse neutron cross sections for these alloys and reproduced the neutron data. We find a significant quasielastic contribution to the scattering that can not be attributed to the magnetization cloud. The calculation methods were applied to several dilute and concentrated transition metal alloys. The results indicate that the methods and models used are valid for a large group of Pd and Pt based alloys.

Parra, Rixio

1997-08-01

21

Methods for Monte Carlo simulations of biomacromolecules

The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse graining strategies.

Vitalis, Andreas; Pappu, Rohit V.

2010-01-01

22

Improved method for implicit Monte Carlo

The Implicit Monte Carlo (IMC) method has been used for over 30 years to analyze radiative transfer problems, such as those encountered in stellar atmospheres or inertial confinement fusion. Reference [2] provided an exact error analysis of IMC for 0-D problems and demonstrated that IMC can exhibit substantial errors when timesteps are large. These temporal errors are inherent in the method and are in addition to spatial discretization errors and approximations that address nonlinearities (due to variation of physical constants). In Reference [3], IMC and four other methods were analyzed in detail and compared on both theoretical grounds and the accuracy of numerical tests. As discussed there, two alternative schemes for solving the radiative transfer equations, the Carter-Forest (C-F) method and the Ahrens-Larsen (A-L) method, do not exhibit the errors found in IMC; for 0-D, both of these methods are exact for all time, while for 3-D, A-L is exact for all time and C-F is exact within a timestep. These methods can yield substantially superior results to IMC.

Brown, F. B. (Forrest B.); Martin, W. R. (William R.)

2001-01-01

23

An experimental measurement study involving surface profilometry and scanning electron microscopy (SEM) combined with MCNP4C-based Monte Carlo simulations was performed to evaluate anode surface roughness in aged X-ray tubes and to quantitatively predict its impact on and relevance to the generated diagnostic X-ray spectra. Surface profilometry determined that the center-line average roughness in the most aged X-ray tube evaluated in our study

A. Mehranian; M. R. Ay; N. Riyahi Alam; H. Zaidi

2009-01-01

24

Comparative analysis of stationary-phase Monte Carlo methods

The authors consider the stationary-phase Monte Carlo method and a variety of related approaches. The stationary-phase Monte Carlo method is aimed at the generic problem of performing high-dimensional integrations of rapidly oscillatory integrands. Real time numerical path integration is one important class of applications where such problems arise. They examine the relationship between the stationary-phase Monte Carlo approach and the recent work of Makri and Miller and Filinov.

Doll, J.D.; Freeman, D.L.

1988-06-02

25

A Local Superbasin Kinetic Monte Carlo Method

NASA Astrophysics Data System (ADS)

A ubiquitous problem in atomic-scale simulation of materials is the small-barrier problem, in which the free-energy landscape presents ``superbasins'' with low intra-basin energy barriers relative to the inter-basin barriers. Rare-event simulation methods, such as kinetic Monte Carlo (KMC) and accelerated molecular dynamics, are inefficient for such systems because considerable effort is spent simulating short-time, intra-basin motion without evolving the system significantly. We developed an adaptive local-superbasin KMC algorithm (LSKMC) for treating fast, intra-basin motion using a Master-equation / Markov-chain approach and long-time evolution using KMC. Our algorithm is designed to identify local superbasins in an on-the-fly search during conventional KMC, construct the rate matrix, compute the mean exit time and its distribution, obtain the probability to exit to each of the superbasin border (absorbing) states, and integrate superbasin exits with non-superbasin moves. We demonstrate various aspects of the method in several examples, which also highlight the efficiency of the method.

Fichthorn, Kristen; Lin, Yangzheng

2013-03-01
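The core of the superbasin treatment described above, namely replacing many fast intra-basin KMC hops by a mean exit time computed from the basin's rate matrix, can be illustrated on a hypothetical two-state superbasin. The rates below are invented for illustration; the mean exit time from first-step analysis of the master equation is checked against a conventional KMC simulation.

```python
import random

# Hypothetical two-state superbasin: states A and B with fast intra-basin
# rates kAB, kBA and slow exit rates eA, eB to absorbing border states.
kAB, kBA = 50.0, 40.0   # intra-basin hops (fast), assumed values
eA, eB = 1.0, 2.0       # exits from the basin (slow), assumed values

# First-step analysis of the master equation:
#   tA = hA + pA*tB,   tB = hB + pB*tA
# where h is the mean waiting time and p the hop (non-exit) probability.
pA = kAB / (kAB + eA)   # probability that A hops to B before exiting
pB = kBA / (kBA + eB)
hA = 1.0 / (kAB + eA)   # mean waiting time in A
hB = 1.0 / (kBA + eB)
tA = (hA + pA * hB) / (1.0 - pA * pB)   # mean exit time starting from A
tB = hB + pB * tA                        # mean exit time starting from B

# Cross-check tA with a conventional event-by-event KMC simulation.
def kmc_exit_time(rng):
    t, state = 0.0, "A"
    while True:
        rates = {"A": [(kAB, "B"), (eA, None)],
                 "B": [(kBA, "A"), (eB, None)]}[state]
        total = sum(r for r, _ in rates)
        t += rng.expovariate(total)          # waiting time before next event
        r, acc = rng.random() * total, 0.0
        for rate, nxt in rates:              # pick the event proportionally
            acc += rate
            if r < acc:
                state = nxt
                break
        if state is None:                    # photon of the analogy: basin exited
            return t

rng = random.Random(1)
n = 20_000
est = sum(kmc_exit_time(rng) for _ in range(n)) / n
print(tA, est)  # the analytic and simulated values should agree to ~1%
```

The KMC run spends almost all of its events on the fast A-B hops, which is exactly the inefficiency the local-superbasin construction removes by handing back tA (and the exit-state probabilities) directly.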

26

Neutron transport calculations using Quasi-Monte Carlo methods

This paper examines the use of quasirandom sequences of points in place of pseudorandom points in Monte Carlo neutron transport calculations. For two simple demonstration problems, the root mean square error, computed over a set of repeated runs, is found to be significantly less when quasirandom sequences are used ("Quasi-Monte Carlo method") than when a standard Monte Carlo calculation is performed using only pseudorandom points.

Moskowitz, B.S.

1997-07-01
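The effect described in the abstract above can be reproduced with a toy integration rather than a transport problem: estimate the integral of x^2 on [0, 1] (exact value 1/3) with pseudorandom points and with the base-2 van der Corput sequence, one of the simplest quasirandom (low-discrepancy) sequences. The paper's actual test problems and sequences are not specified here; this is only an illustration of why quasirandom points reduce the error.

```python
import random

def van_der_corput(n, base=2):
    """n-th point of the base-b van der Corput low-discrepancy sequence
    (digit reversal of n about the radix point)."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

def integrate(points):
    """Sample-mean estimate of the integral of f(x) = x^2 over [0, 1]."""
    return sum(x * x for x in points) / len(points)

N = 4096
rng = random.Random(0)
mc_err = abs(integrate([rng.random() for _ in range(N)]) - 1.0 / 3.0)
qmc_err = abs(integrate([van_der_corput(i) for i in range(N)]) - 1.0 / 3.0)
print(mc_err, qmc_err)  # the quasirandom error is typically far smaller
```

Pseudorandom sampling converges like O(1/sqrt(N)), while low-discrepancy points achieve close to O(1/N) for smooth integrands, which mirrors the reduced root-mean-square error reported for the quasi-Monte Carlo transport runs.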

27

Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.

Urbatsch, T.J.

1995-11-01

28

This work presents an extensive study on Monte Carlo radiation transport simulation and thermoluminescent (TL) dosimetry for characterising mixed radiation fields (neutrons and photons) occurring in nuclear reactors. The feasibility of these methods is investigated for radiation fields at various locations of the Portuguese Research Reactor (RPI). The performance of the approaches developed in this work is compared with dosimetric techniques already existing at RPI. The Monte Carlo MCNP-4C code was used for a detailed modelling of the reactor core, the fast neutron beam and the thermal column of RPI. Simulations using these models reproduce the energy and spatial distributions of the neutron field very well (agreement better than 80%). In the case of the photon field, the agreement improves with decreasing intensity of the component related to fission and activation products. 7LiF:Mg,Ti, 7LiF:Mg,Cu,P and Al2O3:Mg,Y TL detectors (TLDs) with low neutron sensitivity are able to determine photon dose and dose profiles with high spatial resolution. On the other hand, natLiF:Mg,Ti TLDs with increased neutron sensitivity show a remarkable loss of sensitivity and a high supralinearity in high-intensity fields, hampering their application at nuclear reactors. PMID:16702246

Fernandes, A C; Gonçalves, I C; Santos, J; Cardoso, J; Santos, L; Ferro Carvalho, A; Marques, J G; Kling, A; Ramalho, A J G; Osvay, M

2006-05-15

29

Vectorized Monte Carlo Methods for Reactor Lattice Analysis.

National Technical Information Service (NTIS)

This report details some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer. While the principal applicatio...

F. B. Brown

1982-01-01

30

Perturbation Monte Carlo methods for tissue structure alterations.

This paper describes an extension of the perturbation Monte Carlo method to model light transport when the phase function is arbitrarily perturbed. Current perturbation Monte Carlo methods allow perturbation of both the scattering and absorption coefficients; however, the phase function cannot be varied. The more complex method we develop and test here is not limited in this way. We derive a rigorous perturbation Monte Carlo extension that can be applied to a large family of important biomedical light transport problems and demonstrate its greater computational efficiency compared with using conventional Monte Carlo simulations to produce forward transport problem solutions. The gains of the perturbation method occur because only a single baseline Monte Carlo simulation is needed to obtain forward solutions to other closely related problems whose input is described by perturbing one or more parameters from the input of the baseline problem. The new perturbation Monte Carlo methods are tested using tissue light scattering parameters relevant to epithelia where many tumors originate. The tissue model has parameters for the number density and average size of three classes of scatterers; whole nuclei, organelles such as lysosomes and mitochondria, and small particles such as ribosomes or large protein complexes. When these parameters or the wavelength is varied the scattering coefficient and the phase function vary. Perturbation calculations give accurate results over variations of ±15-25% of the scattering parameters. PMID:24156056

Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Spanier, Jerome

2013-09-04
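The central idea of perturbation Monte Carlo, reweighting histories from a single baseline simulation by the likelihood ratio of the perturbed to the baseline transport parameters, can be shown in a deliberately minimal setting: a purely absorbing 1-D slab with no scattering. This sketch does not implement the paper's phase-function extension, and all coefficients are assumed illustrative values.

```python
import math
import random

# Toy 1-D "tissue": a purely absorbing slab of thickness L (cm).
# We estimate transmission for a perturbed absorption coefficient mu_a_p
# WITHOUT rerunning the simulation, by reweighting baseline histories.
L = 1.0
mu_a = 2.0     # baseline absorption coefficient, 1/cm (assumed)
mu_a_p = 2.5   # perturbed absorption coefficient, 1/cm (assumed)

rng = random.Random(0)
N = 200_000
baseline_hits = 0
perturbed_tally = 0.0
for _ in range(N):
    # Free path sampled from the baseline coefficient.
    s = -math.log(1.0 - rng.random()) / mu_a
    if s > L:  # photon crosses the slab without being absorbed
        baseline_hits += 1
        # Likelihood ratio of this history under the perturbed parameters:
        # P(no collision in L) = exp(-mu*L), so the reweighting factor is
        perturbed_tally += math.exp(-(mu_a_p - mu_a) * L)

T_base = baseline_hits / N     # estimates exp(-mu_a * L)
T_pert = perturbed_tally / N   # estimates exp(-mu_a_p * L), no new simulation
print(T_base, math.exp(-mu_a * L))
print(T_pert, math.exp(-mu_a_p * L))
```

Because the perturbed estimate reuses the baseline random paths, it is strongly correlated with the baseline one, which is where the computational advantage over running a fresh simulation per parameter set comes from. Extending the weights to scattering coefficients and phase functions is the substance of the paper itself.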

31

Perturbation Monte Carlo methods for tissue structure alterations

This paper describes an extension of the perturbation Monte Carlo method to model light transport when the phase function is arbitrarily perturbed. Current perturbation Monte Carlo methods allow perturbation of both the scattering and absorption coefficients; however, the phase function cannot be varied. The more complex method we develop and test here is not limited in this way. We derive a rigorous perturbation Monte Carlo extension that can be applied to a large family of important biomedical light transport problems and demonstrate its greater computational efficiency compared with using conventional Monte Carlo simulations to produce forward transport problem solutions. The gains of the perturbation method occur because only a single baseline Monte Carlo simulation is needed to obtain forward solutions to other closely related problems whose input is described by perturbing one or more parameters from the input of the baseline problem. The new perturbation Monte Carlo methods are tested using tissue light scattering parameters relevant to epithelia where many tumors originate. The tissue model has parameters for the number density and average size of three classes of scatterers; whole nuclei, organelles such as lysosomes and mitochondria, and small particles such as ribosomes or large protein complexes. When these parameters or the wavelength is varied the scattering coefficient and the phase function vary. Perturbation calculations give accurate results over variations of ±15-25% of the scattering parameters.

Nguyen, Jennifer; Hayakawa, Carole K.; Mourant, Judith R.; Spanier, Jerome

2013-01-01

32

The radial depth-dose distribution of a prototype 188W/188Re beta particle line source of known activity has been measured in a PMMA phantom, using a novel, ultra-thin type of LiF:Mg,Cu,P thermoluminescent detector (TLD). The measured radial dose function of this intravascular brachytherapy source agrees well with MCNP4C Monte Carlo simulations, which indicate that 188Re accounts for ~99% of the dose between

Dennis R Schaart; Adrie J J Bos; August J M Winkelman; Martijn C Clarijs

2002-01-01

33

A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation

Full core calculations are very useful and important in reactor physics analysis, especially in computing the full core power distributions, optimizing the refueling strategies and analyzing the depletion of fuels. To reduce the computing time and accelerate the convergence, a method named Response Matrix Monte Carlo (RMMC), based on analog Monte Carlo simulation, was used to calculate fixed source neutron transport problems in repeated structures. To make more accurate calculations, we put forward the RMMC method based on non-analog Monte Carlo simulation and investigate the way to use the RMMC method in criticality calculations. Then a new hybrid RMMC and MC (RMMC+MC) method is put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of the calculations but also simulate more complex geometries than repeated structures. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC and RMMC+MC methods can efficiently reduce the computing time and the variance of the calculations. Finally, future research directions are mentioned and discussed at the end of this paper to make the RMMC and RMMC+MC methods more powerful. (authors)

Li, Z.; Wang, K. [Dept. of Engineering Physics, Tsinghua Univ., Beijing, 100084 (China)

2012-07-01

34

An adaptive sequential Monte Carlo method for approximate Bayesian computation

Approximate Bayesian computation (ABC) is a popular approach to address inference problems where the likelihood function is intractable, or expensive to calculate. To improve over Markov chain Monte Carlo (MCMC) implementations of ABC, the use of sequential Monte Carlo (SMC) methods has recently been suggested. Most effective SMC algorithms that are currently available for ABC have a computational complexity that

Pierre Del Moral; Arnaud Doucet; Ajay Jasra
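The rejection step at the heart of ABC can be sketched in a few lines. The following is a minimal illustrative example, not the SMC algorithm of the paper: it infers the mean of a normal distribution by keeping prior draws whose simulated sample mean lands near the observed one (the toy model, tolerance, and all names are assumptions for illustration).

```python
import random
import statistics

def abc_rejection(observed_mean, simulate, prior_sample, eps, n_accept):
    """Basic rejection ABC: keep parameter draws whose simulated
    summary statistic falls within eps of the observed one."""
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_sample()            # draw from the prior
        x = simulate(theta)               # simulate data given theta
        if abs(x - observed_mean) < eps:  # compare summary statistics
            accepted.append(theta)
    return accepted

random.seed(0)
# Toy problem: data are N(theta, 1); the summary statistic is the sample mean.
simulate = lambda th: statistics.fmean(random.gauss(th, 1.0) for _ in range(50))
prior = lambda: random.uniform(-5.0, 5.0)
post = abc_rejection(observed_mean=1.3, simulate=simulate,
                     prior_sample=prior, eps=0.2, n_accept=200)
```

The accepted draws approximate the posterior; SMC variants such as those discussed above replace the blind prior sampling with a sequence of intermediate distributions of decreasing eps.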

35

Extending the alias Monte Carlo sampling method to general distributions

The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equally probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte

A. L. Edwards; J. A. Rathkopf; R. K. Smidt

1991-01-01
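The alias method itself is compact. Below is a hedged sketch of the standard Vose table construction for a discrete distribution, not the authors' extension to general distributions; the function and variable names are illustrative.

```python
import random

def build_alias(probs):
    """Construct Vose alias tables for a discrete distribution."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for leftover in small + large:  # numerical leftovers get probability 1
        prob[leftover] = 1.0
    return prob, alias

def alias_sample(prob, alias):
    """Draw one index in O(1): pick a bin, then accept it or its alias."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]

random.seed(1)
p = [0.1, 0.2, 0.3, 0.4]
prob, alias = build_alias(p)
counts = [0] * 4
for _ in range(100_000):
    counts[alias_sample(prob, alias)] += 1
freqs = [c / 100_000 for c in counts]
```

Each draw costs one uniform and one comparison regardless of the table size, which is the speed advantage the abstract refers to.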

36

Theory of the Monte Carlo method for semiconductor device simulation

A brief review of the semiclassical Monte Carlo (MC) method for semiconductor device simulation is given, covering the standard MC algorithms, variance reduction techniques, the self-consistent solution, and the physical semiconductor model. A link between physically based MC methods and the numerical method of MC integration is established. The integral representations of the transient and the steady-state Boltzmann equations are presented

Hans Kosina; Michail Nedjalkov; Siegfried Selberherr

2000-01-01

37

A Multivariate Time Series Method for Monte Carlo Reactor Analysis

A robust multivariate time series method has been established for the Monte Carlo calculation of neutron multiplication problems. The method is termed Coarse Mesh Projection Method (CMPM) and can be implemented using the coarse statistical bins for acquisition of nuclear fission source data. A novel aspect of CMPM is the combination of the general technical principle of projection pursuit in the signal processing discipline and the neutron multiplication eigenvalue problem in the nuclear engineering discipline. CMPM enables reactor physicists to accurately evaluate major eigenvalue separations of nuclear reactors with continuous energy Monte Carlo calculation. CMPM was incorporated in the MCNP Monte Carlo particle transport code of Los Alamos National Laboratory. The great advantage of CMPM over the traditional Fission Matrix method is demonstrated for the three space-dimensional modeling of the initial core of a pressurized water reactor.

Taro Ueki

2008-08-14

38

A new method to assess Monte Carlo convergence

The central limit theorem can be applied to a Monte Carlo solution if the following two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these are satisfied, a confidence interval based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the type of Monte Carlo tally being used. The Monte Carlo practitioner has only a limited number of marginally quantifiable methods that use sampled values to assess the fulfillment of the second requirement; e.g., statistical error reduction proportional to 1/√N with error magnitude guidelines. No consideration is given to what has not yet been sampled. A new method is presented here to assess the convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores, f(x), where the random variable x is the score from one particle history and ∫_{-∞}^{∞} f(x) dx = 1. Since f(x) is seldom known explicitly, Monte Carlo particle random walks sample f(x) implicitly. Unless there is a largest possible history score, the empirical f(x) must eventually decrease more steeply than 1/x³ for the second moment (∫_{-∞}^{∞} x² f(x) dx) to exist.

Forster, R.A.; Booth, T.E.; Pederson, S.P.

1993-05-01
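The CLT-based confidence interval the abstract refers to can be illustrated directly: the half-width is proportional to √(variance/N), so it shrinks like 1/√N. A minimal sketch with toy exponential "history scores" (not MCNP tallies; the 95% z-value and sample sizes are assumptions):

```python
import math
import random

def mc_mean_ci(samples, z=1.96):
    """CLT-based estimate: sample mean plus a 95% confidence half-width
    that shrinks proportionally to 1/sqrt(N)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, z * math.sqrt(var / n)

random.seed(2)
# History scores drawn from an exponential distribution; true mean is 1.
scores = [random.expovariate(1.0) for _ in range(10_000)]
mean, hw = mc_mean_ci(scores)
# Quadrupling N should roughly halve the confidence half-width.
mean4, hw4 = mc_mean_ci(scores + [random.expovariate(1.0) for _ in range(30_000)])
```

The abstract's point is that this interval says nothing about scores not yet sampled, which is why the shape of the empirical f(x) must be examined as well.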

39

Purpose: The accurate prediction of x-ray spectra under typical conditions encountered in clinical x-ray examination procedures and the assessment of factors influencing them has been a long-standing goal of the diagnostic radiology and medical physics communities. In this work, the influence of anode surface roughness on diagnostic x-ray spectra is evaluated using MCNP4C-based Monte Carlo simulations. Methods: An image-based modeling method was used to create realistic models from surface-cracked anodes. An in-house computer program was written to model the geometric pattern of cracks and irregularities from digital images of focal track surface in order to define the modeled anodes into MCNP input file. To consider average roughness and mean crack depth into the models, the surface of anodes was characterized by scanning electron microscopy and surface profilometry. It was found that the average roughness (R_a) in the most aged tube studied is about 50 µm. The correctness of MCNP4C in simulating diagnostic x-ray spectra was thoroughly verified by calling its Gaussian energy broadening card and comparing the simulated spectra with experimentally measured ones. The assessment of anode roughness involved the comparison of simulated spectra in deteriorated anodes with those simulated in perfectly plain anodes considered as reference. From these comparisons, the variations in output intensity, half value layer (HVL), heel effect, and patient dose were studied. Results: An intensity loss of 4.5% and 16.8% was predicted for anodes aged by 5 and 50 µm deep cracks (50 kVp, 6 deg. target angle, and 2.5 mm Al total filtration). The variations in HVL were not significant as the spectra were not hardened by more than 2.5%; however, the trend for this variation was to increase with roughness. By deploying several point detector tallies along the anode-cathode direction and averaging exposure over them, it was found that for a 6 deg. anode, roughened by 50 µm deep cracks, the reduction in exposure is 14.9% and 13.1% for 70 and 120 kVp tube voltages, respectively. For the evaluation of patient dose, entrance skin radiation dose was calculated for typical chest x-ray examinations. It was shown that as anode roughness increases, patient entrance skin dose decreases on average by about 15%. Conclusions: It was concluded that the anode surface roughness can have a non-negligible effect on output spectra in aged x-ray imaging tubes and its impact should be carefully considered in diagnostic x-ray imaging modalities.

Mehranian, A.; Ay, M. R.; Alam, N. Riyahi; Zaidi, H. [Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, P.O. Box 14155-6447, Tehran (Iran, Islamic Republic of) and Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, P.O. Box 14185-615, Tehran (Iran, Islamic Republic of); Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, P.O. Box 14155-6447, Tehran (Iran, Islamic Republic of); Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, P.O. Box 14185-615, Tehran (Iran, Islamic Republic of) and Research Institute for Nuclear Medicine, Tehran University of Medical Sciences, P.O. Box 14155-6447, Tehran (Iran, Islamic Republic of); Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, P.O. Box 14155-6447, Tehran (Iran, Islamic Republic of); Division of Nuclear Medicine, Geneva University Hospital, CH-1211 Geneva (Switzerland) and Geneva Neuroscience Center, Geneva University, CH-1205 Geneva (Switzerland)

2010-02-15

40

Proton therapy analysis using the Monte Carlo method.

The range and straggling data obtained from the transport of ions in matter (TRIM) computer program were used to determine the trajectories of monoenergetic 60 MeV protons in muscle tissue by using the Monte Carlo technique. The appropriate profile for the shape of a proton pencil beam in proton therapy as well as the dose deposited in the tissue were computed. The good agreements between our results as compared with the corresponding experimental values are presented here to show the reliability of our Monte Carlo method. PMID:16094775

Noshad, Houshyar; Givechi, Nasim

2005-10-01

41

New possibilities and applications of Monte-Carlo methods

NASA Astrophysics Data System (ADS)

Potential new areas of application for the Monte Carlo method are discussed. The computation of the heat transfer and drag characteristics of aerobraked vehicles is one significant application, since the latitudes at which these systems would be deployed extend above the range which can be experimentally simulated. The application of Monte Carlo techniques to gas film lubrication problems associated with head-tape and head-disk interactions would represent the first time that this procedure has been applied in this extremely low speed regime. Some computational techniques which can be used to advantage in Monte Carlo calculations are also outlined, including the use of transformed body-fitted coordinate systems to reduce the time required to identify the cell location of a molecule and the use of an adaptive cell structure to place cells in preferred locations as the flowfield develops.

Merkle, C. L.

42

Monte Carlo simulation methods for reliability estimation and failure prognostics

Monte Carlo Simulation (MCS) offers a powerful means for modeling the stochastic failure behaviour of engineered structures, systems and components (SSC). This paper summarises current work on advanced MCS methods for reliability estimation and failure prognostics.

Enrico Zio

43

Quasi-Monte Carlo methods in cash flow testing simulations

What actuaries call cash flow testing is a large-scale simulation pitting a company's current policy obligation against future earnings based on interest rates. While life contingency issues associated with contract payoff are a mainstay of the actuarial sciences, modeling the random fluctuations of US Treasury rates is less studied. Furthermore, applying standard simulation techniques, such as the Monte Carlo method,

Michael G. Hilgers

2000-01-01

44

A Survey of Monte Carlo Tree Search Methods

Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature

Cameron B. Browne; Edward Powley; Daniel Whitehouse; Simon M. Lucas; Peter I. Cowling; Philipp Rohlfshagen; Stephen Tavener; Diego Perez; Spyridon Samothrakis; Simon Colton

2012-01-01
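The tree policy at the core of most MCTS variants surveyed above is the UCB1 rule, which balances a child's mean value against an exploration bonus. A minimal sketch of the selection step only (the dictionary-based node representation and the example statistics are assumptions for illustration):

```python
import math

def ucb1(child_value, child_visits, parent_visits, c=math.sqrt(2)):
    """UCB1 score: exploitation term plus an exploration bonus that
    grows for rarely visited children."""
    if child_visits == 0:
        return float("inf")  # unvisited children are tried first
    return child_value / child_visits + c * math.sqrt(
        math.log(parent_visits) / child_visits)

def select_child(children, parent_visits):
    """Pick the child maximizing UCB1, as in the UCT tree policy."""
    return max(children, key=lambda ch: ucb1(ch["value"], ch["visits"],
                                             parent_visits))

children = [
    {"name": "a", "value": 6.0, "visits": 10},  # mean 0.60, well explored
    {"name": "b", "value": 4.0, "visits": 5},   # mean 0.80, less explored
    {"name": "c", "value": 0.0, "visits": 0},   # never tried
]
best = select_child(children, parent_visits=15)
```

A full MCTS loop repeats selection down the tree, expansion, a random rollout, and backpropagation of the rollout result.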

45

Improved Monte Carlo Method for Evaluating Multidimensional Integrals.

National Technical Information Service (NTIS)

The goal of this work was to develop an improved Monte Carlo method and implement a computer code for performing automatic integration of multidimensional integrals of the form ∫ f(X) dX over a closed region in k-dimensional Euclidean space, where X...

S. K. Yuen

1977-01-01
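Plain Monte Carlo integration over the unit hypercube, with the usual standard-error estimate, can be sketched as follows. This is a baseline illustration, not the improved method of the report; the test integrand is an assumption:

```python
import random

def mc_integrate(f, k, n):
    """Estimate the integral of f over the unit hypercube [0,1]^k by
    averaging f at n uniform random points; also return the standard
    error of the estimate."""
    total = total_sq = 0.0
    for _ in range(n):
        x = [random.random() for _ in range(k)]
        fx = f(x)
        total += fx
        total_sq += fx * fx
    mean = total / n
    var = total_sq / n - mean * mean
    return mean, (var / n) ** 0.5

random.seed(3)
# The integral of sum(x_i) over [0,1]^5 is 5/2.
est, se = mc_integrate(lambda x: sum(x), k=5, n=50_000)
```

The error falls off as 1/√n independently of the dimension k, which is why Monte Carlo is attractive for exactly the multidimensional integrals the report targets.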

46

A simplified variable metric hybrid Monte Carlo method

NASA Astrophysics Data System (ADS)

We present a variable metric Hybrid Monte Carlo method following the ideas in [3], and propose a choice of metric that proves efficient when sampling from the potential of a stiff spring. This is the first step in extending these ideas to deal with more general potentials appearing in Molecular Dynamics.

Calvo, M. P.; Rodrigo, I.; Sanz-Serna, J. M.

2013-10-01

47

Bayesian methods, maximum entropy, and quantum Monte Carlo

We heuristically discuss the application of the method of maximum entropy to the extraction of dynamical information from imaginary-time, quantum Monte Carlo data. The discussion emphasizes the utility of a Bayesian approach to statistical inference and the importance of statistically well-characterized data. 14 refs.

Gubernatis, J.E.; Silver, R.N. (Los Alamos National Lab., NM (United States)); Jarrell, M. (Cincinnati Univ., OH (United States))

1991-01-01

48

Parallelization of the Worldline Quantum Monte Carlo Method

The worldline quantum Monte Carlo method is a computational technique for studying the properties of many-electron and quantum-spin systems. In this paper, we describe our efforts in developing an efficient implementation of this method for the massively-parallel Connection Machine CM-2. We discuss why one must look beyond the obvious parallelism in the method in order to reduce interprocessor communication and

J. E. Gubernatis; W. R. Somsky

1992-01-01

49

An Adaptive Markov Chain Monte Carlo Method for GARCH Model

NASA Astrophysics Data System (ADS)

We propose a method to construct a proposal density for the Metropolis-Hastings algorithm in Markov Chain Monte Carlo (MCMC) simulations of the GARCH model. The proposal density is constructed adaptively by using the data sampled by the MCMC method itself. It turns out that autocorrelations between the data generated with our adaptive proposal density are greatly reduced. Thus it is concluded that the adaptive construction method is very efficient and works well for the MCMC simulations of the GARCH model.

Takaishi, Tetsuya
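The general idea of adapting a Metropolis-Hastings proposal from the chain's own output can be illustrated with a toy example. The sketch below tunes a random-walk proposal scale toward a target acceptance rate on a standard normal target; it is an assumption-laden stand-in, not the paper's GARCH-specific proposal construction:

```python
import math
import random

def adaptive_metropolis(log_target, x0, n_steps, target_accept=0.4):
    """Random-walk Metropolis whose proposal width adapts toward a
    target acceptance rate using the chain's running statistics
    (a diminishing, Robbins-Monro-style adaptation)."""
    x, scale = x0, 1.0
    chain, accepts = [], 0
    for i in range(1, n_steps + 1):
        prop = x + random.gauss(0.0, scale)
        if math.log(random.random()) < log_target(prop) - log_target(x):
            x, accepts = prop, accepts + 1
        chain.append(x)
        # Nudge the scale; the step size shrinks as 1/sqrt(i).
        scale *= math.exp((accepts / i - target_accept) / math.sqrt(i))
    return chain, accepts / n_steps

random.seed(4)
log_norm = lambda x: -0.5 * x * x  # standard normal target, up to a constant
chain, acc_rate = adaptive_metropolis(log_norm, x0=5.0, n_steps=20_000)
post_mean = sum(chain[5000:]) / len(chain[5000:])
```

Reduced autocorrelation, the paper's headline result, comes from making the proposal resemble the target; here the adaptation only tunes a single width parameter.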

50

A new method to assess Monte Carlo convergence

The central limit theorem can be applied to a Monte Carlo solution if the following two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these are satisfied, a confidence interval based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the type of Monte Carlo tally being used. The Monte Carlo practitioner has only a limited number of marginally quantifiable methods that use sampled values to assess the fulfillment of the second requirement; e.g., statistical error reduction proportional to 1/√N with error magnitude guidelines. No consideration is given to what has not yet been sampled. A new method is presented here to assess the convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores, f(x), where the random variable x is the score from one particle history and ∫_{-∞}^{∞} f(x) dx = 1.

Forster, R.A.; Booth, T.E.; Pederson, S.P.

1993-01-01

51

A multilayer Monte Carlo method with free phase function choice

NASA Astrophysics Data System (ADS)

This paper presents an adaptation of the widely accepted Monte Carlo method for Multi-layered media (MCML). Its original Henyey-Greenstein phase function is an interesting approach for describing how light scattering inside biological tissues occurs. It has the important advantage of generating deflection angles in an efficient, and therefore computationally fast, manner. However, in order to allow the fast generation of the phase function, the MCML code generates a distribution for the cosine of the deflection angle instead of generating a distribution for the deflection angle, causing a bias in the phase function. Moreover, other, more elaborate phase functions are not available in the MCML code. To overcome these limitations of MCML, it was adapted to allow the use of any discretized phase function. An additional tool allows generating a numerical approximation for the phase function for every layer. This could either be a discretized version of (1) the Henyey-Greenstein phase function, (2) a modified Henyey-Greenstein phase function or (3) a phase function generated from the Mie theory. These discretized phase functions are then stored in a look-up table, which can be used by the adapted Monte Carlo code. The Monte Carlo code with flexible phase function choice (fpf-MC) was compared and validated with the original MCML code. The novelty of the developed program is the generation of a user-friendly algorithm, which allows several types of phase functions to be generated and applied in a Monte Carlo method, without compromising the computational performance.

Watté, R.; Aernouts, B.; Saeys, W.

2012-05-01

52

Quasi-Monte Carlo methods with applications in finance

We review the basic principles of quasi-Monte Carlo (QMC) methods, the randomizations that turn them into variance-reduction techniques, the integration error and variance bounds obtained in terms of QMC point set discrepancy and variation of the integrand, and the main classes of point set constructions: lattice rules, digital nets, and permutations in different bases. QMC methods are designed to estimate

Pierre L'Ecuyer

2009-01-01
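A concrete feel for QMC point sets: the Halton sequence is built from radical inverses in coprime bases. The sketch below (illustrative, not from the paper) compares a Halton estimate of a smooth 2-D integral against plain Monte Carlo with the same number of points:

```python
import random

def radical_inverse(n, base):
    """Van der Corput radical inverse: reflect the base-b digits of n
    about the radix point."""
    inv, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def halton(n_points, bases=(2, 3)):
    """2-D Halton low-discrepancy sequence (bases must be coprime)."""
    return [[radical_inverse(i, b) for b in bases]
            for i in range(1, n_points + 1)]

f = lambda x, y: x * y  # integral over [0,1]^2 is 1/4
n = 4096
qmc_est = sum(f(x, y) for x, y in halton(n)) / n

random.seed(5)
mc_est = sum(f(random.random(), random.random()) for _ in range(n)) / n
qmc_err, mc_err = abs(qmc_est - 0.25), abs(mc_est - 0.25)
```

For smooth integrands the QMC error decays nearly as 1/n rather than the Monte Carlo 1/√n, which is the source of the variance reductions discussed above.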

53

Uncertainties in external dosimetry: analytical vs. Monte Carlo method.

Over the years, the International Commission on Radiological Protection (ICRP) and other organisations have formulated recommendations regarding uncertainty in occupational dosimetry. The most practical and widely accepted recommendations are the trumpet curves. To check whether routine dosemeters comply with them, a Technical Report on uncertainties issued by the International Electrotechnical Commission (IEC) can be used. In this report, the analytical method is applied to assess the uncertainty of a dosemeter fulfilling an IEC standard. On the other hand, the Monte Carlo method can be used to assess the uncertainty. In this work, a direct comparison of the analytical and the Monte Carlo methods is performed using the same input data. It turns out that the analytical method generally overestimates the uncertainty by about 10-30 %. Therefore, the results often do not comply with the recommendations of the ICRP regarding uncertainty. The results of the more realistic uncertainty evaluation using the Monte Carlo method usually comply with the recommendations of the ICRP. This is confirmed by results seen in regular tests in Germany. PMID:19942627

Behrens, R

2009-11-26
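The two approaches compared in the abstract can be illustrated on a toy model y = a·b, where first-order analytical propagation adds relative uncertainties in quadrature while the Monte Carlo route samples the inputs directly. A minimal sketch; the paper's dosemeter model is far richer, and the input values here are assumptions:

```python
import math
import random

def analytic_rel_uncertainty(rel_a, rel_b):
    """First-order propagation for y = a * b: relative standard
    uncertainties add in quadrature."""
    return math.sqrt(rel_a ** 2 + rel_b ** 2)

def mc_rel_uncertainty(a, rel_a, b, rel_b, n=200_000):
    """Monte Carlo propagation: sample the inputs, form y, and take
    the relative standard deviation of the results."""
    ys = [random.gauss(a, a * rel_a) * random.gauss(b, b * rel_b)
          for _ in range(n)]
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return math.sqrt(var) / mean

random.seed(6)
ana = analytic_rel_uncertainty(0.05, 0.08)  # 5% and 8% input uncertainties
mc = mc_rel_uncertainty(10.0, 0.05, 2.0, 0.08)
```

For small, symmetric input uncertainties the two agree closely; the discrepancies the paper reports arise for the non-linear, non-Gaussian response of a realistic dosemeter model.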

54

RTS&T Monte Carlo code (facilities and computation methods)

The paper describes facilities and computation methods of the new RTS&T Monte Carlo code. This code performs simulations of three dimensional electromagnetic shower development and low energy neutron production and transport in accelerator and in shielding components with a calculation of the isotope transmutation problem. RTS&T is based on a compilation from ENDF/B-VI, JENDL-3, EAF, FENDL and EPNDL evaluated data

A. I. Blokhiny; I. I. Degtyarev; A. E. Lokhovitskii; M. A. Maslov; I. A. Yazynin

1997-01-01

55

MONTE CARLO ERROR ESTIMATION APPLIED TO NONDESTRUCTIVE ASSAY METHODS

Monte Carlo randomization of nuclear counting data into N replicate sets is the basis of a simple and effective method for estimating error propagation through complex analysis algorithms such as those using neural networks or tomographic image reconstructions. The error distributions of properly simulated replicate data sets mimic those of actual replicate measurements and can be used to estimate the std. dev. for an assay along with other statistical quantities. We have used this technique to estimate the standard deviation in radionuclide masses determined using the tomographic gamma scanner (TGS) and combined thermal/epithermal neutron (CTEN) methods. The effectiveness of this approach is demonstrated by a comparison of our Monte Carlo error estimates with the error distributions in actual replicate measurements and simulations of measurements. We found that the std. dev. estimated this way quickly converges to an accurate value on average and has a predictable error distribution similar to N actual repeat measurements. The main drawback of the Monte Carlo method is that N additional analyses of the data are required, which may be prohibitively time consuming with slow analysis algorithms.

R. Estep; et al.

2000-06-01
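The replicate-randomization idea can be sketched directly: draw Poisson-randomized copies of the measured counts, run each replicate set through the analysis, and take the standard deviation of the results. The analysis function below is a trivial stand-in for the TGS/CTEN algorithms, and the count values are assumptions:

```python
import math
import random

def poisson_sample(lam):
    """Knuth's multiplication method; adequate for the modest count
    rates used in this sketch."""
    limit, k, prod = math.exp(-lam), 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

def replicate_error(counts, analysis, n_replicates=500):
    """Randomize each measured count with Poisson statistics, push
    every replicate set through the analysis, and report the standard
    deviation of the replicate results."""
    results = []
    for _ in range(n_replicates):
        replicate = [poisson_sample(c) for c in counts]
        results.append(analysis(replicate))
    mean = sum(results) / len(results)
    return math.sqrt(sum((r - mean) ** 2 for r in results)
                     / (len(results) - 1))

random.seed(7)
counts = [100, 250, 400]                 # measured counts per channel
total_activity = lambda reps: sum(reps)  # stand-in for a complex algorithm
sigma = replicate_error(counts, total_activity)
```

As the abstract notes, the cost is N extra runs of the analysis; the benefit is an error estimate that works even when the algorithm (a neural network, a tomographic reconstruction) has no tractable analytical propagation.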

56

This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 ¹²⁵I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte-Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions together with a published ¹²⁵I spectrum were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water material in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGy h⁻¹ U⁻¹ (±1.73%) and 0.965 cGy h⁻¹ U⁻¹ (±1.68%), respectively. Overall, the MCNP5-derived radial dose and 2D anisotropy function results were generally closer to the measured data (within ±4%) than MCNP4c and the published data for the PTRAN code (Version 7.43), while the opposite was seen for the dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.

Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S. [Center for Research in Medical Physics and Biomedical Engineering and Physics Unit, Radiotherapy Department, Shiraz University of Medical Sciences, Shiraz 71936-13311 (Iran, Islamic Republic of); Radiation Research Center and Medical Radiation Department, School of Engineering, Shiraz University, Shiraz 71936-13311 (Iran, Islamic Republic of); Comprehensive Cancer Center of Nevada, Las Vegas, Nevada 89169 (United States)

2012-08-15

57

Parallelization of the Worldline Quantum Monte Carlo Method

NASA Astrophysics Data System (ADS)

The worldline quantum Monte Carlo method is a computational technique for studying the properties of many-electron and quantum-spin systems. In this paper, we describe our efforts in developing an efficient implementation of this method for the massively-parallel Connection Machine CM-2. We discuss why one must look beyond the obvious parallelism in the method in order to reduce interprocessor communication and increase processor utilization, and how these goals may be achieved using a plaquette-based data representation. We also present performance statistics for our implementation and sample calculations for the spinless fermion model.

Gubernatis, J. E.; Somsky, W. R.

58

Parallelization of the worldline quantum Monte Carlo method

The worldline quantum Monte Carlo method is a computational technique for studying the properties of many-electron and quantum-spin systems. In this paper, the authors describe their efforts in developing an efficient implementation of this method for the massively-parallel Connection Machine CM-2. They discuss why one must look beyond the obvious parallelism in the method in order to reduce interprocessor communication and increase processor utilization, and how these goals may be achieved using a plaquette-based data representation. They also present performance statistics for the implementation and sample calculations for the spinless fermion model.

Gubernatis, J.E.; Somsky, W.R. (Los Alamos National Lab., Los Alamos, NM (United States))

1992-01-01

59

Novel extrapolation method in the Monte Carlo shell model

We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full pf-shell calculation of ⁵⁶Ni, and the applicability of the method to a system beyond the current limit of exact diagonalization is shown for the pf+g9/2-shell calculation of ⁶⁴Ge.

Shimizu, Noritaka; Abe, Takashi [Department of Physics, University of Tokyo, Hongo, Tokyo 113-0033 (Japan); Utsuno, Yutaka [Advanced Science Research Center, Japan Atomic Energy Agency, Tokai, Ibaraki 319-1195 (Japan); Mizusaki, Takahiro [Institute of Natural Sciences, Senshu University, Tokyo, 101-8425 (Japan); Otsuka, Takaharu [Department of Physics, University of Tokyo, Hongo, Tokyo 113-0033 (Japan); Center for Nuclear Study, University of Tokyo, Hongo Tokyo 113-0033 (Japan); National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, Michigan (United States); Honma, Michio [Center for Mathematical Sciences, Aizu University, Ikki-machi, Aizu-Wakamatsu, Fukushima 965-8580 (Japan)

2010-12-15

60

Solving the many body pairing problem through Monte Carlo methods

NASA Astrophysics Data System (ADS)

Nuclear superconductivity is a central part of quantum many-body dynamics. In mesoscopic systems such as atomic nuclei, this phenomenon is influenced by shell effects, mean-field deformation, particle decay, and by other collective and chaotic components of nucleon motion. The ability to find an exact solution to these pairing correlations is of particular importance. In this presentation we develop and investigate the effectiveness of different methods of attacking the nucleon pairing problem in nuclei. In particular, we concentrate on the Monte Carlo approach. We review the configuration space Monte Carlo techniques, the Suzuki-Trotter breakup of the time evolution operator, and treatment of the pairing problem with non-constant matrix elements. The quasi-spin symmetry allows for a mapping of the pairing problem onto a problem of interacting spins which in turn can be solved using a Monte Carlo approach. The algorithms are investigated for convergence to the true ground state of model systems and calculated ground state energies are compared to those found by an exact diagonalization method. The possibility to include other non-pairing interaction components of the Hamiltonian is also investigated.

Lingle, Mark; Volya, Alexander

2012-03-01

61

On Monte Carlo Methods and Applications in Geoscience

NASA Astrophysics Data System (ADS)

Monte Carlo methods are designed to study various deterministic problems using probabilistic approaches, and with computer simulations to explore much wider possibilities for the different algorithms. Pseudo-Random Number Generators (PRNGs) are based on linear congruences of some large prime numbers, while Quasi-Random Number Generators (QRNGs) provide low discrepancy sequences, both of which give uniformly distributed numbers in (0,1). Chaotic Random Number Generators (CRNGs) give sequences of 'random numbers' satisfying some prescribed probabilistic density, often denser around the two corners of interval (0,1), but transforming this type of density to a uniform one is usually possible. Markov Chain Monte Carlo (MCMC), as indicated by its name, is associated with Markov Chain simulations. Basic descriptions of these random number generators will be given, and a comparative analysis of these four methods will be included based on their efficiencies and other characteristics. Some applications in geoscience using Monte Carlo simulations will be described, and a comparison of these algorithms will also be included with some concluding remarks.

Zhang, Z.; Blais, J.

2009-05-01
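The CRNG behaviour described above can be demonstrated with the logistic map at full chaos, whose invariant density 1/(π√(x(1−x))) is indeed densest near the two corners of (0,1); the arcsine transform maps the iterates to a uniform density. A minimal sketch (the seed value is arbitrary):

```python
import math

def logistic_crng(x0, n):
    """Chaotic 'random numbers' from the logistic map x -> 4x(1-x)."""
    xs, x = [], x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

def to_uniform(xs):
    """Transform the arcsine-distributed iterates to uniform(0,1):
    u = (2/pi) * asin(sqrt(x))."""
    return [2.0 * math.asin(math.sqrt(x)) / math.pi for x in xs]

xs = logistic_crng(0.123456789, 100_000)
us = to_uniform(xs)
mean_u = sum(us) / len(us)
below_half = sum(u < 0.5 for u in us) / len(us)
```

After the transform the sequence has uniform marginals, though, unlike PRNG output, successive values remain deterministically linked through the map.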

62

Response kernel density estimation Monte Carlo method for electron transport

NASA Astrophysics Data System (ADS)

Electron transport simulation plays an important role in the dose calculation in electron cancer therapy as well as in many other fields. Traditional numerical solutions for particle transport are inadequate because of the extremely anisotropic collisions between electrons and the background medium. In principle, analog Monte Carlo (AMC) can be used; however, the large cross section for Coulombic interactions makes it of limited use due to the large amount of computer time required for typical simulations. Several techniques, such as the condensed history random walk technique, have been proposed and investigated to improve the efficiency of AMC. However, the approximations used in these techniques either reduce their accuracy, or restrict them to certain applications. The response kernel density estimation Monte Carlo method (RKMC) proposed in this study attempts to improve the efficiency of AMC without sacrificing accuracy. A complete Monte Carlo electron transport calculation is divided into two steps in RKMC. In the first step, or local calculation, a series of AMC simulations are performed to collect electron state data in phase space after the electrons experience multiple scattering. In the second step, or global calculation, the adaptive kernel density estimation method is used to construct the probability density functions (pdf's) from the recorded data set, which are sampled efficiently by a specially designed Monte Carlo sampling scheme. Since the electron multiple scattering pdf's come from the AMC simulations, all the effects of multiple scattering are taken into consideration. Therefore, RKMC is expected to be both accurate and efficient. A RKMC code was developed first to test the method against an AMC code. The test case results showed that the RKMC code was approximately 100 times faster than the AMC code.
The method was also incorporated into EGS4, an industry standard electron transport condensed history Monte Carlo (CHMC) code, as a replacement for Moliere's multiple scattering theory (MMST). A clear improvement in both accuracy and efficiency over EGS4 was observed for the low and intermediate energy range (10 keV to 20 MeV) electron transport simulations, because lateral displacements are considered in RKMC and the restrictions on transport step size are eliminated. All the results of our study show that RKMC is a promising method for electron transport simulations.

Du, Jie

63

Mammography X-Ray Spectra Simulated with Monte Carlo

Monte Carlo calculations have been carried out to obtain the x-ray spectra of various target-filter combinations for a mammography unit. Mammography is widely used to diagnose breast cancer. In addition to the Mo target with Mo filter combination, Rh/Rh, Mo/Rh, Mo/Al, Rh/Al, and W/Rh are also utilized. In this work, Monte Carlo calculations using the MCNP 4C code were carried out to estimate the x-ray spectra produced when a beam of 28 keV electrons collided with Mo, Rh, and W targets. The resulting x-ray spectra show characteristic x-rays and continuous bremsstrahlung. Spectra were also calculated including filters.

Vega-Carrillo, H. R.; Gonzalez, J. Ramirez; Manzanares-Acuna, E.; Hernandez-Davila, V. M.; Villasana, R. Hernandez; Mercado, G. A. [Universidad Autonoma de Zacatecas Apdo. Postal 336, 98000 Zacatecas, Zac. Mexico (Mexico)

2008-08-11

64

Stabilized multilevel Monte Carlo method for stiff stochastic differential equations

NASA Astrophysics Data System (ADS)

A multilevel Monte Carlo (MLMC) method for mean square stable stochastic differential equations with multiple scales is proposed. For such problems, which we call stiff, the performance of MLMC methods based on classical explicit methods deteriorates because the time step restriction needed to resolve the fastest scales prevents exploiting all the levels of the MLMC approach. We show that by switching to explicit stabilized stochastic methods and balancing the stabilization procedure simultaneously with the hierarchical sampling strategy of MLMC methods, the computational cost for stiff systems is significantly reduced, while keeping the computational algorithm fully explicit and easy to implement. Numerical experiments on linear and nonlinear stochastic differential equations and on a stochastic partial differential equation illustrate the performance of the stabilized MLMC method and corroborate our theoretical findings.
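The telescoping sum at the heart of MLMC can be sketched on a non-stiff toy problem. The following is a minimal illustration, assuming scalar geometric Brownian motion dX = aX dt + bX dW with a = 0.05, b = 0.2 (values chosen for this sketch; the paper's stabilized integrators are not reproduced here): level l uses 2^l Euler-Maruyama steps, and each correction level couples coarse and fine paths through shared Brownian increments.

```python
import numpy as np

rng = np.random.default_rng(0)

def level_correction(a, b, T, n_fine, n_paths):
    """E[P_l - P_{l-1}]: coupled Euler-Maruyama paths of dX = aX dt + bX dW;
    the coarse path consumes the summed Brownian increments of the fine one."""
    dt = T / n_fine
    Xf = np.ones(n_paths)
    Xc = np.ones(n_paths)
    for _ in range(n_fine // 2):
        dW1 = rng.normal(0.0, np.sqrt(dt), n_paths)
        dW2 = rng.normal(0.0, np.sqrt(dt), n_paths)
        Xf += a * Xf * dt + b * Xf * dW1      # two fine steps
        Xf += a * Xf * dt + b * Xf * dW2
        Xc += a * Xc * 2 * dt + b * Xc * (dW1 + dW2)   # one coarse step
    return np.mean(Xf - Xc)

def mlmc_mean(a, b, T, L, n_paths):
    """Telescoping MLMC estimate of E[X_T] (exact value exp(a*T))."""
    dW = rng.normal(0.0, np.sqrt(T), n_paths)
    est = np.mean(1.0 + a * T + b * dW)       # level 0: a single Euler step
    for l in range(1, L + 1):
        est += level_correction(a, b, T, 2 ** l, n_paths)
    return est

est = mlmc_mean(0.05, 0.2, 1.0, 5, 20000)     # exact answer: e^0.05 ~ 1.0513
```

Because each correction has small variance, far fewer samples would suffice on the fine levels; equal sample sizes are used here only to keep the sketch short.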

Abdulle, Assyr; Blumenthal, Adrian

2013-10-01

65

Comparison of vectorization methods used in a Monte Carlo code

This paper examines vectorization methods used in Monte Carlo codes for particle transport calculations. Event and zone selection methods developed from conventional all-zone and one-zone algorithms have been implemented in a general-purpose vectorized code, GMVP. Moreover, a vectorization procedure to treat multiple-lattice geometry has been developed using these methods. Use of lattice geometry can reduce the computation cost for a typical pressurized water reactor fuel subassembly calculation, especially when the zone selection method is used. Sample calculations for external and fission source problems are used to compare the performances of both methods with the results of conventional scalar codes. Though the speedup resulting from vectorization depends on the problem solved, a factor of 7 to 10 is obtained for practical problems on the FACOM VP-100 computer compared with the conventional scalar code, MORSE-CG.

Nakagawa, M.; Mori, T.; Sasaki, M. (Japan Atomic Energy Research Inst., Tokai Establishment Tokai-mura, Ibaraki-ken 319-11 (JP))

1991-01-01

66

A simple eigenfunction convergence acceleration method for Monte Carlo

Monte Carlo transport codes typically use a power iteration method to obtain the fundamental eigenfunction. The standard convergence rate for the power iteration method is the ratio of the first two eigenvalues, that is, k2/k1. Modifications to the power method have accelerated the convergence by explicitly calculating the subdominant eigenfunctions as well as the fundamental. Calculating the subdominant eigenfunctions requires using particles of negative and positive weights and appropriately canceling the negative and positive weight particles. Incorporating both negative weights and a ± weight cancellation requires a significant change to current transport codes. This paper presents an alternative convergence acceleration method that does not require modifying the transport codes to deal with the problems associated with tracking and cancelling particles of ± weights. Instead, only positive weights are used in the acceleration method.
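The role of the dominance ratio can be seen with a deterministic stand-in. The sketch below runs plain power iteration on a small symmetric matrix (a made-up surrogate, not a transport calculation); the error decays at a rate set by the ratio of the first two eigenvalues, which is the k2/k1 behaviour described above.

```python
import numpy as np

# Made-up symmetric "fission matrix" surrogate for the transport operator.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

v = np.ones(3)
for _ in range(200):          # power iteration: repeatedly apply A, renormalize
    v = A @ v
    v /= np.linalg.norm(v)

k1 = v @ A @ v                # Rayleigh quotient: converged dominant eigenvalue
eigs = np.sort(np.linalg.eigvalsh(A))[::-1]
dominance_ratio = eigs[1] / eigs[0]   # sets the convergence rate, as with k2/k1
```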

Booth, Thomas E [Los Alamos National Laboratory

2010-11-18

67

Isochronal sampling in non-Boltzmann Monte Carlo methods

NASA Astrophysics Data System (ADS)

Non-Boltzmann sampling (NBS) methods are usually able to overcome ergodicity issues from which conventional Monte Carlo methods often suffer. In short, NBS methods are meant to broaden the sampling range of some suitable order parameter (e.g., energy). For many years, a standard for their development has been the choice of sampling weights that yield uniform sampling of a predefined parameter range. However, Trebst et al. [Phys. Rev. E 70, 046701 (2004)] demonstrated that better results are obtained by choosing weights that reduce as much as possible the average number of steps needed to complete a roundtrip in that range. In the present work, we prove that the method they developed to minimize roundtrip times also equalizes downtrip and uptrip times. Then, we propose a discrete-parameter extension using such isochronal character as our main goal. To assess the features of the new method, we carry out simulations of a spin system and of lattice chains designed to exhibit a folding transition, thus being suitable models for proteins. Our results show that the new method performs on a par with the original method when the latter is applicable. However, there are cases in which the method of Trebst et al. becomes inapplicable, depending on the chosen order parameter and on the employed Monte Carlo moves. With a practical example, we demonstrate that our method can naturally handle these cases, thus being more robust than the original one. Finally, we find an interesting correspondence between the kind of approach dealt with here and the committor analysis of reaction coordinates, which is another topic of rising interest in the field of molecular simulation.

Abreu, Charlles R. A.

2009-10-01

68

Application of Monte Carlo methods in tomotherapy and radiation biophysics

NASA Astrophysics Data System (ADS)

Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The MC-calculated percentage depth dose and beam profiles computed using the source model agree within 2% of measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University). 
Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published experimental and theoretical studies for 60Co gamma-rays and low-energy x-rays. The reported studies provide new information about the potential biological consequences of diagnostic x-rays and selected gamma-emitting radioisotopes used in brachytherapy for the treatment of cancer. The proposed methodology is computationally efficient and may also be useful in proton therapy, space applications or internal dosimetry.

Hsiao, Ya-Yun

69

ITER Neutronics Modeling Using Hybrid Monte Carlo/Deterministic and CAD-Based Monte Carlo Methods

The immense size and complex geometry of the ITER experimental fusion reactor require the development of special techniques that can accurately and efficiently perform neutronics simulations with minimal human effort. This paper shows the effect of the hybrid Monte Carlo (MC)/deterministic techniques - Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) - in enhancing the efficiency of the neutronics modeling of ITER and demonstrates the applicability of coupling these methods with computer-aided-design-based MC. Three quantities were calculated in this analysis: the total nuclear heating in the inboard leg of the toroidal field coils (TFCs), the prompt dose outside the biological shield, and the total neutron and gamma fluxes over a mesh tally covering the entire reactor. The use of FW-CADIS in estimating the nuclear heating in the inboard TFCs resulted in a factor of ~ 275 increase in the MC figure of merit (FOM) compared with analog MC and a factor of ~ 9 compared with the traditional methods of variance reduction. By providing a factor of ~ 21 000 increase in the MC FOM, the radiation dose calculation showed how the CADIS method can be effectively used in the simulation of problems that are practically impossible using analog MC. The total flux calculation demonstrated the ability of FW-CADIS to simultaneously enhance the MC statistical precision throughout the entire ITER geometry. Collectively, these calculations demonstrate the ability of the hybrid techniques to accurately model very challenging shielding problems in reasonable execution times.

Ibrahim, A. [University of Wisconsin; Mosher, Scott W [ORNL; Evans, Thomas M [ORNL; Peplow, Douglas E. [ORNL; Sawan, M. [University of Wisconsin; Wilson, P. [University of Wisconsin; Wagner, John C [ORNL; Heltemes, Thad [University of Wisconsin, Madison

2011-01-01

70

Application of Exchange Monte Carlo Method to Ordering Dynamics

NASA Astrophysics Data System (ADS)

The ordering dynamics in spinodal decomposition is an interesting problem. Especially for the case of a conserved order parameter, it is difficult to determine the late-stage growth law because of the slow dynamics. We apply the exchange Monte Carlo method [1] to the ordering dynamics of the three-state Potts model with a conserved order parameter. Even for the case of a deep quench to low temperatures, we have observed rapid domain growth, demonstrating the efficiency of the exchange Monte Carlo method for the ordering process. Although the exchange dynamics is not considered to be related to a real one, we have found that domain growth is controlled by a simple algebraic growth law, R(t) ~ t^1/3. The value is consistent with a direct simulation [2] for the same model. [1] K. Hukushima and K. Nemoto, J. Phys. Soc. Jpn. 65, 1604 (1996). [2] C. Jeppesen and O. G. Mouritsen, Phys. Rev. B 47, 14724 (1993).
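The exchange (parallel tempering) move itself is simple to state: neighbouring replicas at inverse temperatures β_j < β_{j+1} swap configurations with probability min(1, exp[(β_{j+1} - β_j)(E_{j+1} - E_j)]). Below is a toy sketch on a continuous double-well potential, an illustrative stand-in for (not a reproduction of) the conserved-order-parameter Potts model of the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def V(x):                         # double-well potential, barrier at x = 0
    return (x * x - 1.0) ** 2

betas = np.array([0.2, 1.0, 5.0])   # hot ... cold inverse temperatures
x = np.zeros(len(betas))             # one configuration per replica
cold_samples = []

for sweep in range(4000):
    # local Metropolis update in every replica
    for i, beta in enumerate(betas):
        prop = x[i] + rng.normal(0.0, 0.5)
        if rng.random() < np.exp(-beta * (V(prop) - V(x[i]))):
            x[i] = prop
    # exchange move between a random pair of neighbouring replicas
    j = rng.integers(len(betas) - 1)
    log_acc = (betas[j + 1] - betas[j]) * (V(x[j + 1]) - V(x[j]))
    if rng.random() < np.exp(log_acc):
        x[j], x[j + 1] = x[j + 1], x[j]
    cold_samples.append(x[-1])       # record the coldest replica

cold = np.array(cold_samples)
```

Without the exchange moves, the cold replica would stay trapped in one well; with them, it visits both minima and its sample mean stays near zero.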

Okabe, Yutaka

1997-08-01

71

Adjoint Monte Carlo Methods for Radiation Therapy Treatment Planning

Intensity-modulated radiation therapy is a new technique for administering external beam radiation therapy. This technology modulates the intensity and shape of the treatment beam as a function of source position and patient anatomy. This process of conforming the source to the patient requires the optimization of the independent variables of the source field. In this study, adjoint Monte Carlo methods were used to compute the sensitivity field that corresponds to a prescribed dose distribution. Given these data, linear and nonlinear optimization models were constructed with a simplified geometry to compute an optimized set of beams to deliver a desired dose distribution. It was shown that, for a simple geometric model, adjoint Monte Carlo methods can be used as the basis for inverse radiation therapy treatment planning. By using flux-to-dose conversion factors as adjoint sources, it is possible to develop an influence matrix that provides the sensitivity of the dose at a single point in the patient to all points in the treatment source field. These data may be used to determine an optimized set of treatment beams.
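The influence-matrix step can be illustrated with made-up numbers. In the sketch below, D is a hypothetical dose-influence matrix whose entries would, in the actual method, come from the adjoint calculations; a least-squares fit then recovers nonnegative beam weights for a prescribed dose.

```python
import numpy as np

# Hypothetical 4-voxel, 3-beam influence matrix: D[i, j] is the dose to
# voxel i per unit weight of beam j (illustrative values, not adjoint output).
D = np.array([[0.9, 0.1, 0.0],
              [0.5, 0.5, 0.1],
              [0.1, 0.6, 0.4],
              [0.0, 0.2, 0.8]])
prescribed = np.array([1.0, 1.1, 1.0, 0.9])   # desired dose per voxel

# Least-squares beam weights, clipped so weights stay physically nonnegative.
w, *_ = np.linalg.lstsq(D, prescribed, rcond=None)
w = np.clip(w, 0.0, None)
delivered = D @ w
```

Real treatment-planning systems replace this unconstrained solve with constrained linear or nonlinear optimization, as the abstract describes.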

Kowalok, M.; Henderson, D.L.; Mackie, T.R.

2001-06-17

72

The derivation of Particle Monte Carlo methods for plasma modeling from transport equations

We analyze here in some detail, the derivation of the Particle and Monte Carlo methods of plasma simulation, such as Particle in Cell (PIC), Monte Carlo (MC) and Particle in Cell \\/ Monte Carlo (PIC\\/MC) from formal manipulation of transport equations.

Savino Longo

2008-01-01

73

Test of the Monte Carlo method for the orbital evolution of short-period comets

NASA Astrophysics Data System (ADS)

The use of Monte Carlo modeling for short-period comets is discussed. Comparative time evolutions of the exact versus Monte Carlo mappings are presented. It is shown that the Monte Carlo method should be restricted to fully chaotic regimes where parasitic diffusion is insignificant.

Baille, Ph.; Froeschle, Cl.

1990-08-01

74

Quasi-Monte Carlo Methods in Computer Graphics, Part I: The QMC-Buffer

Monte Carlo integration is often used for antialiasing in rendering processes. Due to low sampling rates, only expected error estimates can be stated, and the variance can be high. In this article quasi-Monte Carlo methods are presented, achieving a guaranteed upper error bound and a convergence rate essentially as fast as usual Monte Carlo.
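The deterministic error behaviour can be sketched in one dimension with the van der Corput sequence and the test integrand f(x) = x^2 (assumptions of this illustration, not the rendering setting of the article); by the Koksma-Hlawka inequality, the QMC error is bounded by the variation of f times the star discrepancy of the point set.

```python
import random

def van_der_corput(n, base=2):
    """n-th element of the van der Corput low-discrepancy sequence:
    the digits of n in `base`, mirrored about the radix point."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

N = 1024
qmc_est = sum(van_der_corput(i) ** 2 for i in range(1, N + 1)) / N
qmc_err = abs(qmc_est - 1.0 / 3.0)   # integral of x^2 over [0, 1] is 1/3

random.seed(0)
mc_est = sum(random.random() ** 2 for _ in range(N)) / N
mc_err = abs(mc_est - 1.0 / 3.0)     # plain MC: typical error ~ sigma / sqrt(N)
```

The QMC error here is bounded deterministically (roughly log N / N), whereas the plain MC figure is only a probabilistic estimate.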

Stefan Heinrich; Alexander Keller

1994-01-01

75

Continuous-time Monte Carlo methods for quantum impurity models

NASA Astrophysics Data System (ADS)

Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. They are basic to nanoscience as representations of quantum dots and molecular conductors and play an increasingly important role in the theory of correlated electron materials as auxiliary problems whose solution gives the dynamical mean-field approximation to the self-energy and local correlation functions. These applications require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms reviewed in this article meet this challenge. Derivations and descriptions of the algorithms are presented in enough detail to allow other workers to write their own implementations; we discuss the strengths and weaknesses of the methods, summarize the problems to which the new methods have been successfully applied, and outline prospects for future applications.

Gull, Emanuel; Millis, Andrew J.; Lichtenstein, Alexander I.; Rubtsov, Alexey N.; Troyer, Matthias; Werner, Philipp

2011-04-01

76

A Monte Carlo Method for Calculating Initiation Probability

A Monte Carlo method for calculating the probability of initiating a self-sustaining neutron chain reaction has been developed. In contrast to deterministic codes which solve a non-linear, adjoint form of the Boltzmann equation to calculate initiation probability, this new method solves the forward (standard) form of the equation using a modified source calculation technique. Results from this new method are compared with results obtained from several deterministic codes for a suite of historical test problems. The level of agreement between these code predictions is quite good, considering the use of different numerical techniques and nuclear data. A set of modifications to the historical test problems has also been developed which reduces the impact of neutron source ambiguities on the calculated probabilities.
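The forward (standard) approach can be caricatured with a branching-process toy model. Everything below is an assumption of this sketch, not the code of the abstract: Poisson neutron multiplicity with mean 1.5, and "initiation" declared once the chain reaches 200 neutrons. The analytic benchmark is 1 - q, where the extinction probability q solves q = exp(λ(q - 1)).

```python
import math
import random

random.seed(1)

LAM = 1.5     # assumed mean neutron multiplicity per fission (supercritical)
CAP = 200     # population at which the chain is declared self-sustaining

def poisson(lam):
    """Knuth's product-of-uniforms Poisson sampler."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def chain_initiates():
    """Simulate one chain generation by generation until it dies or caps."""
    pop = 1
    while 0 < pop < CAP:
        pop = sum(poisson(LAM) for _ in range(pop))
    return pop >= CAP

# Analytic benchmark: extinction probability q solves q = exp(LAM*(q - 1)).
q = 0.5
for _ in range(200):
    q = math.exp(LAM * (q - 1.0))
p_exact = 1.0 - q

n = 4000
p_mc = sum(chain_initiates() for _ in range(n)) / n
```

A chain that reaches 200 neutrons with mean multiplicity 1.5 dies out later with probability below q^200, so the cap introduces negligible bias.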

Greenman, G M; Procassini, R J; Clouse, C J

2007-03-05

77

Adaptive Monte Carlo methods for matrix equations with applications

NASA Astrophysics Data System (ADS)

This paper discusses empirical studies with both the adaptive correlated sequential sampling method and the adaptive importance sampling method, which can be used in solving matrix and integral equations. Both methods achieve geometric convergence (provided the number of random walks per stage is large enough) in the sense that e_ν ≤ c λ^ν, where e_ν is the error at stage ν, λ ∈ (0,1) is a constant, and c > 0 is also a constant. Thus, both methods converge much faster than the conventional Monte Carlo method. Our extensive numerical test results show that the adaptive importance sampling method converges faster than the adaptive correlated sequential sampling method, even with many fewer random walks per stage for the same problem. The methods can be applied to problems involving large scale matrix equations with non-sparse coefficient matrices. We also provide an application of the adaptive importance sampling method to the numerical solution of integral equations, where the integral equations are converted into matrix equations (with order up to 8192×8192) after discretization. By using Niederreiter's sequence, instead of a pseudo-random sequence, when generating the nodal point set used in discretizing the phase space Γ, we find that the average absolute errors or relative errors at nodal points can be reduced by a factor of more than one hundred.
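For context, the non-adaptive baseline that such schemes improve on can be sketched as a forward random-walk estimator for x = Hx + b, using uniform transitions with weight corrections; the small matrix below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

H = np.array([[0.1, 0.2, 0.0],
              [0.0, 0.1, 0.3],
              [0.2, 0.0, 0.1]])   # spectral radius < 1, so x = Hx + b converges
b = np.array([1.0, 2.0, 0.5])
exact = np.linalg.solve(np.eye(3) - H, b)

def walk_estimate(i, n_walks=20000, p_absorb=0.3):
    """Score the Neumann series x_i = b_i + (Hb)_i + (H^2 b)_i + ... by random
    walks: uniform transitions, geometric absorption, and weight correction."""
    m = len(b)
    total = 0.0
    for _ in range(n_walks):
        state, w, score = i, 1.0, b[i]
        while rng.random() > p_absorb:
            nxt = int(rng.integers(m))
            w *= H[state, nxt] * m / (1.0 - p_absorb)   # undo the sampling bias
            state = nxt
            score += w * b[state]
            if w == 0.0:           # walk entered a zero entry; nothing more to score
                break
        total += score
    return total / n_walks

mc = np.array([walk_estimate(i) for i in range(3)])
```

The adaptive methods of the abstract reuse information from earlier stages to reshape these walks, which is what buys the geometric convergence.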

Lai, Yongzeng

2009-09-01

78

New Monte Carlo method for the self-avoiding walk

NASA Astrophysics Data System (ADS)

We introduce a new Monte Carlo algorithm for the self-avoiding walk (SAW), and show that it is particularly efficient in the critical region (long chains). We also introduce new and more efficient statistical techniques. We employ these methods to extract numerical estimates for the critical parameters of the SAW on the square lattice. We find μ = 2.63820 ± 0.00004 ± 0.00030, γ = 1.352 ± 0.006 ± 0.025, ν = 0.7590 ± 0.0062 ± 0.0042, where the first error bar represents systematic error due to corrections to scaling (subjective 95% confidence limits) and the second bar represents statistical error (classical 95% confidence limits). These results are based on SAWs of average length ≈ 166, using 340 hours CPU time on a CDC Cyber 170-730. We compare our results to previous work and indicate some directions for future research.
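As a baseline for what such algorithms sample, short self-avoiding walks on the square lattice can be enumerated exactly; the ratio c_n/c_{n-1} of walk counts approaches the connective constant μ ≈ 2.638 estimated above. A brute-force sketch:

```python
def count_saws(n):
    """Exactly enumerate n-step self-avoiding walks on the square lattice,
    starting from the origin (all four directions counted separately)."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def extend(pos, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        for dx, dy in steps:
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt not in visited:        # self-avoidance check
                visited.add(nxt)
                total += extend(nxt, visited, remaining - 1)
                visited.remove(nxt)
        return total

    return extend((0, 0), {(0, 0)}, n)

counts = [count_saws(n) for n in range(1, 7)]   # c_1 .. c_6
```

Enumeration grows as roughly μ^n, which is exactly why Monte Carlo methods such as the one in this abstract are needed for long chains.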

Berretti, Alberto; Sokal, Alan D.

1985-08-01

79

Grand-canonical Monte Carlo method for Donnan equilibria

NASA Astrophysics Data System (ADS)

We present a method that enables the direct simulation of Donnan equilibria. The method is based on a grand-canonical Monte Carlo scheme that properly accounts for the unequal partitioning of small ions on the two sides of a semipermeable membrane, and can be used to determine the Donnan electrochemical potential, osmotic pressure, and other system properties. Positive and negative ions are considered separately in the grand-canonical moves. This violates instantaneous charge neutrality, which is usually considered a prerequisite for simulations using the Ewald sum to compute the long-range charge-charge interactions. In this work, we show that if the system is neutral only in an average sense, it is still possible to get reliable results in grand-canonical simulations of electrolytes performed with Ewald summation of electrostatic interactions. We compare our Donnan method with a theory that accounts for differential partitioning of the salt, and find excellent agreement for the electrochemical potential, the osmotic pressure, and the salt concentrations on the two sides. We also compare our method with experimental results for a system of charged colloids confined by a semipermeable membrane and to a constant-NVT simulation method, which does not account for salt partitioning. Our results for the Donnan potential are much closer to the experimental results than the constant-NVT method, highlighting the important effect of salt partitioning on the Donnan potential.

Barr, S. A.; Panagiotopoulos, A. Z.

2012-07-01

80

Micromacro simulation of sintering process by coupling Monte Carlo and finite element methods

A micromacro method for simulating a sintering process of ceramic powder compacts based on the Monte Carlo and finite element methods is proposed. Macroscopic non-uniform shrinkage during the sintering is calculated by the viscoplastic finite element method. In the microscopic approach using the Monte Carlo method, powder particles and pores among the particles are divided into many cells, and the

K Mori; H Matsubara; N Noguchi

2004-01-01

81

Monte Carlo N-particle simulation of neutron-based sterilisation of anthrax contamination

Objective To simulate the neutron-based sterilisation of anthrax contamination by Monte Carlo N-particle (MCNP) 4C code. Methods Neutrons are elementary particles that have no charge. They are 20 times more effective than electrons or γ-rays in killing anthrax spores on surfaces and inside closed containers. Neutrons emitted from a 252Cf neutron source are in the 100 keV to 2 MeV energy range. A 2.5 MeV DD neutron generator can create neutrons at up to 10^13 n s^-1 with current technology. All these enable an effective and low-cost method of killing anthrax spores. Results There is no effect on neutron energy deposition on the anthrax sample when using a reflector that is thicker than its saturation thickness. Among all three reflecting materials tested in the MCNP simulation, paraffin is the best because it has the thinnest saturation thickness and is easy to machine. The MCNP radiation dose and fluence simulation calculation also showed that the MCNP-simulated neutron fluence that is needed to kill the anthrax spores agrees with previous analytical estimations very well. Conclusion The MCNP simulation indicates that a 10 min neutron irradiation from a 0.5 g 252Cf neutron source or a 1 min neutron irradiation from a 2.5 MeV DD neutron generator may kill all anthrax spores in a sample. This is a promising result because a 2.5 MeV DD neutron generator output > 10^13 n s^-1 should be attainable in the near future. This indicates that we could use a DD neutron generator to sterilise anthrax contamination within several seconds.

Liu, B; Xu, J; Liu, T; Ouyang, X

2012-01-01

82

Underwater Optical Wireless Channel Modeling Using Monte-Carlo Method

NASA Astrophysics Data System (ADS)

At present, there is a lot of interest in the functioning of the marine environment. Unmanned or Autonomous Underwater Vehicles (UUVs or AUVs) are used in the exploration of underwater resources, pollution monitoring, disaster prevention, etc. Underwater, where radio waves do not propagate, acoustic communication is used. However, underwater communication is moving towards optical communication, which has higher bandwidth than acoustic communication but comparatively shorter range. Underwater Optical Wireless Communication (OWC) is mainly affected by the absorption and scattering of the optical signal. In coastal waters, both inherent and apparent optical properties (IOPs and AOPs) are influenced by a wide array of physical, biological and chemical processes leading to optical variability. Scattering has two effects: attenuation of the signal and Inter-Symbol Interference (ISI); ISI is, however, ignored in the present paper. Therefore, in order to have an efficient underwater OWC link it is necessary to model the channel efficiently. In this paper, the underwater optical channel is modeled using the Monte-Carlo method, which provides the most general and most flexible technique for numerically solving the equations of radiative transfer. The attenuation coefficient of the light signal is studied as a function of the absorption (a) and scattering (b) coefficients. It has been observed that for pure sea water and for low-chlorophyll conditions the blue wavelength is absorbed least, whereas for a chlorophyll-rich environment the red wavelength is absorbed less than the blue and green wavelengths.
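For the ballistic (unscattered) photons, the Monte Carlo channel model reduces to Beer-Lambert decay with total coefficient c = a + b, which makes a convenient sanity check. The coefficients below are illustrative assumptions, not the paper's values:

```python
import math
import random

random.seed(7)

a, b = 0.053, 0.025   # assumed absorption/scattering coefficients (1/m)
c = a + b             # total attenuation coefficient
depth = 20.0          # link range in metres

n = 100000
unscattered = 0
for _ in range(n):
    # distance to the first interaction is exponential with rate c
    s = -math.log(1.0 - random.random()) / c
    if s > depth:                 # photon crosses the link untouched
        unscattered += 1

frac = unscattered / n
beer = math.exp(-c * depth)       # Beer-Lambert prediction for ballistic light
```

A full channel model additionally follows the scattered photons (sampling a phase function at each interaction), which is where the Monte Carlo approach earns its keep.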

Saini, P. Sri; Prince, Shanthi

2011-10-01

83

Sequential Monte Carlo Methods to Train Neural Network Models

We discuss a novel strategy for training neural networks using sequential Monte Carlo algorithms and propose a new hybrid gradient descent / sampling importance resampling algorithm (HySIR). In terms of computational time and accuracy, the hybrid SIR is a clear improvement over conventional sequential Monte Carlo techniques. The new algorithm may be viewed as a global optimization strategy that allows us to

João F. G. de Freitas; Mahesan Niranjan; Andrew H. Gee; Arnaud Doucet

2000-01-01

84

Efficient, automated Monte Carlo methods for radiation transport

Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms, based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed

Rong Kong; Martin Ambrose; Jerome Spanier

2008-01-01

85

Uncertainty evaluation in robot calibration by Monte Carlo method

NASA Astrophysics Data System (ADS)

This paper presents a technique for evaluating the calibration uncertainty of a robot arm calibrated by the circle point analysis method. The method developed, based on the probability distribution propagation calculation recommended by the Guide to the Expression of Uncertainty in Measurement and on the Monte Carlo method, makes it possible to calculate the uncertainty in the identification of each single robot parameter, and thus to estimate the robot positioning uncertainty according to its calibration uncertainty, rather than according to a set of single locations and orientations previously defined for a unique set of identified parameters. In addition, this technique makes it possible to establish in advance the best possible conditions for the data capture test used for the identification, namely those that yield the lowest possible calibration uncertainty, by propagating the influence of the variables involved in the data capture process up to the final robot accuracy. Currently, the validity of a robot calibration procedure is generally expressed in terms of position and orientation error in a set of locations and orientations. The method presented is the first evaluation in the literature of that validity in terms of calibration uncertainty over the whole work volume.
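The distribution-propagation idea (in the style of GUM Supplement 1) can be sketched on a deliberately simple measurement model; the model y = x1 + x2 and its input uncertainties are assumptions of this illustration, not the robot-parameter identification of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical measurement model: a length built from two readings, y = x1 + x2,
# with normally distributed input uncertainties.
n = 200000
x1 = rng.normal(100.0, 0.03, n)    # mm, standard uncertainty 0.03 mm
x2 = rng.normal(50.0, 0.04, n)     # mm, standard uncertainty 0.04 mm
y = x1 + x2

u_mc = y.std(ddof=1)               # Monte Carlo standard uncertainty
u_lin = np.hypot(0.03, 0.04)       # linear (GUM) propagation: 0.05 mm
lo, hi = np.percentile(y, [2.5, 97.5])   # 95 % coverage interval
```

For this linear model the Monte Carlo and linear-propagation results coincide; the Monte Carlo route pays off when the model is nonlinear or the inputs are non-Gaussian, as in the calibration problem above.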

Santolaria, J.; Ginés, M.; Vila, L.; Brau, A.; Aguilar, J. J.

2012-04-01

86

Neutronic analysis code for fuel assembly using a vectorized Monte Carlo method

A fuel assembly analysis code, VMONT, in which a multigroup neutron transport calculation is combined with a burnup calculation, has been developed for comprehensive design work. The neutron transport calculation is performed with a vectorized Monte Carlo method that achieves speeds >10 times faster than those of a scalar Monte Carlo method. The validity of the VMONT code is shown through test calculations against continuous energy Monte Carlo calculations and the PROTEUS tight-lattice experiment.

Morimoto, Y.; Maruyama, H.; Ishii, K.; Aoyama, M. (Hitachi Ltd., Ibaraki (Japan). Energy Research Lab.)

1989-12-01

87

An automated variance reduction method for global Monte Carlo neutral particle transport problems

NASA Astrophysics Data System (ADS)

A method to automatically reduce the variance in global neutral particle Monte Carlo problems by using a weight window derived from a deterministic forward solution is presented. This method reduces a global measure of the variance of desired tallies and increases its associated figure of merit. Global deep penetration neutron transport problems present difficulties for analog Monte Carlo. When the scalar flux decreases by many orders of magnitude, so does the number of Monte Carlo particles. This can result in large statistical errors. In conjunction with survival biasing, a weight window is employed which uses splitting and Russian roulette to restrict the symbolic weights of Monte Carlo particles. By establishing a connection between the scalar flux and the weight window, two important concepts are demonstrated. First, such a weight window can be constructed from a deterministic solution of a forward transport problem. Also, the weight window will distribute Monte Carlo particles in such a way to minimize a measure of the global variance. For Implicit Monte Carlo solutions of radiative transfer problems, an inefficient distribution of Monte Carlo particles can result in large statistical errors in front of the Marshak wave and at its leading edge. Again, the global Monte Carlo method is used, which employs a time-dependent weight window derived from a forward deterministic solution. Here, the algorithm is modified to enhance the number of Monte Carlo particles in the wavefront. Simulations show that use of this time-dependent weight window significantly improves the Monte Carlo calculation.
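The population-control idea behind weight windows can be caricatured with splitting alone. In the toy below (a purely absorbing slab of ten mean free paths, an assumption of this sketch), 2-for-1 splitting at each cell keeps particles alive deep in the slab, while weight halving keeps the transmission tally unbiased; the deterministically derived weight window of the dissertation is a systematic version of the same idea.

```python
import math
import random

random.seed(11)

CELLS = 10                 # slab thickness: ten mean free paths, one cell each
P_SURV = math.exp(-1.0)    # probability of crossing one cell without absorption

def split_run(n_source):
    """Transmission tally with 2-for-1 geometric splitting at every cell
    boundary; each split halves the weight, so the tally stays unbiased."""
    tally = 0.0
    for _ in range(n_source):
        bank = [1.0]                          # particle weights entering the slab
        for _ in range(CELLS):
            nxt = []
            for w in bank:
                if random.random() < P_SURV:      # survived this cell
                    nxt += [0.5 * w, 0.5 * w]     # split into two half-weight copies
            bank = nxt
        tally += sum(bank)
    return tally / n_source

est = split_run(50000)
exact = math.exp(-CELLS)   # analytic transmission, about 4.54e-5
```

With analog Monte Carlo, the same 50000 source particles would yield only a couple of transmitted particles, so the relative error would be enormous; the split population stays workable all the way to the far surface.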

Cooper, Marc Andrew

88

Seriation in Paleontological Data Using Markov Chain Monte Carlo Methods

Given a collection of fossil sites with data about the taxa that occur in each site, the task in biochronology is to find good estimates for the ages or ordering of sites. We describe a full probabilistic model for fossil data. The parameters of the model are natural: the ordering of the sites, the origination and extinction times for each taxon, and the probabilities of different types of errors. We show that the posterior distributions of these parameters can be estimated reliably by using Markov chain Monte Carlo techniques. The posterior distributions of the model parameters can be used to answer many different questions about the data, including seriation (finding the best ordering of the sites) and outlier detection. We demonstrate the usefulness of the model and estimation method on synthetic data and on real data on large late Cenozoic mammals. As an example, for the sites with a large number of occurrences of common genera, our methods give orderings whose correlation with geochronologic ages is 0.95.

Puolamaki, Kai; Fortelius, Mikael; Mannila, Heikki

2006-01-01

89

MARKOV CHAIN MONTE CARLO POSTERIOR SAMPLING WITH THE HAMILTONIAN METHOD

The Markov chain Monte Carlo (MCMC) technique provides a means for drawing random samples from a target probability density function (pdf). MCMC allows one to assess the uncertainties in a Bayesian analysis described by a numerically calculated posterior distribution. This paper describes the Hamiltonian MCMC technique, in which a momentum variable is introduced for each parameter of the target pdf. In analogy to a physical system, a Hamiltonian H is defined as a kinetic energy involving the momenta plus a potential energy φ, where φ is minus the logarithm of the target pdf. Hamiltonian dynamics allows one to move along trajectories of constant H, taking large jumps in the parameter space with relatively few evaluations of φ and its gradient. The Hamiltonian algorithm alternates between picking a new momentum vector and following such trajectories. The efficiency of the Hamiltonian method for multidimensional isotropic Gaussian pdfs is shown to remain constant at around 7% for up to several hundred dimensions. The Hamiltonian method handles correlations among the variables much better than the standard Metropolis algorithm. A new test, based on the gradient of φ, is proposed to measure the convergence of the MCMC sequence.

K. HANSON

2001-02-01
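
The alternation described in the record above (pick a fresh momentum, follow a constant-H trajectory, accept or reject) can be sketched as follows. This is an illustrative toy implementation for a unit isotropic Gaussian target, where φ(x) = ||x||²/2; the function names, step size, and trajectory length are assumptions for demonstration, not the paper's code.

```python
import numpy as np

def grad_phi(x):
    return x  # gradient of phi(x) = 0.5*||x||^2 for a unit isotropic Gaussian

def hmc_step(x, rng, step=0.1, n_leapfrog=20):
    """One Hamiltonian step: fresh momentum, leapfrog trajectory, accept/reject."""
    p = rng.standard_normal(x.shape)           # pick a new momentum vector
    x_new, p_new = x.copy(), p.copy()
    p_new -= 0.5 * step * grad_phi(x_new)      # half kick
    for _ in range(n_leapfrog - 1):
        x_new += step * p_new                  # drift
        p_new -= step * grad_phi(x_new)        # kick
    x_new += step * p_new
    p_new -= 0.5 * step * grad_phi(x_new)      # final half kick
    # H = phi + kinetic energy; accept with probability exp(H_old - H_new)
    h_old = 0.5 * x.dot(x) + 0.5 * p.dot(p)
    h_new = 0.5 * x_new.dot(x_new) + 0.5 * p_new.dot(p_new)
    return x_new if rng.random() < np.exp(h_old - h_new) else x

rng = np.random.default_rng(0)
x = np.zeros(50)                               # 50-dimensional target
samples = []
for _ in range(500):
    x = hmc_step(x, rng)
    samples.append(x.copy())
print(np.mean(np.var(np.array(samples[100:]), axis=0)))  # near 1 for a unit Gaussian
```

Because H is nearly conserved along the leapfrog trajectory, acceptance stays high even though each proposal moves all 50 coordinates a large distance, which is the source of the dimension-independent efficiency the abstract reports.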

90

Noise suppression and barrier crossing in Monte Carlo image-restoration method

NASA Astrophysics Data System (ADS)

In this paper, an efficient approach for image restoration of noisy data is suggested. This approach combines the Monte Carlo image restoration technique and the Morrison noise removal method. The mean squared error (MSE) criterion is used to test the performance of the Monte Carlo method with and without prior application of the Morrison noise removal method. The methods for facilitating the Monte Carlo walk to the brightest regions of the image are discussed and a new approach is suggested. It is shown that the Monte Carlo technique is potentially very fast with good resolution. The Morrison noise removal method smooths the data at the first iteration and proceeds to restore the data back to its original noisy form at later iterations. To achieve some noise suppression, one can stop the Morrison iterations before they converge to the original noisy form. The Monte Carlo method is then applied to the noise-suppressed data.

Amini, Abolfazl M.

1995-03-01

91

Predicting three-body abrasive wear using Monte Carlo methods

Predicting the wear of materials under three-body abrasion is a challenging task, since three-body abrasion is more complicated than two-body abrasion. In this paper, a Monte Carlo model for simulating the plastic deformation wear rate, i.e. the low-cycle fatigue wear rate, is proposed. The Manson-Coffin formula and the Palmgren-Miner linear accumulated-damage principle were used in the model as well as the Monte Carlo

Liang Fang; Weimin Liu; Daoshan Du; Xiaofeng Zhang; Qunji Xue

2004-01-01

92

Monte Carlo simulation is the method of choice when multidimensional problems are discussed (e.g., when the outcome depends on several variables or risk factors). The method was invented by American scientists in 1940, when it was used to simulate the trajectory of a neutron in uranium or plutonium. In the Monte Carlo method, the real process is replaced by an artificial one. To obtain accurate

Ciprian Apostol

2009-01-01

93

Quasi-Monte Carlo Methods in Computer Graphics, Part II: The Radiance Equation

The radiance equation, which describes the global illumination problem in computer graphics, is a high dimensional integral equation. Estimates of the solution are usually computed on the basis of Monte Carlo methods. In this paper we propose and investigate quasi-Monte Carlo methods, which means that we replace (pseudo-) random samples by low discrepancy sequences, yielding deterministic algorithms. We carry

Stefan Heinrich; Alexander Keller

1994-01-01

94

Spike inference from calcium imaging using sequential Monte Carlo methods.

As recent advances in calcium sensing technologies facilitate simultaneously imaging action potentials in neuronal populations, complementary analytical tools must also be developed to maximize the utility of this experimental paradigm. Although the observations here are fluorescence movies, the signals of interest--spike trains and/or time-varying intracellular calcium concentrations--are hidden. Inferring these hidden signals is often problematic due to noise, nonlinearities, slow imaging rate, and unknown biophysical parameters. We overcome these difficulties by developing sequential Monte Carlo methods (particle filters) based on biophysical models of spiking, calcium dynamics, and fluorescence. We show that even in simple cases, the particle filters outperform the optimal linear (i.e., Wiener) filter, both by obtaining better estimates and by providing error bars. We then relax a number of our model assumptions to incorporate nonlinear saturation of the fluorescence signal, as well as external stimulus and spike history dependence (e.g., refractoriness) of the spike trains. Using both simulations and in vitro fluorescence observations, we demonstrate temporal superresolution by inferring when within a frame each spike occurs. Furthermore, the model parameters may be estimated using expectation maximization with only a very limited amount of data (e.g., approximately 5-10 s or 5-40 spikes), without the requirement of any simultaneous electrophysiology or imaging experiments. PMID:19619479

Vogelstein, Joshua T; Watson, Brendon O; Packer, Adam M; Yuste, Rafael; Jedynak, Bruno; Paninski, Liam

2009-07-22
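
A sequential Monte Carlo (particle) filter of the kind described above can be sketched on a toy linear-Gaussian state-space model, which stands in for the biophysical spiking/calcium model of the record; the model, parameters, and variable names below are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a hidden AR(1) state x and noisy observations y (toy model).
T, a, q, r = 200, 0.95, 0.1, 0.5
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + np.sqrt(q) * rng.standard_normal()
y = x + np.sqrt(r) * rng.standard_normal(T)

# Bootstrap particle filter: propagate, weight by likelihood, resample.
N = 1000
particles = np.sqrt(q) * rng.standard_normal(N)
est = np.zeros(T)
for t in range(T):
    particles = a * particles + np.sqrt(q) * rng.standard_normal(N)  # propagate
    logw = -0.5 * (y[t] - particles) ** 2 / r                        # weight
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = w @ particles                                           # posterior mean
    idx = rng.choice(N, size=N, p=w)                                  # resample
    particles = particles[idx]

print(np.mean((est - x) ** 2))  # filter MSE, well below the observation noise r
```

The same propagate/weight/resample loop applies with nonlinear, non-Gaussian dynamics (spiking, saturation), which is where particle filters gain over the Wiener filter mentioned in the abstract.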

95

Quantum Monte Carlo methods and lithium cluster properties

Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density dependent parameters, and is shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively; in good agreement with experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.

Owen, R.K.

1990-12-01

96

Quantum Monte Carlo methods and lithium cluster properties. [Atomic clusters

Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density dependent parameters, and is shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) (0.1981), 0.1895(9) (0.1874(4)), 0.1530(34) (0.1599(73)), 0.1664(37) (0.1724(110)), 0.1613(43) (0.1675(110)) Hartrees for lithium clusters n = 1 through 5, respectively; in good agreement with experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) (0.0203(12)), 0.0188(10) (0.0220(21)), 0.0247(8) (0.0310(12)), 0.0253(8) (0.0351(8)) Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.

Owen, R.K.

1990-12-01

97

Markov chain Monte Carlo posterior sampling with the Hamiltonian method.

A major advantage of Bayesian data analysis is that it provides a characterization of the uncertainty in the model parameters estimated from a given set of measurements in the form of a posterior probability distribution. When the analysis involves a complicated physical phenomenon, the posterior may not be available in analytic form, but only calculable by means of a simulation code. In such cases, the uncertainty in inferred model parameters requires characterization of a calculated functional. An appealing way to explore the posterior, and hence characterize the uncertainty, is to employ the Markov Chain Monte Carlo technique. The goal of MCMC is to generate a random sequence of parameter samples x from a target pdf (probability density function), π(x). In Bayesian analysis, this sequence corresponds to a set of model realizations that follow the posterior distribution. There are two basic MCMC techniques. In Gibbs sampling, typically one parameter is drawn from the conditional pdf at a time, holding all others fixed. In the Metropolis algorithm, all the parameters can be varied at once. The parameter vector is perturbed from the current sequence point by adding a trial step drawn randomly from a symmetric pdf. The trial position is either accepted or rejected on the basis of the probability at the trial position relative to the current one. The Metropolis algorithm is often employed because of its simplicity. The aim of this work is to develop MCMC methods that are useful for large numbers of parameters, n, say hundreds or more. In this regime the Metropolis algorithm can be unsuitable, because its efficiency drops as 0.3/n. The efficiency is defined as the reciprocal of the number of steps in the sequence needed to effectively provide a statistically independent sample from π.

Hanson, Kenneth M.

2001-01-01
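
The symmetric-trial-step Metropolis algorithm described in this record can be sketched as follows, for an n-dimensional isotropic Gaussian target; the step-size scaling 2.4/sqrt(n) is the classic random-walk tuning rule and everything here is an illustrative assumption, not the paper's code.

```python
import numpy as np

def metropolis(log_pi, x0, n_steps, step_size, rng):
    """Symmetric random-walk Metropolis: all parameters perturbed at once."""
    x, logp = x0.copy(), log_pi(x0)
    accepted = 0
    for _ in range(n_steps):
        trial = x + step_size * rng.standard_normal(x.shape)  # symmetric proposal
        logp_trial = log_pi(trial)
        # accept with probability min(1, pi(trial)/pi(x))
        if rng.random() < np.exp(min(0.0, logp_trial - logp)):
            x, logp = trial, logp_trial
            accepted += 1
    return x, accepted / n_steps

rng = np.random.default_rng(0)
n = 10
log_pi = lambda x: -0.5 * x.dot(x)  # unit Gaussian target, up to a constant
x, acc = metropolis(log_pi, np.zeros(n), 5000, 2.4 / np.sqrt(n), rng)
print(f"acceptance rate: {acc:.2f}")
```

The 2.4/sqrt(n) shrinkage of the step size is exactly the effect the abstract quantifies: each accepted move covers less ground as n grows, so the efficiency falls roughly as 1/n.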

98

Integration of the Lippmann-Schwinger equation with the Monte Carlo method

A Monte Carlo method to integrate the Lippmann-Schwinger equation for elastic scattering is presented. Advantages and limitations of this algorithm are compared with traditional methods of computation.

Salomon, M.

1983-12-01

99

NASA Astrophysics Data System (ADS)

The diagrammatic Monte Carlo (DiagMC) method is a numerical technique which samples the entire diagrammatic series of the Green's function in quantum many-body systems. In this work, we incorporate the flat histogram principle in the diagrammatic Monte Carlo method, and we term the improved version the flat histogram diagrammatic Monte Carlo method. We demonstrate the superiority of this method over the standard DiagMC in extracting the long-imaginary-time behavior of the Green's function, without incorporating any a priori knowledge about this function, by applying the technique to the polaron problem.

Diamantis, Nikolaos G.; Manousakis, Efstratios

2013-10-01

100

New Monte Carlo method for planar Poisson Voronoi cells

By a new Monte Carlo algorithm, we evaluate the sidedness probability pn of a planar Poisson-Voronoi cell in the range 3 <= n <= 1600. The algorithm is developed on the basis of earlier theoretical work; it exploits, in particular, the known asymptotic behaviour of pn as n → ∞. Our pn values all have between four and six significant

H. J. Hilhorst

2007-01-01

101

New Monte Carlo method for planar Poisson-Voronoi cells

By a new Monte Carlo algorithm, we evaluate the sidedness probability pn of a planar Poisson-Voronoi cell in the range 3 <= n <= 1600. The algorithm is developed on the basis of earlier theoretical work; it exploits, in particular, the known asymptotic behaviour of pn as n → ∞. Our pn values all have between four and six significant

H. J. Hilhorst

2007-01-01

102

A Monte Carlo method for calculating orbits of comets

The present work is divided into two stages: 1. By using large numbers (several millions) of accurate orbit integrations with the K-S regularization, probability distributions for changes in the orbital elements of comets during encounters with planets are evaluated. 2. These distributions are used in a Monte Carlo simulation scheme which follows the evolution of orbits under repeated close encounters.

J. Q. Zheng; M. J. Valtonen; S. Mikkola; J. J. Matese; P. G. Whitman; H. Rickman

1994-01-01

103

Comparison of number-theoretic and Monte Carlo methods in combat simulation

NASA Astrophysics Data System (ADS)

Number-theoretic methods (NTMs), or quasi-Monte Carlo methods, are a class of techniques for generating points of the uniform distribution in the s-dimensional unit cube. The NTM is a special method that combines number theory and numerical analysis. The uniformly scattered set of points in the unit cube obtained by an NTM is usually called a set of quasi-random numbers or a number-theoretic net (NT-net), since it may be used instead of random numbers in many statistical problems. An NT-net can be defined as a set of representative points of the uniform distribution. There are different criteria for measuring uniformity and different methods for generating NT-nets. Theoretically, the rate of convergence of NTMs is better than that of the Monte Carlo method. High-resolution force-on-force combat simulation is usually modeled as a stochastic Monte Carlo type model and a discrete event system. In high-resolution Monte Carlo combat simulations a large number of random numbers has to be generated. In Monte Carlo type combat simulation models every unit has certain probabilities of detecting and affecting each enemy unit at each time interval. Usually the Monte Carlo method is used to calculate the expected value of some property of the model; this is a matter of numerical integration with the Monte Carlo method. In this paper the effectiveness of NTMs is compared with the Monte Carlo method in a simulated high-resolution combat simulation case. Some methods for generating NT-nets are introduced. The estimates of NTM and Monte Carlo simulations are studied by comparing statistical properties of the estimates.

Helle, Sami

2003-09-01
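
The contrast the abstract draws between an NT-net and pseudo-random points can be sketched with a two-dimensional Halton net, one standard low-discrepancy construction; the integrand and sample counts below are illustrative assumptions, not the paper's experiment.

```python
import numpy as np

def halton(n, base):
    """Van der Corput sequence in the given base: one coordinate of a Halton net."""
    seq = np.zeros(n)
    for i in range(n):
        f, k, x = 1.0, i + 1, 0.0
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

f = lambda u, v: u * v  # test integrand; exact integral over [0,1]^2 is 1/4
n = 4096

# Quasi-Monte Carlo estimate from a 2-D Halton net (bases 2 and 3).
qmc_est = np.mean(f(halton(n, 2), halton(n, 3)))

# Plain Monte Carlo estimate from pseudo-random points.
rng = np.random.default_rng(0)
mc_est = np.mean(f(rng.random(n), rng.random(n)))

print(abs(qmc_est - 0.25), abs(mc_est - 0.25))  # QMC error is typically far smaller
```

For smooth integrands the NT-net error shrinks roughly like (log n)^s/n, versus the 1/sqrt(n) rate of plain Monte Carlo, which is the theoretical advantage the abstract refers to.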

104

This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling

John C Wagner; Scott W Mosher; Thomas M Evans; Douglas E. Peplow; John A Turner

2011-01-01

105

This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform "real" commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling

John C Wagner; Scott W Mosher; Thomas M Evans; Douglas E. Peplow; John A Turner

2010-01-01

106

Efficient, automated Monte Carlo methods for radiation transport

Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k+1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed.

Kong Rong; Ambrose, Martin [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States); Spanier, Jerome [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States); Beckman Laser Institute and Medical Clinic, University of California, 1002 Health Science Road E., Irvine, CA 92612 (United States)], E-mail: jspanier@uci.edu

2008-11-20

107

Hybrid Monte Carlo-Deterministic Methods for Nuclear Reactor-Related Criticality Calculations

The overall goal of this project is to develop, implement, and test new Hybrid Monte Carlo-deterministic (or simply Hybrid) methods for the more efficient and more accurate calculation of nuclear engineering criticality problems. These new methods will make use of two (philosophically and practically) very different techniques - the Monte Carlo technique, and the deterministic technique - which have been developed completely independently during the past 50 years. The concept of this proposal is to merge these two approaches and develop fundamentally new computational techniques that enhance the strengths of the individual Monte Carlo and deterministic approaches, while minimizing their weaknesses.

Edward W. Larson

2004-02-17

108

Implicit Monte Carlo Methods and Non-Equilibrium Marshak Wave Radiative Transport.

National Technical Information Service (NTIS)

Two enhancements to the Fleck implicit Monte Carlo method for radiative transport are described, for use in transparent and opaque media respectively. The first introduces a spectral mean cross section, which applies to pseudoscattering in transparent reg...

J. E. Lynch

1985-01-01

109

Comparison of dual-window and convolution scatter correction techniques using the Monte Carlo method

In order to improve the accuracy of SPECT measurements, correction methods have to be used. Two different scatter correction techniques, the dual-window (DW) technique and the convolution (CV) technique, were compared using projection data simulated by the Monte Carlo method. Comparison with measured data was also made to validate the accuracy of the Monte Carlo code. The main goal was to investigate

M. Ljungberg; P. Msaki; S.-E. Strand

1990-01-01

110

Digitally reconstructed radiograph generation by an adaptive Monte Carlo method

NASA Astrophysics Data System (ADS)

Digitally reconstructed radiograph (DRR) generation is an important step in several medical imaging applications such as 2D-3D image registration, where the generation of DRR is a rate-limiting step. We present a novel DRR generation technique, called the adaptive Monte Carlo volume rendering (AMCVR) algorithm. It is based on the conventional Monte Carlo volume rendering (MCVR) technique that is very efficient for rendering large medical datasets. In contrast to the MCVR, the AMCVR does not produce sample points by sampling directly in the entire volume domain. Instead, it adaptively divides the entire volume domain into sub-domains using importance separation and then performs sampling in these sub-domains. As a result, the AMCVR produces almost the same image quality as that obtained with the MCVR while using only half the samples, and increases the projection speed by a factor of 2. Moreover, the AMCVR is suitable for fast memory addressing, which further improves processing speed. Independent of the size of medical datasets, the AMCVR allows for achieving a frame rate of about 15 Hz on a 2.8 GHz Pentium 4 PC while generating reasonably good quality DRR.

Li, Xiaoliang; Yang, Jie; Zhu, Yuemin

2006-06-01
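
The importance-separation idea behind the adaptive scheme above (divide the domain into sub-domains, then concentrate samples where the contribution is large) can be sketched on a one-dimensional integral; the integrand, sub-domain count, and sample budgets are illustrative assumptions, not the AMCVR code.

```python
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: np.exp(-50.0 * (x - 0.8) ** 2)  # sharply peaked integrand on [0, 1]

edges = np.linspace(0.0, 1.0, 11)  # ten equal sub-domains
pilot = 20                          # pilot samples per sub-domain
total = 2000                        # main sampling budget

# Pilot pass: rough estimate of each sub-domain's share of the integral.
contrib = np.array([
    np.mean(f(rng.uniform(a, b, pilot))) * (b - a)
    for a, b in zip(edges[:-1], edges[1:])
])
alloc = np.maximum((total * contrib / contrib.sum()).astype(int), 1)

# Main pass: sample each sub-domain with its allocated budget.
estimate = sum(
    np.mean(f(rng.uniform(a, b, n))) * (b - a)
    for (a, b), n in zip(zip(edges[:-1], edges[1:]), alloc)
)
print(estimate)  # true value is about 0.245
```

Because most samples land in the two sub-domains around the peak, the variance is far lower than uniform sampling with the same budget would give, which is the mechanism behind the "same quality with half the samples" result in the abstract.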

111

Monte Carlo Criticality Methods and Analysis Capabilities in SCALE

This paper describes the Monte Carlo codes KENO V.a and KENO-VI in SCALE that are primarily used to calculate multiplication factors and flux distributions of fissile systems. Both codes allow explicit geometric representation of the target systems and are used internationally for safety analyses involving fissile materials. KENO V.a has limiting geometric rules such as no intersections and no rotations. These limitations make KENO V.a execute very efficiently and run very fast. On the other hand, KENO-VI allows very complex geometric modeling. Both KENO codes can utilize either continuous-energy or multigroup cross-section data and have been thoroughly verified and validated with ENDF libraries through ENDF/B-VII.0, which has been first distributed with SCALE 6. Development of the Monte Carlo solution technique and solution methodology as applied in both KENO codes is explained in this paper. Available options and proper application of the options and techniques are also discussed. Finally, performance of the codes is demonstrated using published benchmark problems.

Goluoglu, Sedat [ORNL; Petrie Jr, Lester M [ORNL; Dunn, Michael E [ORNL; Hollenbach, Daniel F [ORNL; Rearden, Bradley T [ORNL

2011-01-01

112

Ultracold atoms at unitarity within quantum Monte Carlo methods

Variational and diffusion quantum Monte Carlo (VMC and DMC) calculations of the properties of the zero-temperature fermionic gas at unitarity are reported. Our study differs from earlier ones mainly in that we have constructed more accurate trial wave functions and used a larger system size, we have studied the dependence of the energy on the particle density and well width, and we have achieved much smaller statistical error bars. The correct value of the universal ratio of the energy of the interacting to that of the noninteracting gas, ξ, is still a matter of debate. We find DMC values of ξ of 0.4244(1) with 66 particles and 0.4339(1) with 128 particles. The spherically averaged pair-correlation functions, momentum densities, and one-body density matrices are very similar in VMC and DMC, which suggests that our results for these quantities are very accurate. We find, however, some differences between the VMC and DMC results for the two-body density matrices and condensate fractions, which indicates that these quantities are more sensitive to the quality of the trial wave function. Our best estimate of the condensate fraction of 0.51 is smaller than the values from earlier quantum Monte Carlo calculations.

Morris, Andrew J.; Lopez Rios, P.; Needs, R. J. [Theory of Condensed Matter Group, Cavendish Laboratory, University of Cambridge, J.J. Thomson Avenue, Cambridge CB3 0HE (United Kingdom)

2010-03-15

113

A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT

Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.

Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan [TAPIR, California Institute of Technology, MC 350-17, 1200 E California Blvd., Pasadena, CA 91125 (United States); Burrows, Adam; Dolence, Joshua C. [Department of Astrophysical Sciences, Princeton University, Peyton Hall, Ivy Lane, Princeton, NJ 08544 (United States); Loeffler, Frank; Schnetter, Erik, E-mail: abdik@tapir.caltech.edu [Center for Computation and Technology, Louisiana State University, 216 Johnston Hall, Baton Rouge, LA 70803 (United States)

2012-08-20

114

A New Monte Carlo Method for Time-dependent Neutrino Radiation Transport

NASA Astrophysics Data System (ADS)

Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck & Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.

Abdikamalov, Ernazar; Burrows, Adam; Ott, Christian D.; Löffler, Frank; O'Connor, Evan; Dolence, Joshua C.; Schnetter, Erik

2012-08-01

115

Time-step limits for a Monte Carlo Compton-scattering method

We perform a stability analysis of a Monte Carlo method for simulating the Compton scattering of photons by free electrons in high energy density applications and develop time-step limits that avoid unstable and oscillatory solutions. Implementing this Monte Carlo technique in multiphysics problems typically requires evaluating the material temperature at its beginning-of-time-step value, which can lead to this undesirable behavior. With a set of numerical examples, we demonstrate the efficacy of our time-step limits.

Densmore, Jeffery D [Los Alamos National Laboratory; Warsa, James S [Los Alamos National Laboratory; Lowrie, Robert B [Los Alamos National Laboratory

2009-01-01

116

A scatter correction method for Tl201 images: a Monte Carlo investigation

Results from the application of a modified dual photopeak window (DPW) scatter correction method to Monte Carlo simulated Tl-201 emission images are presented. In the Monte Carlo investigation, individual simulations were performed for six radiation emissions of Tl-201. For each emission, point sources of Tl-201 were imaged at various locations in a water-filled elliptical tub phantom using three energy windows:

George J. Hademenos; Michael A. King; Michael Ljungberg; I. George Zubal; Charles R. Harrell

1992-01-01

117

A scatter correction method for Tl201 images: A Monte Carlo investigation

Results from the application of a modified dual photopeak window scatter correction method to Monte Carlo simulated Tl-201 emission images are presented. In the Monte Carlo investigation, individual simulations are performed for six radiation emissions of Tl-201. For each emission, point sources of Tl-201 are imaged at various locations in a water-filled elliptical tub phantom using three energy windows. The

George J. Hademenos; Michael A. King; Michael Ljungberg; I. George Zubal; Charles R. Harrell

1993-01-01

118

Monte Carlo methods for the in vivo analysis of Cisplatin using x-ray fluorescence

A Monte Carlo method has been used to model the measurement of cisplatin uptake with in vivo X-ray fluorescence. A user-code has been written for the EGS4 Monte Carlo system that incorporates linear polarisation and multiple element fluorescence extensions. The yield of fluorescent photons relative to the mainly Compton-scattered background is computed for our detector arrangement. The detector consists of

R. P. Hugtenburg; J. R. Turner; D. M. Mannering; B. A. Robinson

1998-01-01

119

Dynamic Monte Carlo self-modeling curve resolution method for multicomponent mixtures

A new algorithm for performing self-modeling curve resolution (SMCR) on second-order bilinear data sets is described. The new method, called Dynamic Monte Carlo SMCR (DMC-SMCR), seeks to define boundaries of allowable pure component profiles (spectra, concentrations, etc.) in mixture analysis. The algorithm employs a directed Monte Carlo approach to search for valid solutions with high efficiency. The parameters for the

Marc N. Leger; Peter D. Wentzell

2002-01-01

120

Equivalence of four Monte Carlo methods for photon migration in turbid media.

In the field of photon migration in turbid media, different Monte Carlo methods are usually employed to solve the radiative transfer equation. We consider four different Monte Carlo methods, widely used in the field of tissue optics, that are based on four different ways to build photons' trajectories. We provide both theoretical arguments and numerical results showing the statistical equivalence of the four methods. In the numerical results we compare the temporal point spread functions calculated by the four methods for a wide range of the optical properties in the slab and semi-infinite medium geometry. The convergence of the methods is also briefly discussed. PMID:23201658

Sassaroli, Angelo; Martelli, Fabrizio

2012-10-01
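
One of the standard ways to build photon trajectories in turbid media, of the kind compared in the record above, can be sketched as a weighted random walk in a slab: exponential free paths, isotropic rescattering, and absorption handled by attenuating a photon weight. The geometry, optical coefficients, and weight cutoff below are illustrative assumptions, not the paper's code; a production code would use Russian roulette instead of a hard weight cutoff to stay unbiased.

```python
import numpy as np

rng = np.random.default_rng(2)
mu_s, mu_a, thickness = 10.0, 0.1, 1.0  # scattering/absorption coefficients (1/mm), slab (mm)
n_photons = 5000
transmitted_weight = 0.0

for _ in range(n_photons):
    z, w = 0.0, 1.0
    cos_theta = 1.0                           # launched normally into the slab
    while True:
        step = -np.log(rng.random()) / mu_s   # exponential free path
        z += step * cos_theta
        if z >= thickness:                    # escaped through the far face
            transmitted_weight += w
            break
        if z <= 0.0:                          # escaped back through the entry face
            break
        w *= np.exp(-mu_a * step)             # attenuate weight by absorption
        if w < 1e-4:                          # terminate negligible-weight photons
            break
        cos_theta = 2.0 * rng.random() - 1.0  # isotropic rescattering

print(transmitted_weight / n_photons)         # fraction of weight transmitted
```

The alternative trajectory constructions discussed in the abstract differ in exactly these choices, e.g., sampling the free path from the total attenuation coefficient and splitting the weight at each event, but yield statistically equivalent estimates.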

121

Markov chain Monte Carlo methods for state-space models with point process observations.

This letter considers how a number of modern Markov chain Monte Carlo (MCMC) methods can be applied for parameter estimation and inference in state-space models with point process observations. We quantified the efficiencies of these MCMC methods on synthetic data, and our results suggest that the Riemannian manifold Hamiltonian Monte Carlo method offers the best performance. We further compared such a method with a previously tested variational Bayes method on two experimental data sets. Results indicate similar performance on the large data sets and superior performance on small ones. The work offers an extensive suite of MCMC algorithms evaluated on an important class of models for physiological signal analysis. PMID:22364499

Yuan, Ke; Girolami, Mark; Niranjan, Mahesan

2012-02-24

122

New Monte Carlo method for planar Poisson Voronoi cells

NASA Astrophysics Data System (ADS)

By a new Monte Carlo algorithm, we evaluate the sidedness probability pn of a planar Poisson-Voronoi cell in the range 3 <= n <= 1600. The algorithm is developed on the basis of earlier theoretical work; it exploits, in particular, the known asymptotic behaviour of pn as n → ∞. Our pn values all have between four and six significant digits. Accurate n-dependent averages, second moments and variances are obtained for the cell area and the cell perimeter. The numerical large-n behaviour of these quantities is analysed in terms of an asymptotic power series in n-1. Snapshots are shown of typical occurrences of extremely rare events, implicating cells of up to n = 1600 sides embedded in an ordinary Poisson-Voronoi diagram. We reveal and discuss the characteristic features of such many-sided cells and their immediate environment. Their relevance for observable properties is stressed.

Hilhorst, H. J.

2007-03-01

123

Hybrid Monte-Carlo method for simulating neutron and photon radiography

NASA Astrophysics Data System (ADS)

We present a Hybrid Monte-Carlo method (HMCM) for simulating neutron and photon radiographs. HMCM utilizes the combination of a Monte-Carlo particle simulation for calculating incident film radiation and a statistical post-processing routine to simulate film noise. Since the method relies on MCNP for transport calculations, it is easily generalized to most non-destructive evaluation (NDE) simulations. We verify the method's accuracy through ASTM International's E592-99 publication, Standard Guide to Obtainable Equivalent Penetrameter Sensitivity for Radiography of Steel Plates [1]. Potential uses for the method include characterizing alternative radiological sources and simulating NDE radiographs.

Wang, Han; Tang, Vincent

2013-11-01

124

Calculation of excited-state properties with an auxiliary-field Monte Carlo method

We propose a method for calculating the excited-state properties of a quantum many-fermion system based on the auxiliary-field Monte-Carlo method. We have examined our method with the exactly solvable pairing-force model and the Lipkin model. © 1997 The American Physical Society

Kume, K.; Sato, K.; Umemoto, Y. [Department of Physics, Nara Women's University, Nara 630 (Japan)]

1997-06-01

125

Quasi-Monte Carlo Methods in Financial Engineering: An Equivalence Principle and Dimension Reduction

Quasi-Monte Carlo (QMC) methods are playing an increasingly important role in the pricing of complex financial derivatives. For models in which the prices of the underlying assets are driven by Brownian motions, the efficiency of QMC methods is known to depend crucially on the method of generating the Brownian motions. This paper focuses on the impact of various constructions. While

Xiaoqun Wang; Ian H. Sloan

2008-01-01
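The dependence on the construction mentioned above starts from how a discrete Brownian path is produced from low-discrepancy points. A minimal sketch of the standard step-by-step construction, assuming SciPy's `scipy.stats.qmc.Sobol` generator; the parameters are illustrative only.

```python
import numpy as np
from scipy.stats import qmc, norm

def qmc_brownian_paths(n_paths=256, n_steps=8, T=1.0, seed=0):
    """Generate Brownian paths from scrambled Sobol points using the
    standard step-by-step (incremental) construction: each Sobol
    coordinate is mapped to a normal increment via the inverse CDF."""
    sob = qmc.Sobol(d=n_steps, scramble=True, seed=seed)
    u = sob.random(n_paths)                # uniforms in (0, 1)^d
    z = norm.ppf(u)                        # standard normal draws
    dt = T / n_steps
    increments = np.sqrt(dt) * z
    paths = np.cumsum(increments, axis=1)  # W(t_1), ..., W(t_d)
    return paths

paths = qmc_brownian_paths()
```

Alternative constructions (Brownian bridge, principal components) produce paths with the same distribution but assign the important, large-scale variation to the leading Sobol coordinates, which is what changes QMC efficiency.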

126

Valuing Real Capital Investments Using The Least-Squares Monte Carlo Method

The recently developed least-squares Monte Carlo (LSM) method provides a simple and efficient technique for valuing American-type options. The proposed method is applicable to the cases of compound real options, like other numerical techniques such as finite difference and lattice methods, with the additional advantage of easily handling the cases of multiple uncertain state variables with different and complex

Sabry A. Abdel Sabour; Richard Poulin

2006-01-01
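The LSM mechanics referenced above can be shown on the textbook case: an American put under geometric Brownian motion, stepping backwards and regressing the continuation value on a polynomial basis. A minimal Longstaff-Schwartz sketch with illustrative parameters, not the real-capital valuation model of the paper.

```python
import numpy as np

def lsm_american_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     n_steps=50, n_paths=20000, seed=0):
    """Least-squares Monte Carlo (Longstaff-Schwartz) sketch for an
    American put under GBM. Illustrative parameters only."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Simulate GBM paths.
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(log_paths)
    # Cashflow if never exercised before maturity.
    cash = np.maximum(K - S[:, -1], 0.0)
    for t in range(n_steps - 2, -1, -1):
        cash *= np.exp(-r * dt)          # discount one step back
        itm = K - S[:, t] > 0.0          # regress on in-the-money paths only
        if itm.sum() < 10:
            continue
        x = S[itm, t]
        # Quadratic regression of continuation value on the asset price.
        coeffs = np.polyfit(x, cash[itm], 2)
        continuation = np.polyval(coeffs, x)
        exercise = K - x
        ex_now = exercise > continuation
        idx = np.where(itm)[0][ex_now]
        cash[idx] = exercise[ex_now]     # exercise replaces continuation
    return np.exp(-r * dt) * cash.mean()

price = lsm_american_put()
```

The same backward-regression loop extends to compound options and several state variables by enlarging the regression basis, which is the advantage the abstract highlights.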

127

This study develops Bayesian methods for estimating the parameters of a stochastic switching regression model. Markov chain Monte Carlo methods, data augmentation, and Gibbs sampling are used to facilitate estimation of the posterior means. The main feature of these methods is that the posterior means are estimated by the ergodic averages of samples drawn from conditional distributions, which are relatively simple in form and more

Maria Ana E. Odejar; Mark S. McNulty

2001-01-01
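The data-augmentation/Gibbs idea above can be sketched on a toy analogue of a switching model: observations drawn from one of two regimes with unknown means and latent regime labels. The sampler alternates between the two simple conditional distributions, and the posterior means are the ergodic averages of the draws. This is a minimal sketch (flat priors, known unit variance), not the paper's full switching-regression sampler.

```python
import numpy as np

def gibbs_two_regime(y, n_iter=2000, seed=0):
    """Gibbs sampler for y_i ~ N(mu_{s_i}, 1) with latent states
    s_i in {0, 1}: alternate between sampling states given means and
    means given states (data augmentation)."""
    rng = np.random.default_rng(seed)
    mu = np.array([y.min(), y.max()])     # crude initialization
    n = len(y)
    keep = []
    for it in range(n_iter):
        # Sample states: Bernoulli with odds given by the two likelihoods.
        log_p0 = -0.5 * (y - mu[0]) ** 2
        log_p1 = -0.5 * (y - mu[1]) ** 2
        p1 = 1.0 / (1.0 + np.exp(log_p0 - log_p1))
        s = rng.uniform(size=n) < p1
        # Sample means given states (normal posterior under a flat prior).
        for k, mask in enumerate([~s, s]):
            m = mask.sum()
            if m > 0:
                mu[k] = rng.normal(y[mask].mean(), 1.0 / np.sqrt(m))
        if it >= n_iter // 2:             # discard burn-in
            keep.append(np.sort(mu.copy()))
    return np.mean(keep, axis=0)          # ergodic average of the draws

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(-3, 1, 150), rng.normal(3, 1, 150)])
post_means = gibbs_two_regime(y)
```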

128

The S_N/Monte Carlo response matrix hybrid method

A hybrid method has been developed to iteratively couple S_N and Monte Carlo regions of the same problem. This technique avoids many of the restrictions and limitations of previous attempts to do the coupling and results in a general and relatively efficient method. We demonstrate the method with some simple examples.

Filippone, W.L.; Alcouffe, R.E.

1987-01-01

129

Relationship between Hirsch-Fye and weak-coupling diagrammatic quantum Monte Carlo methods.

Two weak-coupling continuous time quantum Monte Carlo (CTQMC) methods are shown to be equivalent for Hubbard-type interactions. A relation between these CTQMC methods and the Hirsch-Fye quantum Monte Carlo (HFQMC) method is established, identifying the latter as an approximation within CTQMC and providing a diagrammatic interpretation. Both HFQMC and CTQMC are shown to be equivalent when the number of time slices in HFQMC becomes infinite, implying the same degree of fermion sign problem in this limit. PMID:19518603

Mikelsons, K; Macridin, A; Jarrell, M

2009-05-13

130

NASA Astrophysics Data System (ADS)

Recently a new class of schemes, called Time Relaxed Monte Carlo (TRMC), has been introduced for the numerical solution of the Boltzmann equation of gas dynamics. The motivation is to propose a systematic framework to derive Monte Carlo methods effective near the fluid dynamic regime. Before the methods can be accepted as alternative tools to other methods, they have to show that they are able to reproduce results obtainable by well-established, reliable methods. In this paper a detailed comparison is performed between TRMC methods and the Majorant Frequency Scheme in the case of the space-homogeneous Boltzmann equation. In particular, the effect of finite number of particles is considered.

Russo, G.; Pareschi, L.; Trazzi, S.; Shevyrin, A.; Bondar, Ye.; Ivanov, M.

2005-05-01

131

Methods of Monte Carlo electron transport in particle-in-cell codes

An algorithm has been implemented in CCUBE and ISIS to treat electron transport in materials using a Monte Carlo method in addition to the electron dynamics determined by the self-consistent electromagnetic, relativistic, particle-in-cell simulation codes that have been used extensively to model generation of electron beams and intense microwave production. Incorporation of a Monte Carlo method to model the transport of electrons in materials (conductors and dielectrics) in a particle-in-cell code represents a giant step toward realistic simulation of the physics of charged-particle beams. The basic Monte Carlo method used in the implementation includes both scattering of electrons by background atoms and energy degradation.

Kwan, T.J.T.; Snell, C.M.

1985-01-01

132

NASA Astrophysics Data System (ADS)

This paper presents the calculation of the ultrasonic beam parameters focal distance, focal length, and focal widths on the X-axis and Y-axis for non-destructive testing probes. The measurement uncertainties were estimated using the Monte Carlo Method and compared to those obtained using the Guide to the Expression of Uncertainty in Measurement (GUM) approach. The results show that the mean values and the combined uncertainties are identical, but the probabilistically symmetric 95 % coverage intervals determined on the basis of the GUM uncertainty framework were more conservative than the ones achieved using the Monte Carlo Method. Moreover, the calculation of the numerical tolerance between the coverage intervals obtained from the Monte Carlo Method and GUM shows they are statistically different. Hence, a more conservative uncertainty approach will be achieved using the GUM uncertainty framework.

Alvarenga, A. V.; Silva, C. E. R.; Costa-Felix, R. P. B.

2012-05-01
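The GUM-versus-MCM comparison above can be illustrated on a toy measurement model: GUM propagates standard uncertainties through first-order sensitivity coefficients, while the Monte Carlo Method (GUM Supplement 1) propagates whole distributions and reads the coverage interval off the empirical output distribution. The model y = a·b and all numbers below are hypothetical, not the paper's ultrasonic-beam model.

```python
import numpy as np

def compare_gum_mcm(seed=0, n=200000):
    """Compare GUM-style linear propagation with Monte Carlo
    propagation for the toy model y = a * b."""
    a0, ua = 10.0, 0.1   # estimate and standard uncertainty of a
    b0, ub = 5.0, 0.2    # estimate and standard uncertainty of b
    # GUM: combined uncertainty from first-order sensitivities
    # (dy/da = b0, dy/db = a0).
    u_gum = np.sqrt((b0 * ua) ** 2 + (a0 * ub) ** 2)
    # MCM: propagate full distributions through the model.
    rng = np.random.default_rng(seed)
    y = rng.normal(a0, ua, n) * rng.normal(b0, ub, n)
    u_mcm = y.std()
    lo, hi = np.percentile(y, [2.5, 97.5])  # 95 % coverage interval
    return u_gum, u_mcm, (lo, hi)

u_gum, u_mcm, interval = compare_gum_mcm()
```

For a nearly linear model the two combined uncertainties agree closely; differences in the coverage intervals, as the abstract reports, show up when the output distribution departs from normality.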

133

A particle approach using the Direct Simulation Monte Carlo (DSMC) method is used to solve the problem of blast impact with structures. A novel approach to model the solid boundary condition for particle methods is presented. The solver is validated against an analytical solution of the Riemann shocktube problem and against experiments on interaction of a planar shock with a

Anupam Sharma; Lyle N. Long

2004-01-01

134

Simulation of chemical reaction equilibria by the reaction ensemble Monte Carlo method: a review

Understanding and predicting the equilibrium behaviour of chemically reacting systems in highly non-ideal environments is critical to many fields of science and technology, including solvation, nanoporous materials, catalyst design, combustion and propulsion science, shock physics and many more. A method with recent success in predicting the equilibrium behaviour of reactions under non-ideal conditions is the reaction ensemble Monte Carlo method

C. Heath Turner; John K. Brennan; Martin Lísal; William R. Smith; J. Karl Johnson; Keith E. Gubbins

2008-01-01

135

A kinetic Monte Carlo method was used to simulate the diffusion of reptating polymer chains across the interface. A time-resolved fluorescence technique in conjunction with the direct energy transfer method was used to measure the extent of diffusion of dye-labeled reptating polymer chains. The diffusion of donor- and acceptor-labeled polymer chains between adjacent compartments was randomly generated. The fluorescence decay

Erkan Tuzel; K. Batuhan Kisacikoglu; Onder Pekcan

2003-01-01

136

Multiscale Finite-Difference-Diffusion-Monte-Carlo Method for Simulating Dendritic Solidification

We present a novel hybrid computational method to simulate accurately dendritic solidification in the low undercooling limit where the dendrite tip radius is one or more orders of magnitude smaller than the characteristic spatial scale of variation of the surrounding thermal or solutal diffusion field. The first key feature of this method is an efficient multiscale diffusion Monte Carlo (DMC)

Mathis Plapp; Alain Karma

2000-01-01

137

Comparison of the Monte Carlo adjoint-weighted and differential operator perturbation methods

Two perturbation theory methodologies are implemented for k-eigenvalue calculations in the continuous-energy Monte Carlo code, MCNP6. A comparison of the accuracy of these techniques, the differential operator and adjoint-weighted methods, is performed numerically and analytically. Typically, the adjoint-weighted method shows better performance over a larger range; however, there are exceptions.

Kiedrowski, Brian C [Los Alamos National Laboratory; Brown, Forrest B [Los Alamos National Laboratory

2010-01-01

138

Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport

Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.

McKinley, M S; Brooks III, E D; Daffin, F

2004-12-13

139

Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.

Perfetti, Christopher M [ORNL; Martin, William R [University of Michigan; Rearden, Bradley T [ORNL; Williams, Mark L [ORNL

2012-01-01

140

Comparison of Monte Carlo simulation methods for the calculation of the nucleation barrier of argon

We compare two molecular Monte Carlo simulation methods, the discrete summation method and the growth/decay method, which calculate the vapor-liquid nucleation free energy barrier by simulating isolated clusters of fixed size without the surrounding vapor. The methods are applied to calculations of nucleation barriers of Lennard-Jones argon at 60 K and 80 K. Both of these methods are computationally efficient, as only

Antti Lauri; Joonas Merikanto; Evgeni Zapadinsky; Hanna Vehkamäki

2006-01-01

141

NASA Astrophysics Data System (ADS)

Because of steric overlap, conventional Monte Carlo methods are very inefficient for the simulation of lipid interactions at fluid lipid bilayer densities. However, the configurational bias Monte Carlo (CBMC) sampling algorithm (J. Siepmann and D. Frenkel, Molecular Physics 75, 59, 1992) can potentially accelerate convergence of Monte Carlo simulations by many orders of magnitude. As an intermediate step in the process of extending a statistical mechanical model for the gel-to-ripple phase of a lipid bilayer (W. S. McCullough, J. H. H. Perk, and H. L. Scott, J. Chem. Phys. 93, 6070, 1990) to include the ripple-to-fluid phase transition, we calculate interaction energies between disordered lipid molecule pairs utilizing the CBMC method to sample chain configurations. We present the calculated order parameters, gauche bond distribution, and other structural data. These results are compared to results from other simulations and experiment.

Clark, M. M.; Scott, H. L.

1996-03-01

142

Revised methods for few-group cross sections generation in the Serpent Monte Carlo code

This paper presents new calculation methods, recently implemented in the Serpent Monte Carlo code, and related to the production of homogenized few-group constants for deterministic 3D core analysis. The new methods fall under three topics: 1) Improved treatment of neutron-multiplying scattering reactions, 2) Group constant generation in reflectors and other non-fissile regions and 3) Homogenization in leakage-corrected criticality spectrum. The methodology is demonstrated by a numerical example, comparing a deterministic nodal diffusion calculation using Serpent-generated cross sections to a reference full-core Monte Carlo simulation. It is concluded that the new methodology improves the results of the deterministic calculation, and paves the way for Monte Carlo based group constant generation. (authors)

Fridman, E. [Reactor Safety Div., Helmholtz-Zentrum Dresden-Rossendorf, POB 51 01 19, Dresden, 01314 (Germany)]; Leppaenen, J. [VTT Technical Research Centre of Finland, POB 1000, FI-02044 VTT (Finland)]

2012-07-01

143

A new method to assess the statistical convergence of Monte Carlo solutions

Accurate Monte Carlo confidence intervals (CIs), which are formed with an estimated mean and an estimated standard deviation, can only be created when the number of particle histories N becomes large enough so that the central limit theorem can be applied. The Monte Carlo user has a limited number of marginal methods to assess the fulfillment of this condition, such as statistical error reduction proportional to 1/√N with error magnitude guidelines and third and fourth moment estimators. A new method is presented here to assess the statistical convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores. Related work in this area includes the derivation of analytic score distributions for a two-state Monte Carlo problem. Score distribution histograms have been generated to determine when a small number of histories accounts for a large fraction of the result. This summary describes initial studies of empirical Monte Carlo history score PDFs created from score histograms of particle transport simulations. 7 refs., 1 fig.

Forster, R.A.

1991-01-01
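The diagnostics described above can be sketched on a synthetic stream of history scores: check that the standard error actually falls like 1/√N, and histogram the scores so the shape of the high-score tail (rare histories dominating the tally) is visible. The lognormal score model below is an assumption chosen only to mimic a skewed tally, not data from the paper.

```python
import numpy as np

def history_score_diagnostics(n_histories=100000, seed=0):
    """Convergence diagnostics on synthetic Monte Carlo history scores:
    1/sqrt(N) error-reduction check plus the empirical score PDF."""
    rng = np.random.default_rng(seed)
    # Synthetic scores: most histories score little, a few score a lot
    # (a lognormal tail stands in for rare high-weight particles).
    scores = rng.lognormal(mean=0.0, sigma=1.0, size=n_histories)
    # Standard errors at N and N/4 should differ by about a factor of 2.
    se_full = scores.std() / np.sqrt(n_histories)
    se_quarter = scores[: n_histories // 4].std() / np.sqrt(n_histories // 4)
    # Empirical PDF of history scores (its tail shape is the diagnostic).
    pdf, edges = np.histogram(scores, bins=100, density=True)
    return se_full, se_quarter, pdf, edges

se_full, se_quarter, pdf, edges = history_score_diagnostics()
```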

144

A thermodynamically guided atomistic Monte Carlo methodology is presented for simulating systems beyond equilibrium by expanding the statistical ensemble to include a tensorial variable accounting for the overall structure of the system subjected to flow. For a given shear rate, the corresponding tensorial conjugate field is determined iteratively through independent nonequilibrium molecular dynamics simulations. Test simulations for the effect of flow on the conformation of a C50H102 polyethylene liquid show that the two methods (expanded Monte Carlo and nonequilibrium molecular dynamics) provide identical results. PMID:18233557

Baig, C; Mavrantzas, V G

2007-12-19

145

Neutron streaming through shield ducts using a discrete ordinates/Monte Carlo method

A common problem in shield design is determining the neutron flux that streams through ducts in shields and also that penetrates the shield after having traveled partway down the duct. Obviously the determination of the neutrons that stream down the duct can be computed in a straightforward manner using Monte Carlo techniques. On the other hand those neutrons that must penetrate a significant portion of the shield are more easily handled using discrete ordinates methods. A hybrid discrete ordinates/Monte Carlo cods, TWODANT/MC, which is an extension of the existing discrete ordinates code TWODANT, has been developed at Los Alamos to allow the efficient, accurate treatment of both streaming and deep penetration problems in a single calculation. In this paper we provide examples of the application of TWODANT/MC to typical geometries that are encountered in shield design and compare the results with those obtained using the Los Alamos Monte Carlo code MCNP{sup 3}.

Urban, W.T.; Baker, R.S.

1993-08-18

146

Adatom Density Kinetic Monte Carlo (AD-KMC): A new method for fast growth simulation

The main approach to perform growth simulations on an atomistic level is kinetic Monte Carlo (KMC). A problem with this method is that the CPU time increases exponentially with the growth temperature, making simulations exceedingly expensive. An analysis of typical KMC runs showed two characteristic time scales: t_ad which is the characteristic time for an adatom jump and t_surf which

Lorenzo Mandreoli; Joerg Neugebauer

2002-01-01

147

Quantum Monte Carlo Methods for First Principles Simulation of Liquid Water

ERIC Educational Resources Information Center

Obtaining an accurate microscopic description of water structure and dynamics is of great interest to molecular biology researchers and in the physics and quantum chemistry simulation communities. This dissertation describes efforts to apply quantum Monte Carlo methods to this problem with the goal of making progress toward a fully "ab initio"

Gergely, John Robert

2009-01-01

148

An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model.

ERIC Educational Resources Information Center

Examined the accuracy of the Gibbs sampling Markov chain Monte Carlo procedure for estimating item and person (theta) parameters in the one-parameter logistic model. Analyzed four empirical datasets using the Gibbs sampling, conditional maximum likelihood, marginal maximum likelihood, and joint maximum likelihood methods. Discusses the conditions

Kim, Seock-Ho

2001-01-01

149

An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model.

ERIC Educational Resources Information Center

The accuracy of the Markov chain Monte Carlo procedure, Gibbs sampling, was considered for estimation of item and ability parameters of the one-parameter logistic model. Four data sets were analyzed to evaluate the Gibbs sampling procedure. Data sets were also analyzed using methods of conditional maximum likelihood, marginal maximum likelihood,

Kim, Seock-Ho

150

ERIC Educational Resources Information Center

The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain

Kim, Jee-Seon; Bolt, Daniel M.

2007-01-01

151

New Brownian bridge construction in quasi-Monte Carlo methods for computational finance

Quasi-Monte Carlo (QMC) methods have been playing an important role for high-dimensional problems in computational finance. Several techniques, such as the Brownian bridge (BB) and the principal component analysis, are often used in QMC as possible ways to improve the performance of QMC. This paper proposes a new BB construction, which enjoys some interesting properties that appear useful in QMC

Junyi Lin; Xiaoqun Wang

2008-01-01
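The classical Brownian bridge construction that this record builds on works by fixing the terminal value first and then recursively filling in midpoints, so the leading random numbers carry most of the path's variance. A minimal sketch of that standard construction (not the paper's new variant); the power-of-two path length is an assumption for simplicity.

```python
import numpy as np

def brownian_bridge_path(z, T=1.0):
    """Standard Brownian bridge construction of a discrete Brownian
    path from independent standard normals z (len must be a power of
    two). z[0] fixes W(T); later normals fill in midpoints with the
    conditional bridge variance."""
    d = len(z)
    assert d & (d - 1) == 0, "length must be a power of two"
    t = np.linspace(0.0, T, d + 1)
    w = np.zeros(d + 1)
    w[-1] = np.sqrt(T) * z[0]        # terminal value first
    k = 1
    step = d
    while step > 1:
        half = step // 2
        for left in range(0, d, step):
            right = left + step
            mid = left + half
            mean = 0.5 * (w[left] + w[right])
            # Conditional variance of the bridge midpoint.
            var = (t[mid] - t[left]) * (t[right] - t[mid]) / (t[right] - t[left])
            w[mid] = mean + np.sqrt(var) * z[k]
            k += 1
        step = half
    return w

rng = np.random.default_rng(0)
w = brownian_bridge_path(rng.standard_normal(8))
```

Because the first coordinates determine the coarse shape of the path, pairing this construction with a low-discrepancy sequence concentrates the QMC accuracy where it matters most.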

152

Variational method for estimating the rate of convergence of Markov-chain Monte Carlo algorithms

We demonstrate the use of a variational method to determine a quantitative lower bound on the rate of convergence of Markov chain Monte Carlo (MCMC) algorithms as a function of the target density and proposal density. The bound relies on approximating the second largest eigenvalue in the spectrum of the MCMC operator using a variational principle and the approach is

Fergal P. Casey; Joshua J. Waterfall; Ryan N. Gutenkunst; Christopher R. Myers; James P. Sethna

2008-01-01

153

Variational method for estimating the rate of convergence of Markov Chain Monte Carlo algorithms

We demonstrate the use of a variational method to determine a quantitative lower bound on the rate of convergence of Markov Chain Monte Carlo (MCMC) algorithms as a function of the target density and proposal density. The bound relies on approximating the second largest eigenvalue in the spectrum of the MCMC operator using a variational principle and the approach is

Fergal P. Casey; Joshua J. Waterfall; Ryan N. Gutenkunst; Christopher R. Myers; James P. Sethna

2006-01-01

154

Application of Monte Carlo Method to Solve the Neutron Kinetics Equation for a Subcritical Assembly

There is a need to understand the space-dependent kinetics of fast or thermal reactor physics, and the Monte Carlo method should be implemented in kinetics codes as well. In a transient accident (for example, a control rod ejection accident or a loss-of-coolant accident), changes in the system are much slower than the prompt neutron lifetime. In the present paper, a

Kohei IWANAGA; Hiroshi SEKIMOTO; Takamasa MORI

2008-01-01

155

Neutrino transport in type II supernovae: Boltzmann solver vs. Monte Carlo method

We have coded a Boltzmann solver based on a finite difference scheme (S_N method) aiming at calculations of neutrino transport in type II supernovae. Close comparison between the Boltzmann solver and a Monte Carlo transport code has been made for realistic atmospheres of post-bounce core models under the assumption of a static background. We have also investigated in detail

Shoichi Yamada; Hans-Thomas Janka; Hideyuki Suzuki

1999-01-01

156

A quasi-Monte Carlo method for computing double and other multiple integrals

The heuristic importance of the Monte Carlo method lies in the fact that it shows the possibility of computing numerically integrals in many dimensions by taking averages of integrand values at a number of points in such a way that, for a given degree of accuracy, this number does not depend substantially on the number of dimensions of the domain

S. K. Zaremba

1970-01-01
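The dimension-robust averaging described above is easy to demonstrate side by side with plain Monte Carlo on a double integral with a known value. A minimal sketch assuming SciPy's Sobol generator; the integrand exp(x + y) over the unit square is a hypothetical example with exact value (e − 1)².

```python
import numpy as np
from scipy.stats import qmc

def compare_mc_qmc(n=4096, seed=0):
    """Estimate the double integral of exp(x + y) over [0, 1]^2 with
    plain Monte Carlo and with a scrambled Sobol sequence."""
    f = lambda p: np.exp(p[:, 0] + p[:, 1])
    rng = np.random.default_rng(seed)
    mc = f(rng.uniform(size=(n, 2))).mean()        # pseudo-random average
    sob = qmc.Sobol(d=2, scramble=True, seed=seed)
    qmc_est = f(sob.random(n)).mean()              # low-discrepancy average
    exact = (np.e - 1.0) ** 2
    return mc, qmc_est, exact

mc, qmc_est, exact = compare_mc_qmc()
```

Both estimates are simple averages of integrand values; only the point set changes, yet the quasi-random average converges far faster for smooth integrands.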

157

Applications of the Monte Carlo method in nuclear physics using the GEANT4 toolkit

The capabilities of the personal computers allow the application of Monte Carlo methods to simulate very complex problems that involve the transport of particles through matter. Among the several codes commonly employed in nuclear physics problems, the GEANT4 has received great attention in the last years, mainly due to its flexibility and possibility to be improved by the users. Differently

Maurķcio Moralles; Daniel A. B. Bonifįcio; Emico Okuno; Hélio M. Murata; Mįrcio Bottaro; Mįrio O. Menezes

2009-01-01

158

Quantum Monte Carlo Methods for First Principles Simulation of Liquid Water

ERIC Educational Resources Information Center

Obtaining an accurate microscopic description of water structure and dynamics is of great interest to molecular biology researchers and in the physics and quantum chemistry simulation communities. This dissertation describes efforts to apply quantum Monte Carlo methods to this problem with the goal of making progress toward a fully "ab initio"

Gergely, John Robert

2009-01-01

159

We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex, data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design.

Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.

2011-01-01

160

A massively parallel implementation of the worldline quantum Monte Carlo method

The worldline quantum Monte Carlo method is a computational technique for studying the properties of many-electron and quantum-spin systems. In this paper, our efforts in developing an efficient implementation of this method for the massively parallel Connection Machine CM-2 are described. Why one must look beyond the obvious parallelism in the method in order to reduce interprocessor communication and increase

W. R. Somsky; J. E. Gubernatis

1992-01-01

161

Probability-Weighted Dynamic Monte Carlo Method for Reaction Kinetics Simulations

The reaction kinetics underlying the dynamic features of physical systems can be investigated by using various approaches such as the Dynamic Monte Carlo (DMC) method. The usefulness of the DMC method to study reaction kinetics has been limited to systems where the underlying reactions occur with similar frequencies, i.e., similar rate constants. We have developed a probability-weighted DMC method by incorporating the weighted sampling algorithm of equilibrium molecular simulations.

Resat, Haluk; Wiley, H S.; Dixon, David A.

2001-07-01
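The baseline DMC scheme that the probability-weighted method extends is the Gillespie-style stochastic simulation: draw an exponential waiting time from the total propensity, then fire a reaction. A minimal sketch for the single reaction A → B (hypothetical rate constant and population); with several reactions of very different rates, the weighted scheme above additionally biases which reaction is selected.

```python
import numpy as np

def dmc_decay(k=1.0, n0=1000, t_end=2.0, seed=0):
    """Dynamic Monte Carlo (Gillespie-style) sketch for A -> B with
    rate constant k: exponential waiting times from the total
    propensity, one molecule reacting per event."""
    rng = np.random.default_rng(seed)
    n, t = n0, 0.0
    while n > 0:
        a = k * n                        # total propensity
        t += rng.exponential(1.0 / a)    # time to the next event
        if t > t_end:
            break
        n -= 1                           # one A molecule reacts
    return n

remaining = dmc_decay()
# The expected number of survivors is n0 * exp(-k * t_end).
```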

162

A scatter correction method for Tl-201 images: A Monte Carlo investigation

Results from the application of a modified dual photopeak window (DPW) scatter correction method to Monte Carlo simulated Tl-201 emission images are presented. In the Monte Carlo investigation, individual simulations were performed for six radiation emissions of Tl-201. For each emission, point sources of Tl-201 were imaged at various locations in a water-filled elliptical tube phantom using three energy windows: two 12% windows abutted at 72 keV and a third 10 keV window placed to the right of the photopeak window (95.001 keV - 105.000 keV). The third window was used to estimate the spilldown contribution from the Tl-201 gamma rays in each of the two photopeak windows. Using the corrected counts in these two windows, the DPW method was applied to each point source image to estimate the scatter distribution. For point source images in both homogeneous and non-homogeneous attenuating media, the application of this modified version of DPW resulted in an approximately six-fold reduction in the scatter fraction and an excellent agreement of the shape of the tails between the estimated scatter distribution and the Monte Carlo-simulated truth. This method was also applied to two views of an extended cardiac distribution within an anthropomorphic phantom, again resulting in at least a six-fold improvement between the scatter estimate and the Monte Carlo-simulated true scatter.

Hademenos, G.J.; King, M.A. (Univ. of Massachusetts Medical Center, Worcester, MA (United States). Dept. of Nuclear Medicine); Ljungberg, M. (Lund Univ., (Sweden). Dept. of Radiation Physics); Zubal, G.; Harrell, C.R. (Yale Univ. School of Medicine, New Haven, CT (United States). Dept. of Diagnostic Radiology)

1993-08-01

163

A coupled two-dimensional drift-diffusion and Monte Carlo analysis is developed to study the hot-electron-caused gate leakage current in Si n-MOSFETs. The electron energy distribution in a device is evaluated directly from a Monte Carlo model at low and intermediate electron energies. In the region of high electron energy, where the distribution function cannot be resolved by the Monte Carlo method

Chimoon Huang; Tahui Wang; C. N. Chen; M. C. Chang; J. Fu

1992-01-01

164

A high-order photon Monte Carlo method for radiative transfer in direct numerical simulation

A high-order photon Monte Carlo method is developed to solve the radiative transfer equation. The statistical and discretization errors of the computed radiative heat flux and radiation source term are isolated and quantified. Up to sixth-order spatial accuracy is demonstrated for the radiative heat flux, and up to fourth-order accuracy for the radiation source term. This demonstrates the compatibility of the method with high-fidelity direct numerical simulation (DNS) for chemically reacting flows. The method is applied to address radiative heat transfer in a one-dimensional laminar premixed flame and a statistically one-dimensional turbulent premixed flame. Modifications of the flame structure with radiation are noted in both cases, and the effects of turbulence/radiation interactions on the local reaction zone structure are revealed for the turbulent flame. Computational issues in using a photon Monte Carlo method for DNS of turbulent reacting flows are discussed.

Wu, Y. [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, 130 Research Building E, University Park, PA 16802 (United States); Modest, M.F. [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, 130 Research Building E, University Park, PA 16802 (United States); Haworth, D.C. [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, 130 Research Building E, University Park, PA 16802 (United States)]. E-mail: dch12@psu.edu

2007-05-01

165

Linear multistep methods, particle filtering and sequential Monte Carlo

NASA Astrophysics Data System (ADS)

Numerical integration is the main bottleneck in particle filter methodologies for dynamic inverse problems to estimate model parameters, initial values, and non-observable components of an ordinary differential equation (ODE) system from partial, noisy observations, because proposals may result in stiff systems which first slow down or paralyze the time integration process, then end up being discarded. The immediate advantage of formulating the problem in a sequential manner is that the integration is carried out on shorter intervals, thus reducing the risk of long integration processes followed by rejections. We propose to solve the ODE systems within a particle filter framework with higher order numerical integrators which can handle stiffness and to base the choice of the variance of the innovation on estimates of the discretization errors. The application of linear multistep methods to particle filters gives a handle on the stability and accuracy of the propagation, and linking the innovation variance to the accuracy estimate helps keep the variance of the estimate as low as possible. The effectiveness of the methodology is demonstrated with a simple ODE system similar to those arising in biochemical applications.

Arnold, Andrea; Calvetti, Daniela; Somersalo, Erkki

2013-08-01
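The particle-filter framework into which the multistep integrators above are embedded can be shown in its simplest bootstrap form: propagate particles through the dynamics, weight them by the observation likelihood, and resample. The scalar linear-Gaussian model below is a hypothetical stand-in for the ODE system of the paper, where the propagation step would be a (possibly stiff) numerical integration.

```python
import numpy as np

def bootstrap_pf(ys, n_particles=1000, seed=0):
    """Minimal bootstrap particle filter for the scalar model
    x_t = 0.9 x_{t-1} + w_t,  y_t = x_t + v_t,  w, v ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_particles)
    means = []
    for y in ys:
        x = 0.9 * x + rng.standard_normal(n_particles)  # propagate dynamics
        logw = -0.5 * (y - x) ** 2                      # observation likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.dot(w, x))                      # filtered mean
        # Multinomial resampling to fight weight degeneracy.
        x = x[rng.choice(n_particles, n_particles, p=w)]
    return np.array(means)

# Simulate data from the model and filter it.
rng = np.random.default_rng(42)
truth = np.zeros(100)
for t in range(1, 100):
    truth[t] = 0.9 * truth[t - 1] + rng.standard_normal()
ys = truth + rng.standard_normal(100)
est = bootstrap_pf(ys)
```

In the ODE setting, each propagation step is an integration over a short interval, which is why the sequential formulation reduces the cost of proposals that turn out to be rejected.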

166

Ensemble Monte Carlo (EMC) simulation is performed to study the electron-electron (e-e) scattering of a two-dimensional electron gas. Two commonly utilized EMC methods, the first proposed by Lugli and Ferry [Physica B 117, 251 (1983)], the second by Brunetti and co-workers [Physica B 134, 369 (1985)], are used and discussed in detail. Both methods are found to lead to erroneous

Martin Moko; Antónia Moková

1991-01-01

167

A Monte Carlo investigation of the dual photopeak window scatter correction method [SPECT

Results from a Monte Carlo investigation of the dual photopeak window (DPW) scatter correction method are presented for point and extended sources of Tc-99m in both homogeneous and nonhomogeneous attenuating media. The DPW method uses the ratio of counts in two nonoverlapping energy windows within the photopeak region as input to a regression relation. A pixel-by-pixel estimate of the scatter

George J. Hademenos; Michael Ljungberg; Michael A. King; Stephen J. Glick

1993-01-01

168

A Monte Carlo investigation of the dual photopeak window scatter correction method [SPECT

Results from a Monte Carlo investigation of the dual photopeak window (DPW) scatter correction method are presented for point and extended sources of Tc-99m in both homogeneous and nonhomogeneous attenuating media. The DPW method uses the ratio of counts in two nonoverlapping energy windows within the photopeak region as input to a regression relation. A pixel-by-pixel estimate of the scatter

George J. Hademenos; Michael Ljungberg; Michael A. King; Stephen J. Glick

1991-01-01

169

Study of dipole moments of LiSr and KRb molecules by quantum Monte Carlo methods

NASA Astrophysics Data System (ADS)

Heteronuclear dimers are of significant interest to experiments seeking to exploit ultracold polar molecules in a number of novel ways including precision measurement, quantum computing, and quantum simulation. We calculate highly accurate Born-Oppenheimer total energies and electric dipole moments as a function of internuclear separation for two such dimers, LiSr and KRb. We apply fully-correlated, high-accuracy quantum Monte Carlo methods for evaluating these molecular properties in a many-body framework. We use small-core effective potentials combined with multi-reference Slater-Jastrow trial wave functions to provide accurate nodes for the fixed-node diffusion Monte Carlo method. For reference and comparison, we calculate the same properties with Hartree-Fock and with restricted Configuration Interaction methods, and carefully assess the impact of the recovered many-body correlations on the calculated quantities.

Guo, Shi; Bajdich, Michal; Mitas, Lubos; Reynolds, Peter J.

2013-07-01

170

Statistical error and optimal parameters of the test particle Monte Carlo method

NASA Astrophysics Data System (ADS)

The test particle Monte Carlo method for solving the linearized Boltzmann equation is considered. This method is used for simulation of a gas mixture flow when the concentration of one of the components is low. We study the errors of the method for the three main macroparameters (density, velocity, and temperature). A new approach to the construction of asymptotic confidence intervals for the estimates of velocity and temperature is proposed. Expressions for the optimal selection of the number of grid nodes and the sample size which guarantee attaining a specified error level are derived on the basis of the theory of functional Monte Carlo algorithms. The proposed approaches are examined on the classical problem of heat transfer between two parallel plates and on the two-dimensional problem of a transversal supersonic flow of a rarefied binary gas mixture around a plate.

Plotnikov, M. Yu.; Shkarupa, E. V.

2012-11-01

171

Quantum Diffusion Monte Carlo Method for strong field time dependent problems

NASA Astrophysics Data System (ADS)

We formulate the Quantum Diffusion Monte Carlo (QDMC) method for the solution of the time-dependent Schrödinger equation for atoms in strong laser fields. Unlike in normal diffusion Monte Carlo, the wave function is represented by walkers of two kinds, or colors, which solve two coupled, nonlinear diffusion equations. These diffusion equations are coupled by potentials similar to those introduced by Shay, which must be added to Schrödinger's equation to obtain classical dynamics equivalent to quantum mechanics [1]. The potentials are calculated semi-analytically, similarly to the smoothing methods of smooth particle electrodynamics (SPD) with Gaussian smoothing kernels. We apply this method to strong-field two-electron ionization of helium. We calculate the two-electron double ionization rate quantum mechanically in the full six-dimensional configuration space. Comparison with classical mechanics and low-dimensional grid models is also provided. [1] D. Shay, Phys. Rev. A 13, 2261 (1976)

Kalinski, Matt

2006-05-01

172

Time-step limits for a Monte Carlo Compton-scattering method

Compton scattering is an important aspect of radiative transfer in high energy density applications. In this process, the frequency and direction of a photon are altered by colliding with a free electron. The change in frequency of a scattered photon results in an energy exchange between the photon and target electron and energy coupling between radiation and matter. Canfield, Howard, and Liang have presented a Monte Carlo method for simulating Compton scattering that models the photon-electron collision kinematics exactly. However, implementing their technique in multiphysics problems that include the effects of radiation-matter energy coupling typically requires evaluating the material temperature at its beginning-of-time-step value. This explicit evaluation can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and present time-step limits that avoid instabilities and nonphysical oscillations by considering a spatially independent, purely scattering radiative-transfer problem. Examining a simplified problem is justified because it isolates the effects of Compton scattering, and existing Monte Carlo techniques can robustly model other physics (such as absorption, emission, sources, and photon streaming). Our analysis begins by simplifying the equations that are solved via Monte Carlo within each time step using the Fokker-Planck approximation. Next, we linearize these approximate equations about an equilibrium solution such that the resulting linearized equations describe perturbations about this equilibrium. We then solve these linearized equations over a time step and determine the corresponding eigenvalues, quantities that can predict the behavior of solutions generated by a Monte Carlo simulation as a function of time-step size and other physical parameters. With these results, we develop our time-step limits. 
This approach is similar to our recent investigation of time discretizations for the Compton-scattering Fokker-Planck equation.

Densmore, Jeffery D [Los Alamos National Laboratory; Warsa, James S [Los Alamos National Laboratory; Lowrie, Robert B [Los Alamos National Laboratory

2008-01-01

173

Comparison of Monte Carlo methods for criticality benchmarks: Pointwise compared to multigroup

Transport codes use multigroup cross sections, where neutrons are divided into broad energy groups and the monoenergetic equation is solved for each group with a group-averaged cross section. Monte Carlo codes differ in that they allow the use of the most basic pointwise cross-section data directly in a calculation. Most of the first Monte Carlo codes were not able to utilize this feature, however, because of the memory limitations of early computers and the lack of pointwise cross-section data. Consequently, codes written in the 1970s, such as KENO-IV and MORSE-C, were adapted to use multigroup cross-section sets similar to those used in the S{sub n} transport codes. With advances in computer memory capacities and the availability of pointwise cross-section sets, new Monte Carlo codes employing pointwise cross-section libraries, such as the Los Alamos National Laboratory code MCNP and the Lawrence Livermore National Laboratory (LLNL) code COG, were developed for criticality as well as radiation transport calculations. To compare pointwise and multigroup Monte Carlo methods for criticality benchmark calculations, this paper presents and evaluates results from the KENO-IV, MORSE-C, MCNP, and COG codes. The critical experiments selected for benchmarking include LLNL fast metal systems and low-enriched uranium moderated and reflected systems.

Choi, J.S.; Alesso, P.H.; Pearson, J.S. (Lawrence Livermore National Lab., CA (USA))

1989-01-01

174

Application de la methode des sous-groupes au calcul Monte-Carlo multigroupe

NASA Astrophysics Data System (ADS)

This thesis is dedicated to the development of a Monte Carlo neutron transport solver based on the subgroup (or multiband) method. In this formalism, cross sections for resonant isotopes are represented in the form of probability tables on the whole energy spectrum. This study is intended to test and validate this approach in lattice physics and criticality-safety applications. The probability table method seems promising since it introduces an alternative computational path between the legacy continuous-energy representation and the multigroup method. In the first case, the amount of data invoked in continuous-energy Monte Carlo calculations can be very large and tends to slow down the overall computation; on the other hand, this model preserves the quality of the physical laws present in the ENDF format. Due to its low computational cost, the multigroup Monte Carlo approach is usually at the basis of production codes in criticality-safety studies. However, the use of a multigroup representation of the cross sections implies a preliminary calculation to take into account self-shielding effects for resonant isotopes. This is generally performed by deterministic lattice codes relying on the collision probability method. Using cross-section probability tables on the whole energy range makes it possible to take self-shielding effects into account directly, and can be employed in both lattice physics and criticality-safety calculations. Several aspects have been thoroughly studied: (1) The consistent computation of probability tables with an energy grid comprising only 295 or 361 groups. The CALENDF moment approach led to probability tables suitable for a Monte Carlo code. (2) The combination of the probability table sampling for the energy variable with the delta-tracking rejection technique for the space variable, and its impact on the overall efficiency of the proposed Monte Carlo algorithm.
(3) The derivation of a model for taking into account anisotropic effects of the scattering reaction, consistent with the subgroup method. In this study, we generalize the Discrete Angle Technique, already proposed for homogeneous, multigroup cross sections, to isotopic cross sections in the form of probability tables. In this technique, the angular density is discretized into probability tables. Similarly to the cross-section case, a moment approach is used to compute the probability tables for the scattering cosine. (4) The introduction of a leakage model based on the B1 fundamental mode approximation. Unlike deterministic lattice packages, most Monte Carlo-based lattice physics codes do not include leakage models; however, the generation of homogenized and condensed group constants (cross sections, diffusion coefficients) requires the critical flux. This project has involved the development of a program within the DRAGON framework, written in Fortran 2003 and wrapped with a driver in C, the GANLIB 5. Choosing Fortran 2003 has permitted the use of some modern features, such as the definition of objects and methods, data encapsulation, and polymorphism. The validation of the proposed code has been performed by comparison with other numerical methods: (1) The continuous-energy Monte Carlo method of the SERPENT code. (2) The Collision Probability (CP) method and the discrete ordinates (SN) method of the DRAGON lattice code. (3) The multigroup Monte Carlo code MORET, coupled with the DRAGON code. Benchmarks used in this work are representative of industrial configurations encountered in reactor and criticality-safety calculations: (1) Pressurized Water Reactor (PWR) cells and assemblies. (2) Canada-Deuterium Uranium Reactor (CANDU-6) clusters. (3) Critical experiments from the ICSBEP handbook (International Criticality Safety Benchmark Evaluation Program).
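The delta-tracking rejection technique mentioned in point (2) can be sketched in a few lines. The cross-section profile and majorant below are hypothetical stand-ins for tabulated probability-table data:

```python
import math, random

random.seed(0)

def delta_track(sigma, sigma_maj, x0=0.0, rng=random):
    """Woodcock (delta-tracking) sampling of the next real collision site.

    Flight distances are sampled with a spatially constant majorant
    cross section sigma_maj >= sigma(x); a tentative collision at x is
    accepted as real with probability sigma(x)/sigma_maj, otherwise the
    flight continues. No integration of the optical depth along the
    path is required, which is what makes the rejection scheme cheap."""
    x = x0
    while True:
        x += -math.log(rng.random()) / sigma_maj   # exponential flight
        if rng.random() < sigma(x) / sigma_maj:    # real collision?
            return x

# Sanity check: in a homogeneous medium with sigma(x) = 1 the sampled
# free paths must average to the mean free path 1/sigma = 1.
sigma_const = lambda x: 1.0
samples = [delta_track(sigma_const, sigma_maj=4.0) for _ in range(20000)]
mean_free_path = sum(samples) / len(samples)
```

In the thesis's setting, `sigma(x)` would itself be drawn from the probability tables at each tentative collision; the sketch keeps it deterministic for clarity.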

Martin, Nicolas

175

GPU-accelerated Monte Carlo simulation of particle coagulation based on the inverse method

NASA Astrophysics Data System (ADS)

Simulating particle coagulation using Monte Carlo methods is in general a challenging computational task due to its numerical complexity and the computing cost. Currently, the lowest computing costs are obtained by employing a graphics processing unit (GPU), originally developed for speeding up graphics processing in the consumer market. In this article we present a GPU implementation of a Monte Carlo method based on the inverse scheme for simulating particle coagulation. The abundant data parallelism embedded within the Monte Carlo method is explained, as it allows an efficient parallelization of the MC code on the GPU. Furthermore, the computational accuracy of the MC on the GPU was validated against a benchmark, a CPU-based discrete-sectional method. To evaluate the performance gains of using the GPU, the computing time on the GPU was compared against that of its sequential counterpart on the CPU. The measured speedups show that the GPU can accelerate the execution of the MC code by a factor of 10-100, depending on the chosen number of simulation particles. The algorithm shows a linear dependence of computing time on the number of simulation particles, which is a remarkable result in view of the n^2 dependence of the coagulation.
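The "inverse scheme" named in the title refers to inverse-CDF sampling of the coagulating pair. A serial CPU sketch of that step (the kernel and particle volumes are illustrative, and a production code would update the cumulative sums incrementally rather than rebuild them) might look like:

```python
import bisect, random

random.seed(2)

def sample_coagulation_pair(volumes, kernel, rng=random):
    """Pick a coagulating pair (i, j) by inverse-CDF sampling.

    Cumulative sums of the pairwise kernel rates K(v_i, v_j) are built;
    a uniform random number scaled by the total rate is then mapped to
    a pair via binary search. This inversion step is the part that maps
    naturally onto parallel GPU threads in the accelerated version."""
    pairs, cum, total = [], [], 0.0
    n = len(volumes)
    for i in range(n):
        for j in range(i + 1, n):
            total += kernel(volumes[i], volumes[j])
            pairs.append((i, j))
            cum.append(total)
    u = rng.random() * total
    return pairs[bisect.bisect_left(cum, u)]

# With a constant kernel every pair is equally likely to coagulate.
vols = [1.0, 2.0, 4.0, 8.0]
i, j = sample_coagulation_pair(vols, lambda a, b: 1.0)
```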

Wei, J.; Kruis, F. E.

2013-09-01

176

Monte Carlo Method for Predicting a Physically Based Drop Size Distribution Evolution of a Spray

NASA Astrophysics Data System (ADS)

We report in this paper a method for predicting the evolution of a physically based drop size distribution of a spray, by coupling the Maximum Entropy Formalism and the Monte Carlo scheme. Using the discrete or continuous population balance equation, a Mass Flow Algorithm is formulated taking into account interactions between droplets via coalescence. After deriving a kernel for coalescence, we solve the time dependent drop size distribution equation using a Monte Carlo method. We apply the method to the spray of a new print-head known as a Spray On Demand (SOD) device; the process exploits ultrasonic spray generation via a Faraday instability where the fluid/structure interaction causing the instability is described by a modified Hamilton's principle. This has led to a physically-based approach for predicting the initial drop size distribution within the framework of the Maximum Entropy Formalism (MEF): a three-parameter generalized Gamma distribution is chosen by using conservation of mass and energy. The calculation of the drop size distribution evolution by Monte Carlo method shows the effect of spray droplets coalescence both on the number-based or volume-based drop size distributions.

Tembely, Moussa; Lécot, Christian; Soucemarianadin, Arthur

2010-03-01

177

A diffusion Monte Carlo (DMC) method for the relativistic zeroth-order regular approximation (ZORA) is proposed. In this scheme, a novel approximate Green's function is derived for the spin-free ZORA Hamiltonian. Several numerical tests on atoms and small molecules showed that by combining with the relativistic cusp-correction scheme, the present approach can include both relativistic and electron-correlation effects simultaneously. The correlation energies recovered by the ZORA-DMC method are comparable with the nonrelativistic DMC results and superior to the coupled cluster singles and doubles with perturbative triples correction results when the correlation-consistent polarized valence triple-zeta Douglas-Kroll basis set is used. For the heavier CuH molecule, the ZORA-DMC estimation of its dissociation energy agrees with the experimental value within the error bar. PMID:23083144

Nakatsuka, Yutaka; Nakajima, Takahito

2012-10-21

178

NASA Astrophysics Data System (ADS)

A diffusion Monte Carlo (DMC) method for the relativistic zeroth-order regular approximation (ZORA) is proposed. In this scheme, a novel approximate Green's function is derived for the spin-free ZORA Hamiltonian. Several numerical tests on atoms and small molecules showed that by combining with the relativistic cusp-correction scheme, the present approach can include both relativistic and electron-correlation effects simultaneously. The correlation energies recovered by the ZORA-DMC method are comparable with the nonrelativistic DMC results and superior to the coupled cluster singles and doubles with perturbative triples correction results when the correlation-consistent polarized valence triple-zeta Douglas-Kroll basis set is used. For the heavier CuH molecule, the ZORA-DMC estimation of its dissociation energy agrees with the experimental value within the error bar.

Nakatsuka, Yutaka; Nakajima, Takahito

2012-10-01

179

NASA Astrophysics Data System (ADS)

Noninvasive diagnosis in medicine has attracted considerable attention in recent years. Several methods are already available for imaging biological tissue, such as X-ray computerized tomography, magnetic resonance imaging, and ultrasound imaging, but each of these methods has its own disadvantages. Optical tomography, which uses NIR light, is one of the emerging methods in the field of medical imaging because it is non-invasive in nature. The main difficulty in using light for imaging tissue is that light is highly scattered inside tissue, so its propagation is not confined to straight lines as is the case with X-ray tomography. The need therefore arises to understand how light propagates in tissue. Among the several methods for modeling light interaction with tissue, the Monte Carlo method is a simple technique for simulating the passage of light through tissue; its only drawback is its high computational time. Once the data are obtained using Monte Carlo simulation, they need to be inverted to reconstruct the tissue image. Standard reconstruction methods exist, such as the algebraic reconstruction method and the filtered backprojection method, but these cannot be used as such when light is the probing radiation because it is highly scattered inside the tissue. The standard filtered backprojection method has been modified so that the zigzag path of the photons is taken into consideration while backprojecting the data. This is achieved by dividing the tissue domain into a square grid and storing the average path traversed in each grid element. It has been observed that the reconstruction obtained using this modification is much better than the result of the standard filtered backprojection method.

Aggarwal, Ashwani; Vasu, Ram M.

2003-07-01

180

Order-N cluster Monte Carlo method for spin systems with long-range interactions

An efficient O(N) cluster Monte Carlo method for Ising models with long-range interactions is presented. Our novel algorithm does not introduce any cutoff for the interaction range and thus strictly fulfills detailed balance. The realized stochastic dynamics is equivalent to that of the conventional Swendsen-Wang algorithm, which requires O(N{sup 2}) operations per Monte Carlo sweep if applied to long-range interacting models. In addition, it is shown that the total energy and the specific heat can also be measured in O(N) time. We demonstrate the efficiency of our algorithm over the conventional method and the O(NlogN) algorithm by Luijten and Bloete. We also apply our algorithm to the classical and quantum Ising chains with inverse-square ferromagnetic interactions, and confirm with high accuracy that a Kosterlitz-Thouless phase transition, associated with a universal jump in the magnetization, occurs in both cases.
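For reference, a conventional Swendsen-Wang sweep, whose dynamics the O(N) algorithm reproduces without enumerating all bonds, can be sketched for a short-range Ising chain. The chain length, temperature, and nearest-neighbor coupling below are arbitrary illustrative choices:

```python
import math, random

random.seed(3)

def find(parent, i):
    """Union-find root lookup with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def swendsen_wang_sweep(spins, beta, J=1.0, rng=random):
    """One Swendsen-Wang cluster update for a periodic Ising chain.

    Each bond between parallel spins is activated with probability
    1 - exp(-2*beta*J); the clusters of the resulting bond graph are
    then flipped independently with probability 1/2."""
    n = len(spins)
    p_bond = 1.0 - math.exp(-2.0 * beta * J)
    parent = list(range(n))
    for i in range(n):
        j = (i + 1) % n
        if spins[i] == spins[j] and rng.random() < p_bond:
            parent[find(parent, i)] = find(parent, j)
    flip = {}
    for i in range(n):
        root = find(parent, i)
        if root not in flip:
            flip[root] = rng.random() < 0.5
        if flip[root]:
            spins[i] = -spins[i]
    return spins

spins = [1] * 32
for _ in range(100):
    swendsen_wang_sweep(spins, beta=0.5)
```

With long-range couplings the bond loop above becomes O(N^2) per sweep; the paper's contribution is generating the same activated-bond distribution in O(N) time.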

Fukui, Kouki [Department of Applied Physics, University of Tokyo, 7-3-1 Hongo, Tokyo 113-8656 (Japan); Todo, Synge [Department of Applied Physics, University of Tokyo, 7-3-1 Hongo, Tokyo 113-8656 (Japan); CREST, Japan Science and Technology Agency, Kawaguchi 332-0012 (Japan)], E-mail: wistaria@ap.t.u-tokyo.ac.jp

2009-04-20

181

Mesh-based Monte Carlo method in time-domain widefield fluorescence molecular tomography.

We evaluated the potential of mesh-based Monte Carlo (MC) method for widefield time-gated fluorescence molecular tomography, aiming to improve accuracy in both shape discretization and photon transport modeling in preclinical settings. An optimized software platform was developed utilizing multithreading and distributed parallel computing to achieve efficient calculation. We validated the proposed algorithm and software by both simulations and in vivo studies. The results establish that the optimized mesh-based Monte Carlo (mMC) method is a computationally efficient solution for optical tomography studies in terms of both calculation time and memory utilization. The open source code, as part of a new release of mMC, is publicly available at http://mcx.sourceforge.net/mmc/. PMID:23224008

Chen, Jin; Fang, Qianqian; Intes, Xavier

2012-10-01

182

Mesh-based Monte Carlo method in time-domain widefield fluorescence molecular tomography

Abstract. We evaluated the potential of mesh-based Monte Carlo (MC) method for widefield time-gated fluorescence molecular tomography, aiming to improve accuracy in both shape discretization and photon transport modeling in preclinical settings. An optimized software platform was developed utilizing multithreading and distributed parallel computing to achieve efficient calculation. We validated the proposed algorithm and software by both simulations and in vivo studies. The results establish that the optimized mesh-based Monte Carlo (mMC) method is a computationally efficient solution for optical tomography studies in terms of both calculation time and memory utilization. The open source code, as part of a new release of mMC, is publicly available at http://mcx.sourceforge.net/mmc/.

Chen, Jin; Fang, Qianqian; Intes, Xavier

2012-01-01

183

Implicit Monte Carlo methods and non-equilibrium Marshak wave radiative transport

Two enhancements to the Fleck implicit Monte Carlo method for radiative transport are described, for use in transparent and opaque media respectively. The first introduces a spectral mean cross section, which applies to pseudoscattering in transparent regions with a high frequency incident spectrum. The second provides a simple Monte Carlo random walk method for opaque regions, without the need for a supplementary diffusion equation formulation. A time-dependent transport Marshak wave problem of radiative transfer, in which a non-equilibrium condition exists between the radiation and material energy fields, is then solved. These results are compared to published benchmark solutions and to new discrete ordinate S-N results, for both spatially integrated radiation-material energies versus time and to new spatially dependent temperature profiles. Multigroup opacities, which are independent of both temperature and frequency, are used in addition to a material specific heat which is proportional to the cube of the temperature.

Lynch, J.E.

1985-01-01

184

Mesh-based Monte Carlo method in time-domain widefield fluorescence molecular tomography

NASA Astrophysics Data System (ADS)

We evaluated the potential of mesh-based Monte Carlo (MC) method for widefield time-gated fluorescence molecular tomography, aiming to improve accuracy in both shape discretization and photon transport modeling in preclinical settings. An optimized software platform was developed utilizing multithreading and distributed parallel computing to achieve efficient calculation. We validated the proposed algorithm and software by both simulations and in vivo studies. The results establish that the optimized mesh-based Monte Carlo (mMC) method is a computationally efficient solution for optical tomography studies in terms of both calculation time and memory utilization. The open source code, as part of a new release of mMC, is publicly available at

Chen, Jin; Fang, Qianqian; Intes, Xavier

2012-10-01

185

A Monte-Carlo method for simulating linear polarization variations in clumpy massive-star winds

NASA Astrophysics Data System (ADS)

I present a Monte-Carlo method for simulating the time-varying linear continuum polarization arising from electron scattering in clumpy winds, with a particular focus on massive stars. The method uses a novel semi-analytic approach to inverting the optical depth equation. Comparison against a single-scattering method reveals the latter over-predicts the mean polarization of optically thick winds, even when individual clumps are optically thin; therefore, single-scattering methods in general should be avoided when interpreting polarimetry of such winds.
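The "inversion of the optical depth equation" at the heart of such scattering codes amounts to drawing tau = -ln(xi) and solving tau(s) = tau for the free path s. A numerical sketch for a hypothetical 1/r^2 electron density along a radial ray (the paper's semi-analytic inversion is specific to its clumpy-wind model) is:

```python
import math, random

random.seed(4)

def optical_depth(s, r0=1.0, sigma_n0=5.0):
    """tau(s) along a radial ray through a 1/r^2 electron density.

    Integrating sigma * n0 * (r0/r)^2 from r0 to r0+s gives the closed
    form below; sigma_n0 lumps the cross section and base density."""
    return sigma_n0 * r0 * (1.0 - r0 / (r0 + s))

def sample_free_path(rng=random, s_max=1e6, tol=1e-10):
    """Draw tau = -ln(xi) and invert tau(s) = tau by bisection.

    If the sampled tau exceeds the total optical depth of the ray, the
    photon escapes without scattering (returns None)."""
    tau = -math.log(rng.random())
    if tau >= optical_depth(s_max):
        return None  # photon leaves the wind unscattered
    lo, hi = 0.0, s_max
    while hi - lo > tol * (1.0 + hi):
        mid = 0.5 * (lo + hi)
        if optical_depth(mid) < tau:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

paths = [sample_free_path() for _ in range(1000)]
escaped = sum(p is None for p in paths)
```

The bisection stands in for the semi-analytic inversion of the paper; for this density profile the integral is in fact invertible in closed form, which is the kind of shortcut a semi-analytic approach exploits.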

Townsend, Rich

2012-05-01

186

Monte Carlo methods for the nuclear shell model and their applications

NASA Astrophysics Data System (ADS)

Shell model quantum Monte Carlo methods are applied to calculate a variety of nuclear properties, in particular for nuclei in the iron region. The methods are based on a decomposition of the many-body propagator as a superposition of one-body propagators of non-interacting nucleons moving in fluctuating auxiliary fields, and can be applied in very large model spaces. Various projection techniques are developed to study the dependence of nuclear properties on good quantum numbers such as parity and spin. The particle-number reprojection method enables us to calculate thermal observables of several nuclei using the Monte Carlo sampling for one nucleus only. Nuclear level densities calculated by this method agree remarkably well with experimental data without any adjustable parameters. Parity-projected Monte Carlo calculations indicate a significant parity dependence of level densities even at the neutron-resonance energies. A simple quasi-particle model is developed to explain this parity dependence. Spin distributions of level densities are studied using the spin projection technique. The Monte Carlo results are compared with the spin-cutoff model and used to extract an energy-dependent moment of inertia. The strong suppression in the moment of inertia of even-even nuclei correlates with pairing effects and is explained by a cranking model. Thermal signatures of the pairing transition are found in the heat capacity of even-even neutron-rich iron isotopes. New commutator techniques are developed to calculate low-order moments of strength functions, and are applied in the study of electromagnetic strength functions in iron-region nuclei.

Liu, Shichang

2001-09-01

187

Variational method for estimating the rate of convergence of Markov-chain Monte Carlo algorithms

We demonstrate the use of a variational method to determine a quantitative lower bound on the rate of convergence of Markov Chain Monte Carlo (MCMC) algorithms as a function of the target density and proposal density. The bound relies on approximating the second largest eigenvalue in the spectrum of the MCMC operator using a variational principle and the approach is
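The quantity being bounded, the second-largest eigenvalue (in modulus) of the MCMC transition operator, can be computed directly for a toy discrete chain. The two-state Metropolis kernel below is an illustrative example, not the paper's variational bound:

```python
# Two-state Metropolis chain: target pi = (0.3, 0.7), proposal = "flip".
pi = (0.3, 0.7)
p01 = min(1.0, pi[1] / pi[0])   # acceptance for move 0 -> 1
p10 = min(1.0, pi[0] / pi[1])   # acceptance for move 1 -> 0
P = [[1.0 - p01, p01],
     [p10, 1.0 - p10]]

def second_eigenvalue_modulus(P, pi, iters=50):
    """Deflated power iteration on the transition matrix P.

    P has eigenvalue 1 with right eigenvector (1, ..., 1) and left
    eigenvector pi; projecting out the stationary component and
    power-iterating yields |lambda_2|, which governs the chain's
    geometric rate of convergence to equilibrium."""
    v = [1.0, -1.0]
    lam = 0.0
    for _ in range(iters):
        # Deflate the stationary direction: v <- v - (pi . v) * 1.
        c = sum(p * x for p, x in zip(pi, v))
        v = [x - c for x in v]
        w = [sum(P[i][j] * v[j] for j in range(2)) for i in range(2)]
        lam = max(abs(x) for x in w) / max(abs(x) for x in v)
        v = w
    return lam

lam2 = second_eigenvalue_modulus(P, pi)
# For a two-state chain, lambda_2 = 1 - p01 - p10 exactly (= -3/7 here).
```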

Fergal P. Casey; Ryan N. Gutenkunst; Christopher R. Myers; James P. Sethna

2008-01-01

188

Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods

Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter, and scatter plus primary) were simulated for 99mTc and 201Tl for numerical chest phantoms. Data were reconstructed with an ordered-subset ML-EM algorithm including attenuation
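The TEW estimate against which TDCS is compared has a simple closed form per pixel; the counts and window widths below are illustrative values, not the study's data:

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple-energy-window (TEW) scatter estimate for one pixel.

    Counts in two narrow windows flanking the photopeak are treated as
    samples of the scatter spectrum at the photopeak edges; the scatter
    inside the photopeak window is approximated by the area of the
    trapezoid they define."""
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

# Example: 140 keV photopeak (28 keV window) with 2 keV side windows.
primary_plus_scatter = 1000.0
scatter = tew_scatter_estimate(c_lower=40.0, c_upper=10.0,
                               w_lower=2.0, w_upper=2.0, w_peak=28.0)
primary = primary_plus_scatter - scatter
```

Because the estimate is formed pixel-by-pixel from narrow windows with few counts, it is noisy, which is exactly the noise-property trade-off the abstract evaluates.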

Y. Narita; H. Iida; S. Eberl; T. Nakamura

1996-01-01

189

Markov Chain Monte Carlo MIMO Detection Methods for High Signal-to-Noise Ratio Regimes

Markov Chain Monte Carlo methods have recently been applied as front-end detectors in multiple- input multiple-output (MIMO) communication systems. Moreover, the near capacity behavior of such detectors in low signal-to-noise ratio (SNR) regimes have been demon- strated through computer simulations. However, it has also been found that the MCMC MIMO detectors de- grade in high SNR regimes. This paper investigates

Xuehong Mao; Peiman Amini; Behrouz Farhang-boroujeny

2007-01-01

190

After successful implementation in commercial treatment planning systems (TPS) for high energy electron beams, Monte Carlo dose calculation algorithms are also becoming commercially available for high energy photon beams. On the other hand, advanced kernel based methods have been in clinical use for many years. The aim of this study was to compare the accuracy of both types of dose calculation

I. Fotina; B. Kroupa; D. Georg

191

X-ray buildup factors of lead in broad beam geometry for energies from 15 to 150 keV are determined using the general purpose Monte Carlo N-particle radiation transport computer code (MCNP4C). The obtained buildup factors data are fitted to a modified three parameter Archer et al. model for ease in calculating the broad beam transmission with computer at any tube potentials/filters combinations in diagnostic energies range. An example for their use to compute the broad beam transmission at 70, 100, 120, and 140 kVp is given. The calculated broad beam transmission is compared to data derived from literature, presenting good agreement. Therefore, the combination of the buildup factors data as determined and a mathematical model to generate x-ray spectra provide a computationally based solution to broad beam transmission for lead barriers in shielding x-ray facilities.
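The three-parameter Archer et al. model referred to above gives broad-beam transmission in closed form. The parameter values below are made up for illustration; they are not the fitted lead coefficients from the study:

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Broad-beam transmission through barrier thickness x,
    T(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]^(-1/gamma),
    the Archer et al. three-parameter model.

    T(0) = 1 by construction, and T decays roughly exponentially with
    rate alpha at large x; beta and gamma shape the low-thickness
    region where buildup matters most."""
    r = beta / alpha
    return ((1.0 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

# Hypothetical parameters for a diagnostic-energy beam (illustrative only).
t0 = archer_transmission(0.0, alpha=2.0, beta=15.0, gamma=0.6)
t1 = archer_transmission(1.0, alpha=2.0, beta=15.0, gamma=0.6)
```

Inverting this expression for x gives the barrier thickness needed for a target transmission, which is how such fits are used in shielding design.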

Kharrati, Hedi; Agrebi, Amel; Karaoui, Mohamed-Karim [Ecole Superieure des Sciences et Techniques de la Sante de Monastir, Avenue Avicenne 5000 Monastir (Tunisia); Faculte des Sciences de Monastir (Tunisia)

2007-04-15

192

A massively parallel implementation of the worldline quantum Monte Carlo method

The worldline quantum Monte Carlo method is a computational technique for studying the properties of many-electron and quantum-spin systems. In this paper, our efforts in developing an efficient implementation of this method for the massively parallel Connection Machine CM-2 are described. Why one must look beyond the obvious parallelism in the method in order to reduce interprocessor communication and increase processor utilization, and how these goals may be achieved using a plaquette-based data representation are discussed. Performance statistics for the implementation and sample calculations for the spinless fermion model are also presented.

Somsky, W.R.; Gubernatis, J.E. (Advanced Computing Laboratory, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States))

1992-03-01

193

Analysis of single Monte Carlo methods for prediction of reflectance from turbid media.

Starting from the radiative transport equation we derive the scaling relationships that enable a single Monte Carlo (MC) simulation to predict the spatially- and temporally-resolved reflectance from homogeneous semi-infinite media with arbitrary scattering and absorption coefficients. This derivation shows that a rigorous application of this single Monte Carlo (sMC) approach requires the rescaling to be done individually for each photon biography. We examine the accuracy of the sMC method when processing simulations on an individual photon basis and also demonstrate the use of adaptive binning and interpolation using non-uniform rational B-splines (NURBS) to achieve order of magnitude reductions in the relative error as compared to the use of uniform binning and linear interpolation. This improved implementation for sMC simulation serves as a fast and accurate solver to address both forward and inverse problems and is available for use at http://www.virtualphotonics.org/. PMID:21996904
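A minimal sketch of the per-photon rescaling idea, under simplifying assumptions: a single baseline simulation is run with a reference scattering coefficient and no absorption, and each recorded photon biography is rescaled to new optical properties using the similarity of the transport equation (the exact scaling relations derived in the paper are more general than this sketch):

```python
import math

def rescale_photon(r_exit, path_len, mus_base, mus, mua):
    """Rescale one baseline-photon record to new optical properties.

    Lengths in the baseline simulation (run with scattering coefficient
    mus_base and zero absorption) are scaled by mus_base/mus, and
    absorption is reintroduced as a Beer-Lambert weight on the scaled
    path length. Applying this per photon biography, rather than to
    binned averages, is the rigorous form the abstract emphasizes."""
    scale = mus_base / mus
    r_new = r_exit * scale          # rescaled exit radius
    l_new = path_len * scale        # rescaled total path length
    weight = math.exp(-mua * l_new) # Beer-Lambert absorption weight
    return r_new, l_new, weight

# Identity check: same mus and zero absorption leave the record unchanged.
r, l, w = rescale_photon(0.3, 2.0, mus_base=10.0, mus=10.0, mua=0.0)
```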

Martinelli, Michele; Gardner, Adam; Cuccia, David; Hayakawa, Carole; Spanier, Jerome; Venugopalan, Vasan

2011-09-26

194

A system utilising radiation transport codes has been developed to derive accurate dose distributions in a human body for radiological accidents. A suitable model is essential for a numerical analysis. Therefore, two tools were developed to set up a 'problem-dependent' input file, defining a radiation source and an exposed person, to simulate the radiation transport in an accident with the Monte Carlo calculation codes MCNP and MCNPX. For both tools, the necessary resources are defined by a dialogue method on a commonly used personal computer. The tools prepare human body and source models described in the input file format of the employed Monte Carlo codes. The tools were validated for dose assessment by comparison with a past criticality accident and a hypothesized exposure. PMID:17510203

Takahashi, F; Endo, A

2007-05-17

195

In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided.

Hey, Jody; Nielsen, Rasmus

2007-01-01

196

A general method for spatially coarse-graining Metropolis Monte Carlo simulations onto a lattice

NASA Astrophysics Data System (ADS)

A recently introduced method for coarse-graining standard continuous Metropolis Monte Carlo simulations of atomic or molecular fluids onto a rigid lattice of variable scale [X. Liu, W. D. Seider, and T. Sinno, Phys. Rev. E 86, 026708 (2012)] is further analyzed and extended. The coarse-grained Metropolis Monte Carlo technique is demonstrated to be highly consistent with the underlying full-resolution problem using a series of detailed comparisons, including vapor-liquid equilibrium phase envelopes and spatial density distributions for the Lennard-Jones argon and simple point charge water models. In addition, the principal computational bottleneck associated with computing a coarse-grained interaction function for evolving particle positions on the discretized domain is addressed by the introduction of new closure approximations. In particular, it is shown that the coarse-grained potential, which is generally a function of temperature and coarse-graining level, can be computed at multiple temperatures and scales using a single set of free energy calculations. The computational performance of the method relative to standard Monte Carlo simulation is also discussed.

Liu, Xiao; Seider, Warren D.; Sinno, Talid

2013-03-01

197

A general method for spatially coarse-graining Metropolis Monte Carlo simulations onto a lattice.

A recently introduced method for coarse-graining standard continuous Metropolis Monte Carlo simulations of atomic or molecular fluids onto a rigid lattice of variable scale [X. Liu, W. D. Seider, and T. Sinno, Phys. Rev. E 86, 026708 (2012)] is further analyzed and extended. The coarse-grained Metropolis Monte Carlo technique is demonstrated to be highly consistent with the underlying full-resolution problem using a series of detailed comparisons, including vapor-liquid equilibrium phase envelopes and spatial density distributions for the Lennard-Jones argon and simple point charge water models. In addition, the principal computational bottleneck associated with computing a coarse-grained interaction function for evolving particle positions on the discretized domain is addressed by the introduction of new closure approximations. In particular, it is shown that the coarse-grained potential, which is generally a function of temperature and coarse-graining level, can be computed at multiple temperatures and scales using a single set of free energy calculations. The computational performance of the method relative to standard Monte Carlo simulation is also discussed. PMID:23534624

Liu, Xiao; Seider, Warren D; Sinno, Talid

2013-03-21

198

Monte Carlo methods of coupled neutron/photon transport are being used in the design of filtered beams for Neutron Capture Therapy (NCT). This method of beam analysis provides segregation of each individual dose component, and thereby facilitates beam optimization. The Monte Carlo method is discussed in some detail in relation to NCT epithermal beam design. Ideal neutron beams (i.e., plane-wave monoenergetic

S. D. Clement; J. R. Choi; R. G. Zamenhof; J. C. Yanch; O. K. Harling

1990-01-01

199

A List Referring Monte-Carlo Method for Lattice Glass Models

NASA Astrophysics Data System (ADS)

We present an efficient Monte-Carlo method for lattice glass models which are characterized by hard constraint conditions. The basic idea of the method is similar to that of the N-fold way method. By using a list of sites into which we can insert a particle, we avoid trying a useless transition which is forbidden by the constraint conditions. We applied the present method to a lattice glass model proposed by Biroli and Mézard. We first evaluated the efficiency of the method through measurements of the autocorrelation function of particle configurations. As a result, we found that the efficiency is much higher than that of the standard Monte-Carlo method. We also compared the efficiency of the present method with that of the N-fold way method in detail. We next examined how the efficiency of extended ensemble methods such as the replica exchange method and the Wang-Landau method is influenced by the choice of the local update method. The results show that the efficiency is considerably improved by the use of efficient local update methods. For example, when the number of sites N_site is 1024, the ergodic time τ_E of the replica exchange method in the grand-canonical ensemble, which is the average round-trip time of a replica in chemical-potential space, with the present local update method is more than 10^2 times shorter than that with the standard local update method. This result shows that the efficient local update method is quite important to make extended ensemble methods more effective.
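A minimal sketch of the list idea, using a 1D ring with nearest-neighbour exclusion as a stand-in for the Biroli-Mézard constraints (an assumption for illustration, not the authors' model): insertions are proposed only from a maintained list of admissible sites, so a transition forbidden by the constraints is never attempted, and the asymmetric proposal is corrected in the Metropolis acceptance ratio.

```python
import math
import random

def allowed_sites(occ):
    """Sites where insertion is permitted by the hard constraint
    (here: nearest-neighbour exclusion on a 1D ring)."""
    L = len(occ)
    return [i for i in range(L)
            if not occ[i] and not occ[(i - 1) % L] and not occ[(i + 1) % L]]

def run_gc(L, beta_mu, nsteps, seed=0):
    """Grand-canonical MC proposing insertions only from the allowed list.
    Returns the mean density over the second half of the run."""
    rng = random.Random(seed)
    occ = [False] * L
    dens, nmeas = 0.0, 0
    for step in range(nsteps):
        A = allowed_sites(occ)
        particles = [i for i in range(L) if occ[i]]
        N = len(particles)
        if rng.random() < 0.5:                      # insertion attempt
            if A:
                s = rng.choice(A)
                # forward proposal 1/|A|, reverse (deletion) 1/(N+1)
                if rng.random() < min(1.0, math.exp(beta_mu) * len(A) / (N + 1)):
                    occ[s] = True
        elif N > 0:                                 # deletion attempt
            s = rng.choice(particles)
            occ[s] = False
            a_after = len(allowed_sites(occ))       # |A| of the trial state
            if rng.random() >= min(1.0, math.exp(-beta_mu) * N / a_after):
                occ[s] = True                       # rejected: restore
        if step >= nsteps // 2:                     # measure after burn-in
            dens += sum(occ) / L
            nmeas += 1
    return dens / nmeas

d_hi = run_gc(40, 3.0, 20000)           # high chemical potential: dense
d_lo = run_gc(40, -1.0, 20000, seed=1)  # low chemical potential: dilute
```

An efficient implementation would update the allowed list incrementally after each accepted move rather than rebuilding it, which is where the method's speed-up over blind proposals comes from.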

Sasaki, Munetaka; Hukushima, Koji

2013-09-01

200

Quantum Monte Carlo Method using Phase-Free Random Walks with Slater Determinants

NASA Astrophysics Data System (ADS)

We develop a quantum Monte Carlo method for many fermions using random walks in the space of Slater determinants. An approximate approach is formulated with a trial wave function |Ψ_T⟩ to control the phase problem. Using a plane-wave basis and nonlocal pseudopotentials, we apply the method to Be, Si, and P atoms and dimers, and to bulk Si supercells. Single-determinant wave functions from density functional theory calculations were used as |Ψ_T⟩ with no additional optimization. The calculated binding energies of dimers and cohesive energy of bulk Si are in excellent agreement with experiments and are comparable to the best existing theoretical results.

Zhang, Shiwei; Krakauer, Henry

2003-04-01

201

NASA Astrophysics Data System (ADS)

A particle approach using the Direct Simulation Monte Carlo (DSMC) method is used to solve the problem of blast impact with structures. A novel approach to model the solid boundary condition for particle methods is presented. The solver is validated against an analytical solution of the Riemann shocktube problem and against experiments on interaction of a planar shock with a square cavity. Blast impact simulations are performed for two model shapes, a box and an I-shaped beam, assuming that the solid body does not deform. The solver uses domain decomposition technique to run in parallel. The parallel performance of the solver on two Beowulf clusters is also presented.

Sharma, Anupam; Long, Lyle N.

2004-10-01

202

Dynamical properties from quantum Monte Carlo by the Maximum Entropy Method

An outstanding problem in the simulation of condensed matter phenomena is how to obtain dynamical information. We consider the numerical analytic continuation of imaginary time Quantum Monte Carlo data to obtain real frequency spectral functions. We suggest an image reconstruction approach which has been widely applied to data analysis in experimental research, the Maximum Entropy Method (MaxEnt). We report encouraging preliminary results for the Fano-Anderson model of an impurity state in a continuum. The incorporation of additional prior information, such as sum rules and asymptotic behavior, can be expected to significantly improve results. We also compare MaxEnt to alternative methods.

Silver, R.N.; Sivia, D.S.; Gubernatis, J.E.

1989-01-01

203

Extrapolation method in the Monte Carlo Shell Model and its applications

We demonstrate how the energy-variance extrapolation method works using the sequence of the approximated wave functions obtained by the Monte Carlo Shell Model (MCSM), taking ⁵⁶Ni with pf-shell as an example. The extrapolation method is shown to work well even in the case that the MCSM shows slow convergence, such as ⁷²Ge with f5pg9-shell. The structure of ⁷²Se is also studied including the discussion of the shape-coexistence phenomenon.
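The extrapolation itself is a straight-line fit: for a sequence of improving approximate wave functions the energy is asymptotically linear in the energy variance, and the intercept at zero variance estimates the exact energy. A sketch with invented, perfectly linear numbers (not the ⁵⁶Ni data):

```python
def linfit(xs, ys):
    """Ordinary least squares y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def extrapolate_energy(energies, variances):
    """Energy-variance extrapolation: fit E against the energy variance of
    the approximate wave function and return the zero-variance intercept."""
    slope, intercept = linfit(variances, energies)
    return intercept

# invented illustrative sequence: better approximations -> smaller variance
var = [0.40, 0.25, 0.12, 0.05]
E = [-203.1, -203.4, -203.66, -203.80]
E0 = extrapolate_energy(E, var)   # estimate of the exact energy
```

In practice the MCSM points carry statistical error bars, so a weighted fit over the nearly linear regime is used rather than a plain least-squares line.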

Shimizu, Noritaka; Abe, Takashi [Department of Physics, University of Tokyo, Hongo, Tokyo 113-0033 (Japan); Utsuno, Yutaka [Advanced Science Research Center, Japan Atomic Energy Agency, Tokai, Ibaraki 319-1195 (Japan); Mizusaki, Takahiro [Institute of Natural Sciences, Senshu University, Tokyo, 101-8425 (Japan); Otsuka, Takaharu [Department of Physics, University of Tokyo, Hongo, Tokyo 113-0033 (Japan); Center for Nuclear Study, University of Tokyo, Hongo, Tokyo 113-0033 (Japan); National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, Michigan (United States); Honma, Michio [Center for Mathematical Sciences, Aizu University, Aizu-Wakamatsu, Fukushima 965-8580 (Japan)

2011-05-06

204

This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux,

John C Wagner; Douglas E. Peplow; Scott W Mosher; Thomas M Evans

2010-01-01

205

This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux,

John C Wagner; Douglas E. Peplow; Scott W Mosher; Thomas M Evans

2011-01-01

206

Phase-free quantum Monte Carlo method: random walks using general basis sets

NASA Astrophysics Data System (ADS)

Fermion quantum Monte Carlo (QMC) methods that work in a general basis space may be more effective for some problems than the traditional diffusion QMC method. Projection of the ground state energy is achieved by random walks in the space of Slater determinants whose one-particle orbitals are expressed in terms of the chosen basis set. A complication is that the determinants will in general acquire complex phases. This is a consequence of ground state Monte Carlo projection using the Hubbard-Stratonovich transformation of the two-body interaction. To control the resulting ``sign'' decay, we describe a method we have recently introduced for the propagation of phaseless determinants. The approximation relies on importance sampling with a trial wave function. The approximation has features in common with diffusion MC fixed node and lattice-model constrained path methods. Using a plane-wave basis and non-local pseudopotentials, we apply the method to Si atom, dimer, and 2, 16, 54 atom (216 electrons) bulk supercells. Single Slater determinant wave functions from density functional theory calculations were used as |Ψ_T⟩ with no additional optimization. The calculated binding energy of Si2 and cohesive energy of bulk Si are in excellent agreement with experiments and are comparable to the best existing theoretical results.

Krakauer, Henry; Zhang, Shiwei

2003-08-01

207

NASA Astrophysics Data System (ADS)

Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques (e.g., Ellsworth et al., 1999). In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means (e.g., NIST/SEMATECH, 2006). For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
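A hedged sketch of the repeated-draw approach, assuming a lognormal recurrence PDF and an invented five-interval record (the paper considers other time-dependent PDFs as well): wide ranges of (mean recurrence, coefficient of variation) are drawn, each pair is scored by its likelihood against the short series, and the ranked list quantifies parameter uncertainty.

```python
import math
import random

def lognormal_loglike(intervals, mu, sigma):
    """Log-likelihood of the observed recurrence intervals under a
    lognormal PDF with log-mean mu and log-standard-deviation sigma."""
    return sum(-math.log(t * sigma * math.sqrt(2.0 * math.pi))
               - (math.log(t) - mu) ** 2 / (2.0 * sigma ** 2)
               for t in intervals)

def rank_recurrence_params(intervals, ndraws=20000, seed=0):
    """Repeated Monte Carlo draws over wide ranges of (mean recurrence,
    coefficient of variation), scored against the short series;
    returns (log-likelihood, mean, cv) tuples ranked best-first."""
    rng = random.Random(seed)
    scored = []
    for _ in range(ndraws):
        mean = rng.uniform(0.2 * min(intervals), 3.0 * max(intervals))
        cv = rng.uniform(0.05, 1.5)
        sigma2 = math.log(1.0 + cv * cv)     # (mean, cv) -> (mu, sigma)
        mu = math.log(mean) - 0.5 * sigma2
        scored.append((lognormal_loglike(intervals, mu, math.sqrt(sigma2)),
                       mean, cv))
    scored.sort(reverse=True)
    return scored

# hypothetical five-event paleoseismic record (intervals in years)
intervals = [132.0, 210.0, 95.0, 170.0, 148.0]
_, best_mean, best_cv = rank_recurrence_params(intervals)[0]
```

Keeping the whole ranked list, rather than only the best pair, is what lets the distribution of plausible parameters feed into hazard-calculation uncertainties.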

Parsons, Tom

2008-03-01

208

The quantum Monte Carlo method: electron correlation from random numbers (abstract only)

NASA Astrophysics Data System (ADS)

The fixed-node diffusion quantum Monte Carlo (DMC) method is the most accurate method known for calculating the energies of large many-particle quantum systems. The key element of the method is the development of accurate trial many-body wavefunctions which control the statistical efficiency of the calculations and the accuracy obtained. Accurate wavefunctions can be obtained by building correlation effects on top of mean field descriptions such as density functional theory. The wavefunctions can be improved by introducing multi-determinants, pairing functions, and backflow transformations. The calculations are expensive, but the method scales well with system size and calculations on 1000 particles are possible. Some recent applications of the DMC method to atoms, molecules and solids will be presented.

Needs, Richard

2008-02-01

209

Simulation on doping dependent phase transition in MnSi by Monte Carlo method

NASA Astrophysics Data System (ADS)

Recently, the skyrmion lattice has been found in the A phase of the itinerant helimagnet MnSi by small angle neutron scattering and the magnetic analogue of blue phases has been reported to explain a number of puzzling features of MnSi. Here we use a different approach based on Monte Carlo methods, showing the thermal behavior around transition temperatures in doped systems under the simplest Dzyaloshinsky-Moriya nearest-neighbor interactions. Interestingly, the transition temperature decreases with increasing doping concentrations, which is consistent with the experimental observations. We also show how the topological order parameter changes with temperature and its relation with the specific heat and thermal fluctuations.

Yang, Jhih-An; Reznik, Dmitry

2013-03-01

210

NASA Astrophysics Data System (ADS)

We simulate resistance distributions of multilevel oxide bipolar resistive random access memories (ReRAMs) using a physical model with the Monte Carlo method. The model is used to explain the frequently observed proportionality between the resistance distributions and the multilevel program voltages. By comparison with the experimental results obtained with TaOx/Ta2O5 bipolar ReRAM, the model is verified to be consistent with experiments both qualitatively and quantitatively. We demonstrate that the resistance distribution responses are essentially determined by the ion migration barrier in the resistance-varying thin oxide layer, which indicates that it is a nearly intrinsic material property.

Hur, Ji-Hyun; Ryul Lee, Seung; Lee, Myoung-Jae; Cho, Seong-Ho; Park, Youngsoo

2013-09-01

211

A Monte Carlo method for chemical potential determination in single and multiple occupancy crystals

NASA Astrophysics Data System (ADS)

We describe a Monte Carlo scheme which, in a single simulation, yields a measurement of the chemical potential of a crystalline solid. Within the isobaric ensemble, this immediately provides an estimate of the system free energy, with statistical uncertainties that are determined precisely and transparently. An extension to multiple occupancy (cluster) solids permits the direct determination of the cluster chemical potential and hence the equilibrium conditions. We apply the method to a model exhibiting cluster crystalline phases, where we find evidence for an infinite cascade of critical points terminating coexistence between crystals of differing site occupancies.

Wilding, Nigel B.; Sollich, Peter

2013-01-01

212

The applicability of certain Monte Carlo methods to the analysis of interacting polymers

The authors consider polymers, modeled as self-avoiding walks with interactions on a hexagonal lattice, and examine the applicability of certain Monte Carlo methods for estimating their mean properties at equilibrium. Specifically, the authors use the pivot algorithm of Madras and Sokal with Metropolis rejection to locate the phase transition, which is known to occur at β_crit ≈ 0.99, and to recalculate the known value of the critical exponent ν ≈ 0.58 of the system for β = β_crit. Although the pivot-Metropolis algorithm works well for short walks (N < 300), for larger N the Metropolis criterion combined with the self-avoidance constraint leads to an unacceptably small acceptance fraction. In addition, the algorithm becomes effectively non-ergodic, getting trapped in valleys whose centers are local energy minima in phase space, leading to convergence towards different values of ν. The authors use a variety of tools, e.g. entropy estimation and histograms, to improve the results for large N, but they are only of limited effectiveness. Their estimate of β_crit using smaller values of N is 1.01 ± 0.01, and the estimate for ν at this value of β is 0.59 ± 0.005. They conclude that even a seemingly simple system and a Monte Carlo algorithm which satisfies, in principle, ergodicity and detailed balance conditions, can in practice fail to sample phase space accurately and thus not allow accurate estimations of thermal averages. This should serve as a warning to people who use Monte Carlo methods in complicated polymer folding calculations. The structure of the phase space combined with the algorithm itself can lead to surprising behavior, and simply increasing the number of samples in the calculation does not necessarily lead to more accurate results.
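The pivot move itself can be sketched as follows for the athermal case; the interacting model adds a Metropolis factor exp(-βΔE) on the contact energy on top of the self-avoidance rejection. The 2D square lattice here is an illustrative simplification of the hexagonal lattice used in the abstract.

```python
import random

# non-identity symmetries of the square lattice (rotations and reflections)
SYMMETRIES = [
    lambda x, y: (y, -x), lambda x, y: (-x, -y), lambda x, y: (-y, x),
    lambda x, y: (x, -y), lambda x, y: (-x, y),
    lambda x, y: (y, x), lambda x, y: (-y, -x),
]

def pivot_step(walk, rng):
    """One pivot move: apply a random lattice symmetry to the tail of the
    walk about a random interior pivot site; reject the move if the new
    tail intersects the head (self-avoidance)."""
    n = len(walk)
    k = rng.randrange(1, n - 1)
    op = rng.choice(SYMMETRIES)
    px, py = walk[k]
    new_tail = []
    for x, y in walk[k + 1:]:
        dx, dy = op(x - px, y - py)
        new_tail.append((px + dx, py + dy))
    head = walk[:k + 1]
    if set(new_tail) & set(head):
        return walk                 # rejected: would self-intersect
    return head + new_tail

rng = random.Random(3)
walk = [(i, 0) for i in range(30)]  # start from a straight rod
for _ in range(500):
    walk = pivot_step(walk, rng)
```

Because each accepted pivot changes a macroscopic fraction of the walk, the algorithm decorrelates quickly for short athermal walks, which is exactly the regime where the abstract reports it working well.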

Krapp, D.M. Jr. [Univ. of California, Berkeley, CA (United States)

1998-05-01

213

Dynamic Load Balancing for Petascale Quantum Monte Carlo Applications: The Alias Method

Diffusion Monte Carlo is the most accurate widely used Quantum Monte Carlo method for the electronic structure of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency and avoid accumulation of systematic errors on parallel machines. The load balancing step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near-identical computational tasks that requires load balancing.

Sudheer, C. D. [Sri Sathya Sai University; Krishnan, S. [Florida State University; Srinivasan, Ashok [ORNL; Kent, Paul R [ORNL

2013-01-01

214

Efficient implementation of a Monte Carlo method for uncertainty evaluation in dynamic measurements

NASA Astrophysics Data System (ADS)

Measurement of quantities having time-dependent values such as force, acceleration or pressure is a topic of growing importance in metrology. The application of the Guide to the Expression of Uncertainty in Measurement (GUM) and its Supplements to the evaluation of uncertainty for such quantities is challenging. We address the efficient implementation of the Monte Carlo method described in GUM Supplements 1 and 2 for this task. The starting point is a time-domain observation equation. The steps of deriving a corresponding measurement model, the assignment of probability distributions to the input quantities in the model, and the propagation of the distributions through the model are all considered. A direct implementation of a Monte Carlo method can be intractable on many computers since the storage requirement of the method can be large compared with the available computer memory. Two memory-efficient alternatives to the direct implementation are proposed. One approach is based on applying updating formulae for calculating means, variances and point-wise histograms. The second approach is based on evaluating the measurement model sequentially in time. A simulated example is used to compare the performance of the direct and alternative procedures.
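The first memory-efficient alternative, updating formulae for means and variances, can be sketched with Welford-style running statistics: each Monte Carlo trial is folded into the accumulators and then discarded, so the full matrix of trials is never stored. The two-input measurement model below is an invented toy, not one from the paper.

```python
import random

class RunningStats:
    """Updating formulae: mean and variance accumulated one trial at a
    time, so the Monte Carlo trials never need to be stored."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def push(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n          # updated running mean
        self.m2 += d * (x - self.mean)   # sum of squared deviations

    def variance(self):
        return self.m2 / (self.n - 1)

rng = random.Random(42)
rs = RunningStats()
for _ in range(100000):
    # invented toy observation equation: y = x1 + x2,
    # with x1 ~ N(1, 0.1) and x2 ~ N(2, 0.2)
    rs.push(rng.gauss(1.0, 0.1) + rng.gauss(2.0, 0.2))
```

For a dynamic measurement the same accumulators are kept per time sample (plus point-wise histograms for coverage intervals), which is what keeps the storage requirement independent of the number of Monte Carlo trials.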

Eichstädt, S.; Link, A.; Harris, P.; Elster, C.

2012-06-01

215

Dosimetry of Beta-Emitting Radionuclides at the Tissular Level Using Monte Carlo Methods

Standard macroscopic methods used to assess the dose in nuclear medicine are limited to cases of homogeneous radionuclide distributions and provide dose estimations at the organ level. In a few applications, like radioimmunotherapy, the mean dose to an organ is not suitable to explain clinical observations, and knowledge of the dose at the tissular level is mandatory. Therefore, one must determine how particles lose their energy and what is the best way to represent tissues. The Monte Carlo method is appropriate to solve the problem of particle transport, but the question of the geometric representation of biology remains. In this paper, we describe software (CLUSTER3D) that randomly builds biologically representative sphere-cluster geometries from a statistical description of tissues. These geometries are then used by our Monte Carlo code, called DOSE3D, to perform particle transport. First results obtained on thyroid models highlight the need for cellular and tissular data to take into account actual radionuclide distributions in tissues. The flexibility and reliability of the method make it a useful tool to study the energy deposition at various cellular and tissular levels in any configuration.

Coulot, J.; Lavielle, F.; Faggiano, A.; Bellon, N.; Aubert, B.; Schlumberger, M.; Ricard, M. [Institut Gustave-Roussy (France)

2005-02-15

216

Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method

NASA Astrophysics Data System (ADS)

Diffusion Monte Carlo is a highly accurate Quantum Monte Carlo method for electronic structure calculations of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency on parallel machines. This step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm: a simple renumbering of the MPI ranks based on proximity and a space filling curve significantly improves the MPI Allgather performance. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near identical computational tasks that require load balancing.
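The algorithm's name comes from Walker's classical alias method for O(1) sampling from a discrete distribution; a sketch of that underlying table construction follows (the sampling primitive only, not the authors' MPI load-balancing scheme).

```python
import random

def build_alias(probs):
    """Walker alias-table construction: O(n) setup, O(1) per sample.
    Each slot holds a kept probability and an 'alias' outcome that
    absorbs the slot's leftover mass."""
    n = len(probs)
    scaled = [p * n for p in probs]
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    prob, alias = [0.0] * n, [0] * n
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]        # donate mass to the small slot
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                 # leftovers are exactly full
        prob[i] = 1.0
    return prob, alias

def sample(prob, alias, rng):
    """Draw one outcome: pick a slot uniformly, then keep it or take its alias."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```

The load-balancing analogue replaces "outcomes" with destination processes, which is how each overloaded process ends up paired with at most one receiver.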

Sudheer, C. D.; Krishnan, S.; Srinivasan, A.; Kent, P. R. C.

2013-02-01

217

The narrow resonance (NR) approximation has, in the past, been applied to regular lattices with fairly simple unit cells. Attempts to use the NR approximation to deal with fine details of the lattice structure, or with complicated lattice cells, have generally been based on assumptions and approximations that are rather difficult to evaluate. A benchmark method is developed in which slowing down is still treated in the NR approximation, but spatial neutron transport is handled by Monte Carlo. This benchmark method is used to evaluate older methods for analyzing the double-heterogeneity effect in fast reactors, and for computing resonance integrals in the PROTEUS lattices. New methods for treating the PROTEUS lattices are proposed.

Chen, I.J.; Gelbard, E.M.

1988-07-01

218

Unpolarized Fermi gas in squeezed anisotropic harmonic trap by Quantum Monte Carlo methods

NASA Astrophysics Data System (ADS)

Using the diffusion Monte Carlo (DMC) method, we calculate the ground state properties of the unpolarized Fermi gas in the unitarity regime in both isotropic and anisotropic harmonic potentials. We study the effects of anisotropy by increasing the frequency ω_z of the harmonic potential in the z direction while keeping the frequencies in the x and y directions unchanged. The true unitarity regime is obtained by extrapolating the interaction range to zero, and the calculations are done using the fixed-node diffusion Monte Carlo method. The trial function is of the BCS form with the pairing function expanded in appropriate linear combinations of the anisotropic oscillator eigenstates. We evaluate the binding energies for varying particle numbers and estimate their behavior in the limit of a large number of atoms. We estimate the dependence of the projected density profile and momentum distribution in the x-y plane on ω_z. Our results can be readily used as a benchmark for cold atom experiments with a similar experimental set-up. Supported by ARO and NSF.

Li, Xin; Mitas, Lubos

2012-02-01

219

Uncertainty analysis using Monte Carlo method in the measurement of phase by ESPI

A method for simultaneously measuring whole-field in-plane displacements using optical fibers, based on the dual-beam illumination principle of electronic speckle pattern interferometry (ESPI), is presented in this paper. A set of single-mode optical fibers and a beamsplitter are employed to split the laser beam into four beams of equal intensity. One pair of fibers is used to illuminate the sample in the horizontal plane, so it is sensitive only to horizontal in-plane displacement. Another pair of optical fibers is set to be sensitive only to vertical in-plane displacement. Each pair of optical fibers differs in length to avoid unwanted interference. By means of a Fourier-transform method of fringe-pattern analysis (the Takeda method), we obtain quantitative data of the whole-field displacements. We evaluate the uncertainty associated with the phases by means of a Monte Carlo-based technique.
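The Monte Carlo uncertainty evaluation can be sketched for a single phase value, assuming (as a simplification of the full Takeda fringe analysis) that the phase is recovered as atan2 of noisy quadrature components:

```python
import math
import random
import statistics

def phase_uncertainty(I, Q, u_I, u_Q, ntrials=50000, seed=7):
    """Monte Carlo propagation of uncertainty to the phase: perturb the
    quadrature components with Gaussian noise and take the spread of the
    resulting atan2 phase, re-wrapped about the nominal value."""
    rng = random.Random(seed)
    base = math.atan2(Q, I)
    samples = []
    for _ in range(ntrials):
        phi = math.atan2(Q + rng.gauss(0.0, u_Q), I + rng.gauss(0.0, u_I))
        # wrap the deviation into (-pi, pi] to avoid the branch cut
        samples.append(math.atan2(math.sin(phi - base), math.cos(phi - base)))
    return statistics.stdev(samples)

u_phi = phase_uncertainty(1.0, 0.0, 0.01, 0.01)
```

For small noise this reproduces the first-order propagation result u_φ ≈ u/r, but unlike the analytic formula it remains valid near the ±π seam and for large relative noise, which is the attraction of the Monte Carlo route.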

Anguiano Morales, Marcelino; Martinez, Amalia; Rayas, J. A. [Centro de Investigaciones en Optica A. C. Apartado Postal 1-948, 37000 Leon (Mexico); Cordero, Raul R. [Leibniz Universitaet Hannover, Herrenhaeuser Str. 2, D-30419 Hannover (Germany)

2008-04-15

220

NASA Astrophysics Data System (ADS)

We investigate Monte Carlo simulation strategies for determining the effective (``depletion'') potential between a pair of hard spheres immersed in a dense sea of much smaller hard spheres. Two routes to the depletion potential are considered. The first is based on estimates of the insertion probability of one big sphere in the presence of the other; we describe and compare three such methods. The second route exploits collective (cluster) updating to sample the depletion potential as a function of the separation of the big particles; we describe two such methods. For both routes, we find that the sampling efficiency at high densities of small particles can be enhanced considerably by exploiting ``geometrical shortcuts'' that focus the computational effort on a subset of small particles. All the methods we describe are readily extendable to particles interacting via arbitrary potentials.
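The insertion-probability route can be sketched in a deliberately simplified setting: an ideal (non-interacting) sea of point particles, for which the single-sphere insertion probability is exp(-ρV_excl) analytically, so the estimator can be checked. A real depletion-potential calculation would use equilibrated hard-sphere configurations and the ratio of insertion probabilities at two big-sphere separations.

```python
import math
import random

def insertion_probability(R, rho, L=6.0, nconfig=80, ntrial=50, seed=5):
    """Estimate the probability of inserting a hard sphere of radius R into
    an ideal sea of point particles at density rho (periodic box, side L).
    A cell list restricting the overlap test to nearby particles would be
    the 'geometrical shortcut'; this sketch simply scans all particles."""
    rng = random.Random(seed)
    N = int(rho * L ** 3)
    hits, total = 0, 0
    for _ in range(nconfig):
        pts = [(rng.uniform(0, L), rng.uniform(0, L), rng.uniform(0, L))
               for _ in range(N)]
        for _ in range(ntrial):
            x, y, z = (rng.uniform(0, L) for _ in range(3))
            ok = True
            for px, py, pz in pts:
                dx = min(abs(x - px), L - abs(x - px))   # minimum image
                dy = min(abs(y - py), L - abs(y - py))
                dz = min(abs(z - pz), L - abs(z - pz))
                if dx * dx + dy * dy + dz * dz < R * R:
                    ok = False
                    break
            hits += ok
            total += 1
    return hits / total

est = insertion_probability(0.5, 1.0)
exact = math.exp(-1.0 * (4.0 / 3.0) * math.pi * 0.5 ** 3)  # ideal-gas result
```

At the high small-particle densities studied in the abstract the insertion probability becomes exponentially small, which is why staged insertion and the geometrical shortcuts are needed in place of this brute-force estimator.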

Ashton, D. J.; Sánchez-Gil, V.; Wilding, N. B.

2013-10-01

221

Efficient Markov chain Monte Carlo methods for decoding neural spike trains.

Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed into the spike trains of a group of neurons. The form of the GLM likelihood ensures that the posterior distribution over the stimuli that caused an observed set of spike trains is log concave so long as the prior is. This allows the maximum a posteriori (MAP) stimulus estimate to be obtained using efficient optimization algorithms. Unfortunately, the MAP estimate can have a relatively large average error when the posterior is highly nongaussian. Here we compare several Markov chain Monte Carlo (MCMC) algorithms that allow for the calculation of general Bayesian estimators involving posterior expectations (conditional on model parameters). An efficient version of the hybrid Monte Carlo (HMC) algorithm was significantly superior to other MCMC methods for gaussian priors. When the prior distribution has sharp edges and corners, on the other hand, the "hit-and-run" algorithm performed better than other MCMC methods. Using these algorithms, we show that for this latter class of priors, the posterior mean estimate can have a considerably lower average error than MAP, whereas for gaussian priors, the two estimators have roughly equal efficiency. We also address the application of MCMC methods for extracting nonmarginal properties of the posterior distribution. 
For example, by using MCMC to calculate the mutual information between the stimulus and response, we verify the validity of a computationally efficient Laplace approximation to this quantity for gaussian priors in a wide range of model parameters; this makes direct model-based computation of the mutual information tractable even in the case of large observed neural populations, where methods based on binning the spike train fail. Finally, we consider the effect of uncertainty in the GLM parameters on the posterior estimators. PMID:20964539

Ahmadian, Yashar; Pillow, Jonathan W; Paninski, Liam

2010-10-21

222

Monte Carlo methods for the in vivo analysis of cisplatin using X-ray fluorescence.

A Monte Carlo method has been used to model the measurement of cisplatin uptake with in vivo X-ray fluorescence. A user-code has been written for the EGS4 Monte Carlo system that incorporates linear polarisation and multiple element fluorescence extensions. The yield of fluorescent photons to the mainly Compton scattered background is computed for our detector arrangement. The detector consists of a mutually orthogonal arrangement of X-ray tube, aluminium polariser and high-purity germanium detector. The influence of tube voltage on the minimum detectable concentration is modelled for 100 through 150 kVp X-radiation. The code is able to predict absorbed dose to the patient which will influence the optimal choice of tube voltage. The influence of alterations to collimator design and scatterer construction can also be examined. A minimum detectable concentration of 50 ppm is determined from measurements with a 115 kVp X-ray source and a 615 ppm cisplatin sample in a water phantom. PMID:9569576

Hugtenburg, R P; Turner, J R; Mannering, D M; Robinson, B A

223

Applications of the Monte Carlo method in nuclear physics using the GEANT4 toolkit

The capabilities of the personal computers allow the application of Monte Carlo methods to simulate very complex problems that involve the transport of particles through matter. Among the several codes commonly employed in nuclear physics problems, the GEANT4 has received great attention in the last years, mainly due to its flexibility and possibility to be improved by the users. Differently from other Monte Carlo codes, GEANT4 is a toolkit written in object oriented language (C++) that includes the mathematical engine of several physical processes, which are suitable to be employed in the transport of practically all types of particles and heavy ions. GEANT4 has also several tools to define materials, geometry, sources of radiation, beams of particles, electromagnetic fields, and graphical visualization of the experimental setup. After a brief description of the GEANT4 toolkit, this presentation reports investigations carried out by our group that involve simulations in the areas of dosimetry, nuclear instrumentation and medical physics. The physical processes available for photons, electrons, positrons and heavy ions were used in these simulations.

Moralles, Mauricio; Guimaraes, Carla C.; Menezes, Mario O. [Instituto de Pesquisas Energeticas e Nucleares, CP 11049, CEP 05422-970, Sao Paulo, SP (Brazil); Bonifacio, Daniel A. B. [Instituto de Fisica da Universidade de Sao Paulo, CP 66318, CEP 05315-970, Sao Paulo, SP (Brazil); Instituto de Pesquisas Energeticas e Nucleares, CP 11049, CEP 05422-970, Sao Paulo, SP (Brazil); Okuno, Emico; Guimaraes, Valdir [Instituto de Fisica da Universidade de Sao Paulo, CP 66318, CEP 05315-970, Sao Paulo, SP (Brazil); Murata, Helio M.; Bottaro, Marcio [Instituto de Eletrotecnica e Energia da Universidade de Sao Paulo, Av. Prof. Luciano Gualberto, 1289, CEP 05508-010, Sao Paulo, SP (Brazil)

2009-06-03

224

Applications of the Monte Carlo method in nuclear physics using the GEANT4 toolkit

NASA Astrophysics Data System (ADS)

The capabilities of the personal computers allow the application of Monte Carlo methods to simulate very complex problems that involve the transport of particles through matter. Among the several codes commonly employed in nuclear physics problems, the GEANT4 has received great attention in the last years, mainly due to its flexibility and possibility to be improved by the users. Differently from other Monte Carlo codes, GEANT4 is a toolkit written in object oriented language (C++) that includes the mathematical engine of several physical processes, which are suitable to be employed in the transport of practically all types of particles and heavy ions. GEANT4 has also several tools to define materials, geometry, sources of radiation, beams of particles, electromagnetic fields, and graphical visualization of the experimental setup. After a brief description of the GEANT4 toolkit, this presentation reports investigations carried out by our group that involve simulations in the areas of dosimetry, nuclear instrumentation and medical physics. The physical processes available for photons, electrons, positrons and heavy ions were used in these simulations.

Moralles, Maurício; Guimarães, Carla C.; Bonifácio, Daniel A. B.; Okuno, Emico; Murata, Hélio M.; Bottaro, Márcio; Menezes, Mário O.; Guimarães, Valdir

2009-06-01

225

Coupled proton/neutron transport calculations using the S sub N and Monte Carlo methods

Coupled charged/neutral particle transport calculations are most often carried out using the Monte Carlo technique. For example, the ITS, EGS, and MCNP (Version 4) codes are used extensively for electron/photon transport calculations, while HETC models the transport of protons, neutrons and heavy ions. In recent years there has been considerable progress in deterministic models of electron transport, and many of these models are applicable to protons. However, even with these new models (and the well established models for neutron transport), deterministic coupled neutron/proton transport calculations have not been feasible for most problems of interest, due to a lack of coupled multigroup neutron/proton cross section sets. Such cross section sets are now being developed at Los Alamos. Using these cross sections we have carried out coupled proton/neutron transport calculations using both the S{sub N} and Monte Carlo methods. The S{sub N} calculations used a code called SMARTEPANTS (simulating many accumulative Rutherford trajectories, electron, proton and neutral transport solver), while the Monte Carlo calculations were done with the multigroup option of the MCNP code. Both SMARTEPANTS and MCNP require standard multigroup cross section libraries. HETC, on the other hand, avoids the need for precalculated nuclear cross sections by modeling individual nucleon collisions as the transported neutrons and protons interact with nuclei. 21 refs., 1 fig.

Filippone, W.L. (Arizona Univ., Tucson, AZ (USA). Dept. of Nuclear and Energy Engineering); Little, R.C.; Morel, J.E.; MacFarlane, R.E.; Young, P.G. (Los Alamos National Lab., NM (USA))

1991-01-01

226

The recently developed XVMC code, a fast Monte Carlo (MC) algorithm to calculate the dose of photon and electron beams in treatment planning, was compared to EGSnrc, an enhanced version of the well-known EGS4 system. Because of its numerous and accurate verification measurements, this system can be considered the gold standard. The comparison was performed using phantoms consisting of water, lung tissue and bone. Dose profile and difference distributions showed good agreement within the accuracy requirements. Because deviations between the results of two MC algorithms are caused by systematic errors and statistical fluctuations, a separation method was used to quantify the systematic discrepancies. Using this method, it could be shown that there was good agreement between the three-dimensional dose distributions calculated with XVMC and EGSnrc, if maximum systematic deviations of 2% are accepted. PMID:11668812

Fippel, M; Nüsslin, F

2001-01-01

227

NASA Astrophysics Data System (ADS)

We propose a business scenario evaluation method using a qualitative and quantitative hybrid model. In order to evaluate business factors with qualitative causal relations, we introduce statistical values based on propagation and combination of the effects of business factors by Monte Carlo simulation. In propagating an effect, we divide the range of each factor by landmarks and decide the effect on a destination node based on the divided ranges. In combining effects, we decide the effect of each arc using a contribution degree and sum all effects. Application to practical models confirms that, at the 5% significance level, there are no differences between results obtained by quantitative relations and results obtained by the proposed method.

Samejima, Masaki; Akiyoshi, Masanori; Mitsukuni, Koshichiro; Komoda, Norihisa

228

Application of the full-spectrum k-distribution method to photon Monte Carlo solvers

NASA Astrophysics Data System (ADS)

Accurate prediction of radiative heat transfer is key in most high temperature applications, such as combustion devices and fires. Among the various solution methods for the radiative transfer equation (RTE), the photon Monte Carlo (PMC) method is potentially the most accurate and the most versatile. The implementation of a PMC method in multidimensional inhomogeneous problems, however, can be limited by its demand for large computer storage space and its CPU time consumption. This is particularly true if the spectral absorption coefficient is to be accurately represented, due to its irregular behavior. On the other hand, the recently developed full-spectrum k-distribution (FSK) method reorders the irregular absorption coefficient into smooth k-distributions and, therefore, provides an efficient and accurate scheme for the spectral integration of radiative quantities of interest. In this paper the accuracy of the PMC method in solving the RTE and the efficiency and storage advantage provided by the FSK method are combined. The advantages of the proposed PMC/FSK method are described in detail. The accuracy and the efficiency of the method are demonstrated by sample calculations that consider inhomogeneous problems.

Wang, L.; Yang, J.; Modest, M. F.; Haworth, D. C.

2007-03-01

229

A modular method to handle multiple time-dependent quantities in Monte Carlo simulations

NASA Astrophysics Data System (ADS)

A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call Time Features. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc., takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), and then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel accompanied by beam current modulation produces a spread-out Bragg peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method.
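The Sequence/Time Feature split described above can be mimicked in a few lines. This is an illustrative sketch in Python, not the TOPAS or Geant4 API; all names and rates below are invented:

```python
import math
import random

# A Sequence supplies time values either at equal increments or uniformly
# at random; each time-dependent quantity pulls its value from a function
# of time (its "Time Feature") at every sampled time.
def sequence(t_end, n, mode="sequential"):
    if mode == "sequential":
        return [i * t_end / n for i in range(n)]
    return [random.uniform(0.0, t_end) for _ in range(n)]

# Two independent time-dependent quantities: a rotating modulator-wheel
# angle and a sinusoidally modulated beam current (invented numbers).
time_features = {
    "wheel_angle_deg": lambda t: (360.0 * 10.0 * t) % 360.0,   # 10 rev/s
    "beam_current_nA": lambda t: 2.0 + 1.5 * math.sin(2 * math.pi * t),
}

for t in sequence(t_end=0.1, n=4):
    state = {name: f(t) for name, f in time_features.items()}
    # each batch of particle histories would be run with this snapshot
    print(f"t={t:.3f}s  {state}")
```

The modularity is the point: adding a third time-dependent quantity means adding one entry to the dictionary, not restructuring the simulation loop.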

Shin, J.; Perl, J.; Schümann, J.; Paganetti, H.; Faddegon, B. A.

2012-06-01

230

Forwards and Backwards Modelling of Ashfall Hazards in New Zealand by Monte Carlo Methods

NASA Astrophysics Data System (ADS)

We have developed a technique for quantifying the probability of particular thicknesses of airfall ash from a volcanic eruption at any given site, using Monte Carlo methods, for hazards planning and insurance purposes. We use an established program (ASHFALL) to model individual eruptions, where the likely thickness of ash deposited at selected sites depends on the location of the volcano, eruptive volume, column height and ash size, and the wind conditions. A Monte Carlo formulation then allows us to simulate the variations in eruptive volume and in wind conditions by analysing repeat eruptions, each time allowing the parameters to vary randomly according to known or assumed distributions. Actual wind velocity profiles are used, with randomness included by selection of a starting date. We show how this method can handle the effects of multiple volcanic sources by aggregation, each source with its own characteristics. This follows a similar procedure which we have used for earthquake hazard assessment. The result is estimates of the frequency with which any given depth of ash is likely to be deposited at the selected site, accounting for all volcanoes that might affect it. These numbers are expressed as annual probabilities or as mean return periods. We can also use this method for obtaining an estimate of how often and how large the eruptions from a particular volcano have been. Results from ash cores in Auckland can give useful bounds for the likely total volumes erupted from the volcano Mt Egmont/Mt Taranaki, 280 km away, during the last 140,000 years, information difficult to obtain from local tephra stratigraphy.
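The repeat-eruption aggregation loop can be sketched as follows. The thickness function is a toy stand-in for ASHFALL, and the eruption rate, volume distribution and wind statistics are all invented:

```python
import random

random.seed(1)

def simulated_thickness_mm(volume_km3, wind_toward_site):
    # toy proxy: thicker deposits for larger eruptions and favorable wind
    return 40.0 * volume_km3 * (1.0 if wind_toward_site else 0.1)

def exceedance_probability(threshold_mm, annual_rate=0.01, n=100000):
    # repeat eruptions with randomly drawn volume and wind conditions,
    # count how often the site receives at least threshold_mm of ash
    hits = 0
    for _ in range(n):
        volume = random.lognormvariate(mu=-1.0, sigma=1.0)   # km^3, assumed
        wind = random.random() < 0.25                        # assumed fraction
        if simulated_thickness_mm(volume, wind) >= threshold_mm:
            hits += 1
    # per-eruption exceedance probability times eruption rate -> annual
    return annual_rate * hits / n

p = exceedance_probability(threshold_mm=10.0)
print(f"annual probability of >=10 mm ash: {p:.2e}, return period {1/p:.0f} yr")
```

Aggregation over multiple volcanic sources, as described above, amounts to summing these annual probabilities (or hazard rates) per source.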

Hurst, T.; Smith, W. D.; Bibby, H. M.

2003-12-01

231

Determination of phase equilibria in confined systems by open pore cell Monte Carlo method.

We present a modification of the molecular dynamics simulation method with a unit pore cell with imaginary gas phase [M. Miyahara, T. Yoshioka, and M. Okazaki, J. Chem. Phys. 106, 8124 (1997)] designed for determination of phase equilibria in nanopores. This new method is based on a Monte Carlo technique and it combines the pore cell, opened to the imaginary gas phase (open pore cell), with a gas cell to measure the equilibrium chemical potential of the confined system. The most striking feature of our new method is that the confined system is steadily led to a thermodynamically stable state by forming concave menisci in the open pore cell. This feature of the open pore cell makes it possible to obtain the equilibrium chemical potential with only a single simulation run, unlike existing simulation methods, which need a number of additional runs. We apply the method to evaluate the equilibrium chemical potentials of confined nitrogen in carbon slit pores and silica cylindrical pores at 77 K, and show that the results are in good agreement with those obtained by two conventional thermodynamic integration methods. Moreover, we also show that the proposed method can be particularly useful for determining vapor-liquid and vapor-solid coexistence curves and the triple point of the confined system. PMID:23464174

Miyahara, Minoru T; Tanaka, Hideki

2013-02-28

232

Determination of phase equilibria in confined systems by open pore cell Monte Carlo method

NASA Astrophysics Data System (ADS)

We present a modification of the molecular dynamics simulation method with a unit pore cell with imaginary gas phase [M. Miyahara, T. Yoshioka, and M. Okazaki, J. Chem. Phys. 106, 8124 (1997)] designed for determination of phase equilibria in nanopores. This new method is based on a Monte Carlo technique and it combines the pore cell, opened to the imaginary gas phase (open pore cell), with a gas cell to measure the equilibrium chemical potential of the confined system. The most striking feature of our new method is that the confined system is steadily led to a thermodynamically stable state by forming concave menisci in the open pore cell. This feature of the open pore cell makes it possible to obtain the equilibrium chemical potential with only a single simulation run, unlike existing simulation methods, which need a number of additional runs. We apply the method to evaluate the equilibrium chemical potentials of confined nitrogen in carbon slit pores and silica cylindrical pores at 77 K, and show that the results are in good agreement with those obtained by two conventional thermodynamic integration methods. Moreover, we also show that the proposed method can be particularly useful for determining vapor-liquid and vapor-solid coexistence curves and the triple point of the confined system.

Miyahara, Minoru T.; Tanaka, Hideki

2013-02-01

233

Quantum Monte Carlo method for the ground state of many-boson systems.

We formulate a quantum Monte Carlo (QMC) method for calculating the ground state of many-boson systems. The method is based on a field-theoretical approach, and is closely related to existing fermion auxiliary-field QMC methods which are applied in several fields of physics. The ground-state projection is implemented as a branching random walk in the space of permanents consisting of identical single-particle orbitals. Any single-particle basis can be used, and the method is in principle exact. We illustrate this method with a trapped atomic boson gas, where the atoms interact via an attractive or repulsive contact two-body potential. We choose as the single-particle basis a real-space grid. We compare with exact results in small systems and arbitrarily sized systems of untrapped bosons with attractive interactions in one dimension, where analytical solutions exist. We also compare with the corresponding Gross-Pitaevskii (GP) mean-field calculations for trapped atoms, and discuss the close formal relation between our method and the GP approach. Our method provides a way to systematically improve upon GP while using the same framework, capturing interaction and correlation effects with a stochastic, coherent ensemble of noninteracting solutions. We discuss various algorithmic issues, including importance sampling and the back-propagation technique for computing observables, and illustrate them with numerical studies. We show results for systems with up to N approximately 400 bosons. PMID:15600791

Purwanto, Wirawan; Zhang, Shiwei

2004-11-09

234

Parallel domain decomposition methods in fluid models with Monte Carlo transport

To examine domain decomposition in a coupled Monte Carlo-finite element calculation, it is important to use a domain decomposition that is suitable for the individual models. We have developed a code that simulates a Monte Carlo calculation ( ) on a massively parallel processor. This code is used to examine the load-balancing behavior of three domain decompositions ( ) for a Monte Carlo calculation. Results are presented.

Alme, H.J.; Rodrigues, G.H. [California Univ., Davis, CA (United States). Dept. of Applied Science; Zimmerman, G.B. [Lawrence Livermore National Lab., CA (United States)

1996-12-01

235

A Monte Carlo Method for Projecting Uncertainty in 2D Lagrangian Trajectories

NASA Astrophysics Data System (ADS)

In this study, a novel method is proposed for modeling the propagation of uncertainty due to subgrid-scale processes through a Lagrangian trajectory advected by ocean surface velocities. The primary motivation and application is differentiating between active and passive trajectories for sea turtles as observed through satellite telemetry. A spatiotemporal launch box is centered on the time and place of actual launch and populated with launch points. Synthetic drifters are launched at each of these locations, adding, at each time step along the trajectory, Monte Carlo perturbations in velocity scaled to the natural variability of the velocity field. The resulting trajectory cloud provides a dynamically evolving density field of synthetic drifter locations that represent the projection of subgrid-scale uncertainty out in time. Subsequently, by relaunching synthetic drifters at points along the trajectory, plots are generated in a daisy chain configuration of the most likely passive pathways for the drifter.
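A minimal sketch of the trajectory-cloud construction, assuming an invented uniform background velocity and a fixed perturbation scale (the method above scales perturbations to the local velocity variability):

```python
import random

random.seed(42)

def background_velocity(x, y):
    # stand-in for an ocean surface velocity field (m/s, east and north)
    return 0.1, 0.05

def advect(x0, y0, n_steps=240, dt=3600.0, sigma=0.02):
    # advance one synthetic drifter, adding a Monte Carlo velocity
    # perturbation at every time step
    x, y = x0, y0
    for _ in range(n_steps):
        u, v = background_velocity(x, y)
        u += random.gauss(0.0, sigma)   # subgrid-scale uncertainty
        v += random.gauss(0.0, sigma)
        x += u * dt
        y += v * dt
    return x, y

# launch a cloud of drifters from a 1 km launch box around the true launch
endpoints = [advect(random.uniform(-500, 500), random.uniform(-500, 500))
             for _ in range(200)]
xs = [p[0] for p in endpoints]
print(f"mean eastward displacement: {sum(xs) / len(xs) / 1000:.1f} km")
```

The density of endpoints over time is the evolving "trajectory cloud"; relaunching from points along an observed track gives the daisy-chain comparison between passive pathways and the animal's actual positions.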

Robel, A.; Lozier, S.; Gary, S. F.

2009-12-01

236

Cascade summing corrections for HPGe spectrometers by the Monte Carlo method.

Cascade summing corrections for application in HPGe gamma ray spectrometry have been calculated numerically by the Monte Carlo method. An algorithm has been developed which follows the path in the decay scheme from the starting state at the precursor radionuclide decay level, down to the ground state of the daughter radionuclide. With this procedure, it was possible to calculate the cascade summing correction for all gamma ray transitions present in the decay scheme. Since the cascade correction requires the values of peak and total detection efficiencies, another code has been developed in order to estimate these parameters for point and cylindrical sources. The radionuclides 60Co, 133Ba and 131I were used for testing the procedure. The results were in good agreement with values in the literature. PMID:11839001
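The path-following idea can be sketched for the simple two-step 60Co cascade (1173.2 keV followed by 1332.5 keV, both with unit branching fraction); the dictionary format below is invented for illustration:

```python
import random

random.seed(3)

# level -> list of (branching fraction, gamma energy in keV, destination level)
# Simplified 60Co-like scheme: decay feeds level 2, which de-excites to the
# ground state (level 0) through level 1.
scheme = {
    2: [(1.0, 1173.2, 1)],
    1: [(1.0, 1332.5, 0)],
}

def sample_cascade(start=2):
    # follow the decay scheme from the fed level down to the ground state,
    # sampling each transition from its branching fractions
    level, gammas = start, []
    while level != 0:
        r, acc = random.random(), 0.0
        for frac, energy, dest in scheme[level]:
            acc += frac
            if r <= acc:
                gammas.append(energy)
                level = dest
                break
    return gammas

print(sample_cascade())  # -> [1173.2, 1332.5] for this two-step scheme
```

In the full correction, each gamma in a sampled path is weighted by the detector's peak and total efficiencies to estimate how often coincident gammas sum into or out of a given peak.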

Dias, Mauro S; Takeda, Mauro N; Koskinas, Marina F

237

NASA Astrophysics Data System (ADS)

The available light in the atmosphere of Titan has been calculated using a Monte Carlo method. The "recommended" temperature profile derived by Lellouch /1/, the aerosol distribution from McKay /2/ and their properties from Khare /3/, and an atmospheric composition from the new photochemical model of Toublanc /4/ are used in the framework of plane-parallel geometry to compute all processes that occur in the atmosphere. The solar flux at each level in the atmosphere in the wavelength range 10-1000 nm is estimated. This paper extends 2D models including multiple scattering by molecules and aerosols to a 3D model. Assuming spherical symmetry, the atmosphere is divided into spherical layers of equal optical depth tau = 0.01. The solar flux is represented by 10^5-10^6 photons cm^-2. The absorption and diffusion by aerosols and gases is treated exactly.

Brillet, J.; Parisot, J. P.; Dobrijevic, M.; Leflochmoen, E.; Toublanc, D.

238

Pairing in Cold Atoms and other Applications for Quantum Monte Carlo methods

We discuss the importance of the fermion nodes for the quantum Monte Carlo (QMC) methods and find two cases of the exact nodes. We describe the structure of the generalized pairing wave functions in Pfaffian antisymmetric form and demonstrate their equivalency with a certain class of configuration interaction wave functions. We present QMC calculations of a model fermion system at the unitary limit. We find the system to have an energy of E = 0.425 E_free and a condensate fraction of 0.48. Further, we also perform QMC calculations of the potential energy surface and the electric dipole moment along that surface of the LiSr molecule. We estimate the vibrationally averaged dipole moment to be D_{v=0} = 0.4(2).

Bajdich, Michal [ORNL; Kolorenc, Jindrich [ORNL; Mitas, Lubos [ORNL; Reynolds, P. J. [United States Army Research Office, Durham, North Carolina

2010-01-01

239

This work illustrates a methodology based on photon interrogation and coincidence counting for determining the characteristics of fissile material. The feasibility of the proposed methods was demonstrated using a Monte Carlo code system to simulate the full statistics of the neutron and photon field generated by the photon interrogation of fissile and non-fissile materials. Time correlation functions between detectors were simulated for photon beam-on and photon beam-off operation. In the latter case, the correlation signal is obtained via delayed neutrons from photofission, which induce further fission chains in the nuclear material. An analysis methodology was demonstrated based on features selected from the simulated correlation functions and on the use of artificial neural networks. We show that the methodology can reliably differentiate between highly enriched uranium and plutonium. Furthermore, the mass of the material can be determined with a relative error of about 12%. Keywords: MCNP, MCNP-PoliMi, Artificial neural network, Correlation measurement, Photofission

Pozzi, Sara A [ORNL; Downar, Thomas J [ORNL; Padovani, Enrico [Nuclear Engineering Department Politecnico di Milano, Milan, Italy; Clarke, Shaun D [ORNL

2006-01-01

240

This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the gold standard for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. 
The hybrid method development is based on an extension of the FW-CADIS method, which attempts to achieve uniform statistical uncertainty throughout a designated problem space. The MC DD development is being implemented in conjunction with the Denovo deterministic radiation transport package to have direct access to the 3-D, massively parallel discrete-ordinates solver (to support the hybrid method) and the associated parallel routines and structure. This paper describes the hybrid method, its implementation, and initial testing results for a realistic 2-D quarter core pressurized-water reactor model and also describes the MC DD algorithm and its implementation.

Wagner, John C [ORNL; Mosher, Scott W [ORNL; Evans, Thomas M [ORNL; Peplow, Douglas E. [ORNL; Turner, John A [ORNL

2011-01-01

241

We study a class of methods for the numerical solution of the system of stochastic differential equations (SDEs) that arises in the modeling of turbulent combustion, specifically in the Monte Carlo particle method for the solution of the model equations for the composition probability density function (PDF) and the filtered density function (FDF). This system consists of an SDE for

Haifeng Wang; Pavel P. Popov; Stephen B. Pope

2010-01-01

242

Using a semi-guided Monte Carlo method for faster simulation of forced outages of generating units

In recent years, a number of utilities have started using detailed chronological production-costing programs for mid-term planning studies. Even though the chronological method has gained popularity because of its ability to capture operating realism and provide detailed hourly results, the issue of how to best model forced outages has not been fully resolved. This paper presents a semi-guided Monte Carlo method that can be used in chronological production-costing programs to create statistically balanced forced outage schedules. As the test results for the Electricity Supply Board show, this technique can greatly reduce the number of Monte Carlo runs required to produce reliable production cost results.
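The unguided baseline that the paper improves on can be sketched as a two-state (up/down) Markov chain per generating unit; the semi-guided balancing itself is not reproduced here, and the MTTF/MTTR figures are invented:

```python
import random

random.seed(7)

def outage_schedule(hours, mttf=1000.0, mttr=50.0):
    # Hourly chronological simulation of one unit.  The forced outage rate
    # implied by these parameters is FOR = mttr / (mttf + mttr).
    p_fail, p_repair = 1.0 / mttf, 1.0 / mttr
    up, schedule = True, []
    for _ in range(hours):
        if up and random.random() < p_fail:
            up = False
        elif not up and random.random() < p_repair:
            up = True
        schedule.append(up)
    return schedule

sched = outage_schedule(8760 * 20)     # 20 simulated years of hourly states
observed_for = 1.0 - sum(sched) / len(sched)
print(f"target FOR: {50 / 1050:.3f}, observed FOR: {observed_for:.3f}")
```

The slow convergence of the observed FOR toward its target over unguided runs is exactly why a guided scheme that produces statistically balanced outage schedules reduces the number of Monte Carlo runs needed.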

Scully, A.; Harput, A. (Electricity Supply Board, Dublin (IE)); Le, K.D.; Day, J.T.; Malone, M.J.; Mousseau, T.E. (Advanced Systems Technology, ABB Power Systems, Inc., Pittsburgh, PA (US))

1992-08-01

243

To evaluate the bootstrap current in nonaxisymmetric toroidal plasmas quantitatively, a {delta}f Monte Carlo method is incorporated into the moment approach. From the drift-kinetic equation with the pitch-angle scattering collision operator, the bootstrap current and neoclassical conductivity coefficients are calculated. The neoclassical viscosity is evaluated from these two monoenergetic transport coefficients. Numerical results obtained by the {delta}f Monte Carlo method for a model heliotron are in reasonable agreement with asymptotic formulae and with the results obtained by the variational principle.

Matsuyama, A. [Graduate School of Energy Science, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan); Isaev, M. Yu. [Nuclear Fusion Institute, RRC Kurchatov Institute, 123182 Moscow (Russian Federation); Watanabe, K. Y.; Suzuki, Y.; Nakajima, N. [National Institute for Fusion Science, Toki, Gifu 509-5292 (Japan); Hanatani, K. [Institute of Advanced Energy, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan); Cooper, W. A.; Tran, T. M. [Centre de Recherches en Physique des Plasmas, Association Euratom-Suisse, Ecole Polytechnique Federale de Lausanne, CH1015 Lausanne (Switzerland)

2009-05-15

244

Monte Carlo (MC) simulations are frequently used to simulate the radial distribution of remitted fluorescence light from tissue surfaces upon pencil beam excitation to gather information about influences of different tissue parameters. Here, the "weighted direct emission method" (WDEM) is proposed, which uses a weighted MC simulation approach for both excitation and fluorescence photons, and is compared to four other methods in terms of accuracy and speed, and using a broad range of tissue-relevant optical parameters. The WDEM is 5.2× faster on average than a fixed weight MC approach while still preserving its accuracy. Additional gain of speed can be achieved by implementing it on graphics processing units. PMID:23400069
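The general weighted ("implicit capture") idea behind schemes such as the WDEM can be sketched as follows. The optical coefficients and escape model below are invented, and this is not the paper's actual algorithm:

```python
import random

random.seed(5)

# invented absorption / scattering coefficients (1/mm)
MU_A, MU_S = 0.1, 10.0
ALBEDO = MU_S / (MU_A + MU_S)

def detected_weight(n_photons=20000, max_events=50, escape_prob=0.05):
    # Each photon carries a statistical weight attenuated by the albedo at
    # every interaction instead of being absorbed outright, so every
    # launched photon can contribute to the remitted signal.
    total = 0.0
    for _ in range(n_photons):
        w = 1.0
        for _ in range(max_events):
            w *= ALBEDO                      # deposit (1 - albedo) of w
            if random.random() < escape_prob:
                total += w                   # photon remitted toward detector
                break
    return total / n_photons

print(f"mean remitted weight per photon: {detected_weight():.3f}")
```

Compared with a fixed-weight scheme that kills photons at absorption events, the weighted approach trades a little bookkeeping per event for far fewer wasted histories, which is the source of the speed-up reported above.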

Hennig, Georg; Stepp, Herbert; Sroka, Ronald; Beyer, Wolfgang

2013-02-10

245

The study of particle coagulation and sintering processes is important in a variety of research studies ranging from cell fusion and dust motion to aerosol formation applications. These processes are traditionally simulated using either Monte-Carlo methods or integro-differential equations for particle number density functions. In this paper, we present a computational technique for cases where we believe that accurate closed evolution equations for a finite number of moments of the density function exist in principle, but are not explicitly available. The so-called equation-free computational framework is then employed to numerically obtain the solution of these unavailable closed moment equations by exploiting (through intelligent design of computational experiments) the corresponding fine-scale (here, Monte-Carlo) simulation. We illustrate the use of this method by accelerating the computation of evolving moments of uni- and bivariate particle coagulation and sintering through short simulation bursts of a constant-number Monte-Carlo scheme.
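The equation-free idea of closing unavailable moment equations through short fine-scale bursts can be illustrated with a much simpler fine model than coagulation-sintering: a swapped-in toy of Ornstein-Uhlenbeck walkers whose mean obeys dm/dt = -Km. All parameters are invented:

```python
import random

random.seed(11)
K, DT = 1.0, 0.01          # fine-scale relaxation rate and time step

def fine_step(walkers):
    # fine-scale stochastic model: dX = -K X dt + small noise
    return [x - K * x * DT + 0.05 * random.gauss(0.0, DT ** 0.5)
            for x in walkers]

def coarse_step(m, n_walkers=5000, burst_steps=5, big_step=0.2):
    # lifting: build a fine-scale ensemble consistent with the moment m
    walkers = [m + 0.01 * random.gauss(0.0, 1.0) for _ in range(n_walkers)]
    m0 = sum(walkers) / n_walkers
    for _ in range(burst_steps):           # short fine-scale burst
        walkers = fine_step(walkers)
    m1 = sum(walkers) / n_walkers          # restriction back to the moment
    slope = (m1 - m0) / (burst_steps * DT) # estimated dm/dt
    return m1 + (big_step - burst_steps * DT) * slope  # projective Euler step

m = 1.0
for _ in range(10):                        # ten coarse steps of 0.2 -> t = 2.0
    m = coarse_step(m)
print(f"coarse mean after t=2.0: {m:.3f} (exact exp(-2) ~ 0.135)")
```

The paper applies this same lift-burst-restrict-project loop with a constant-number coagulation Monte Carlo scheme as the fine model and the moments of the particle size distribution as the coarse variables.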

Zou Yu, E-mail: yzou@Princeton.ED [Department of Chemical Engineering, Princeton University, Princeton, NJ 08544 (United States); Kavousanakis, Michail E., E-mail: mkavousa@Princeton.ED [Department of Chemical Engineering, Princeton University, Princeton, NJ 08544 (United States); Kevrekidis, Ioannis G., E-mail: yannis@Princeton.ED [Department of Chemical Engineering, Princeton University, Princeton, NJ 08544 (United States); Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544 (United States); Fox, Rodney O., E-mail: rofox@iastate.ed [Department of Chemical and Biological Engineering, Iowa State University, Ames, IA 50011 (United States)

2010-07-20

246

Hybrid Monte Carlo-Diffusion Method For Light Propagation in Tissue With a Low-Scattering Region

NASA Astrophysics Data System (ADS)

The heterogeneity of the tissues in a head, especially the low-scattering cerebrospinal fluid (CSF) layer surrounding the brain has previously been shown to strongly affect light propagation in the brain. The radiosity-diffusion method, in which the light propagation in the CSF layer is assumed to obey the radiosity theory, has been employed to predict the light propagation in head models. Although the CSF layer is assumed to be a nonscattering region in the radiosity-diffusion method, fine arachnoid trabeculae cause faint scattering in the CSF layer in real heads. A novel approach, the hybrid Monte Carlo-diffusion method, is proposed to calculate the head models, including the low-scattering region in which the light propagation obeys neither the diffusion approximation nor the radiosity theory. The light propagation in the high-scattering region is calculated by means of the diffusion approximation solved by the finite-element method and that in the low-scattering region is predicted by the Monte Carlo method. The intensity and mean time of flight of the detected light for the head model with a low-scattering CSF layer calculated by the hybrid method agreed well with those by the Monte Carlo method, whereas the results calculated by means of the diffusion approximation included considerable error caused by the effect of the CSF layer. In the hybrid method, the time-consuming Monte Carlo calculation is employed only for the thin CSF layer, and hence, the computation time of the hybrid method is dramatically shorter than that of the Monte Carlo method.

Hayashi, Toshiyuki; Kashio, Yoshihiko; Okada, Eiji

2003-06-01

247

Hybrid Monte Carlo-diffusion method for light propagation in tissue with a low-scattering region.

The heterogeneity of the tissues in a head, especially the low-scattering cerebrospinal fluid (CSF) layer surrounding the brain, has previously been shown to strongly affect light propagation in the brain. The radiosity-diffusion method, in which the light propagation in the CSF layer is assumed to obey radiosity theory, has been employed to predict the light propagation in head models. Although the CSF layer is assumed to be a nonscattering region in the radiosity-diffusion method, fine arachnoid trabeculae cause faint scattering in the CSF layer in real heads. A novel approach, the hybrid Monte Carlo-diffusion method, is proposed to calculate head models that include a low-scattering region in which the light propagation obeys neither the diffusion approximation nor radiosity theory. The light propagation in the high-scattering region is calculated by means of the diffusion approximation solved by the finite-element method, and that in the low-scattering region is predicted by the Monte Carlo method. The intensity and mean time of flight of the detected light for the head model with a low-scattering CSF layer calculated by the hybrid method agreed well with those obtained by the Monte Carlo method, whereas the results calculated by means of the diffusion approximation included considerable error caused by the effect of the CSF layer. In the hybrid method, the time-consuming Monte Carlo calculation is employed only for the thin CSF layer; hence, the computation time of the hybrid method is dramatically shorter than that of the Monte Carlo method. PMID:12790437

Hayashi, Toshiyuki; Kashio, Yoshihiko; Okada, Eiji

2003-06-01

248

This work describes a fast Monte Carlo machine for dose calculation in radiotherapy treatment planning on FPGA-based hardware. When performing Monte Carlo simulations of the radiation dose delivered to the human body, the Compton interaction is simulated. The inputs to the system are the energy and the normalized direction vectors of the incoming photon. The energy and the direction

Viviana Fanti; Roberto Marzeddu; Callisto Pili; Paolo Randaccio; Sabyasachi Siddhanta; Jenny Spiga; Artur Szostak

2009-01-01

249

At the present time a Monte Carlo transport computer code is being designed and implemented at Lawrence Livermore National Laboratory to include the transport of neutrons, photons, electrons and light charged particles, as well as the coupling between all species of particles, e.g., photon-induced electron emission. Since this code is being designed to handle all particles, this approach is called the "All Particle Method". The code is designed as a test bed code to include as many different methods as possible (e.g., electron single or multiple scattering) and will be data driven to minimize the number of methods and models "hard wired" into the code. This approach will allow changes in the Livermore nuclear and atomic data bases, used to describe the interaction and production of particles, to directly control the execution of the program. In addition, this approach will allow the code to be used at various levels of complexity to balance computer running time against the accuracy requirements of specific applications. This paper describes the current design philosophy and status of the code. Since the treatment of neutrons and photons used by the All Particle Method code is more or less conventional, emphasis in this paper is placed on the treatment of electron, and to a lesser degree charged particle, transport. An example is presented in order to illustrate an application in which the ability to accurately transport electrons is important. 21 refs., 1 fig.

Cullen, D.E.; Perkins, S.T.; Plechaty, E.F.; Rathkopf, J.A.

1988-06-01

250

Modeling diffusion and phase transitions by a uniform-acceptance force-bias Monte Carlo method

NASA Astrophysics Data System (ADS)

The uniform-acceptance force-bias Monte Carlo (UFMC) method [G. Dereli, Mol. Simul. 8, 351 (1992)] is a little-used atomistic simulation method that has strong potential as an alternative or complementary technique to molecular dynamics (MD). We have applied UFMC to surface diffusion, amorphization, melting, glass transition, and crystallization, mainly of silicon. The purpose is to study the potential and the limitations of the method: to investigate its applicability, determine safe and effective values of the two UFMC parameters (a temperature and a maximum allowed atomic displacement per iteration step) that lead to reliable results for different types of simulations, assess the computational speed increase relative to MD, discover the microscopic mechanisms that make UFMC work, and show in what kinds of simulations it can be useful and preferable over MD. It is found that in many simulations UFMC can be a very efficient alternative to MD: it leads to analogous results in far fewer iteration steps. Owing to the straightforward formalism of UFMC, it can be easily implemented in any MD code. Thus both methods can be combined and applied in turn, using UFMC for the acceleration of certain processes and MD for keeping precision and monitoring individual atom trajectories.
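The UFMC move itself is simple enough to sketch: each displacement is drawn from a distribution biased along the instantaneous force and is accepted unconditionally. Below is a minimal one-dimensional sketch under a toy harmonic potential; the biasing form and all parameter values are illustrative assumptions, not taken from the paper or from Dereli's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ufmc_step(x, force, beta, dmax, rng):
    """One uniform-acceptance force-bias MC displacement.

    The displacement d in [-dmax, dmax] is drawn from a density
    proportional to exp(beta * F * d / 2), so moves along the force
    are preferred; every move is accepted (uniform acceptance).
    """
    g = 0.5 * beta * force * dmax
    u = rng.random()
    if abs(g) < 1e-12:                      # unbiased (zero-force) limit
        d = (2.0 * u - 1.0) * dmax
    else:
        # invert the CDF of p(d) ~ exp(g * d / dmax) on [-dmax, dmax]
        d = (dmax / g) * np.log(u * np.exp(g) + (1.0 - u) * np.exp(-g))
    return x + d

# toy harmonic potential U = 0.5 k x^2, force F = -k x
k, beta, dmax = 1.0, 2.0, 0.3
x = 2.0
for _ in range(5000):
    x = ufmc_step(x, -k * x, beta, dmax, rng)
# after many steps the walker fluctuates near the minimum at x = 0
```

For small maximum displacements the stationary distribution of this biased walk approximates the Boltzmann distribution, which is why UFMC can stand in for MD equilibration.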

Timonova, Maria; Groenewegen, Jasper; Thijsse, Barend J.

2010-04-01

251

Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method

NASA Astrophysics Data System (ADS)

An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near-continuum range. A post-processing procedure called the DSMC rapid ensemble averaging method (DREAM) is developed to reduce the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near-equilibrium flows (DREAM-I) or the instantaneous particle data output by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over single-processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensemble runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5 to 3.3 times, based on the limited number of cases in the present study.
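The variance-reduction principle behind DREAM can be illustrated with a toy stand-in for the flow solver: repeating a short run R times from restart data and averaging the instantaneous samples reduces the statistical scatter by roughly sqrt(R), consistent with the roughly threefold reduction quoted for 10 runs. The decaying signal and the noise model below are hypothetical placeholders for real DSMC output, not the PDSC code.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_flow_sample(t, n_particles, rng):
    """Toy stand-in for one instantaneous DSMC sample at time t:
    a true macroscopic value exp(-t) plus statistical scatter that
    shrinks with the number of simulator particles."""
    return np.exp(-t) + rng.normal(0.0, 1.0 / np.sqrt(n_particles))

def ensemble_average(t, n_runs, n_particles, rng):
    """DREAM-style estimate: repeat the short run n_runs times from
    restart data and average the instantaneous samples."""
    return np.mean([noisy_flow_sample(t, n_particles, rng)
                    for _ in range(n_runs)])

single = noisy_flow_sample(1.0, 100, rng)
averaged = ensemble_average(1.0, 10, 100, rng)
# averaging 10 restarted runs cuts the scatter by about sqrt(10) ~ 3.2
```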

Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.

2008-06-01

252

Variational method for estimating the rate of convergence of Markov-chain Monte Carlo algorithms.

We demonstrate the use of a variational method to determine a quantitative lower bound on the rate of convergence of Markov chain Monte Carlo (MCMC) algorithms as a function of the target density and proposal density. The bound relies on approximating the second largest eigenvalue in the spectrum of the MCMC operator using a variational principle, and the approach is applicable to problems with continuous state spaces. We apply the method to one-dimensional examples with Gaussian and quartic target densities, and we contrast the performance of the random walk Metropolis-Hastings algorithm with a "smart" variant that incorporates gradient information into the trial moves, a generalization of the Metropolis-adjusted Langevin algorithm. We find that the variational method agrees quite closely with numerical simulations. We also see that the smart MCMC algorithm often fails to converge geometrically in the tails of the target density except in the simplest case we examine, and even then care must be taken to choose the appropriate scaling of the deterministic and random parts of the proposed moves. This calls into question the utility of smart MCMC in more complex problems. Finally, we apply the same method to approximate the rate of convergence in multidimensional Gaussian problems with and without importance sampling. There we demonstrate the necessity of importance sampling for target densities which depend on variables with a wide range of scales. PMID:18999558
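The central quantity, the second-largest eigenvalue of the MCMC operator, can be computed directly for a small discretized chain. The sketch below builds the transition matrix of a random-walk Metropolis-Hastings chain targeting a standard Gaussian on a grid and reads off lambda_2 numerically; this is an illustrative finite-state discretization, not the paper's variational bound.

```python
import numpy as np

# Discretize a random-walk Metropolis chain targeting a standard
# Gaussian on a grid; convergence to the target is geometric at
# rate |lambda_2|^t, the second-largest eigenvalue modulus of the
# transition matrix.
n, L = 201, 6.0
x = np.linspace(-L, L, n)
pi = np.exp(-0.5 * x**2)
pi /= pi.sum()

step = 25                         # proposal: uniform over +/- 25 cells
q = 1.0 / (2 * step)              # proposal probability per cell
P = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - step), min(n, i + step + 1)):
        if j == i:
            continue
        P[i, j] = q * min(1.0, pi[j] / pi[i])   # Metropolis acceptance
    P[i, i] = 1.0 - P[i].sum()    # rejected / out-of-range mass stays put

evals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
lam2 = float(evals[1])
# lam2 < 1 certifies geometric convergence of this discretized chain
```

The variational approach of the abstract estimates this same eigenvalue without ever forming the (infinite-dimensional) transition operator explicitly.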

Casey, Fergal P; Waterfall, Joshua J; Gutenkunst, Ryan N; Myers, Christopher R; Sethna, James P

2008-10-20

253

Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods

NASA Astrophysics Data System (ADS)

Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential of mitigating the curse of dimensionality by reducing the total number of unknowns while describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA with geometric sampling (referred to as 'Method 1'), (2) PCA with MCMC sampling (referred to as 'Method 2'), and (3) PCA with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with noise added and inverted them under two different situations: (1) the noisy data and the covariance matrix for the PCA analysis are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient.
In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters, and both Methods 1 and 2 provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for the PCA analysis is inconsistent with the true models, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.
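A minimal sketch of the 'Method 2' idea, PCA parameterization sampled with MCMC, is given below for a toy convolution forward model. The grid size, correlation length, kernel, and noise level are all hypothetical choices, not the study's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameterize an unknown field by its leading principal components
# and sample the PCA coefficients with random-walk Metropolis MCMC.
n = 50
# prior covariance with smooth correlations, eigendecomposed for PCA
C = np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 10.0)
w, V = np.linalg.eigh(C)
k = 5                                # keep the k leading components
basis = V[:, -k:] * np.sqrt(w[-k:])  # n unknowns -> k coefficients

kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
def forward(m):                      # toy convolution forward model
    return np.convolve(m, kernel, mode="same")

true_coef = rng.normal(size=k)
data = forward(basis @ true_coef) + rng.normal(0.0, 0.05, size=n)

def log_post(c, sigma=0.05):
    r = data - forward(basis @ c)
    return -0.5 * (r @ r) / sigma**2 - 0.5 * (c @ c)

# random-walk Metropolis over the k PCA coefficients only
c = np.zeros(k)
lp = log_post(c)
chain = []
for _ in range(4000):
    prop = c + 0.05 * rng.normal(size=k)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        c, lp = prop, lp_prop
    chain.append(c.copy())
post_mean = np.mean(chain[1000:], axis=0)
```

The reduction is what makes the sampling tractable: the chain explores 5 coefficients instead of 50 grid values, while the PCA basis restores a full spatial field for each sample.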

Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.

2011-12-01

254

NASA Astrophysics Data System (ADS)

We report the calculation of the coexisting densities and surface tensions of the liquid-vapor equilibrium using the multibody dissipative particle dynamics and Monte Carlo (MMC) methods. We focus on the calculation of the surface tension by using the thermodynamic and mechanical routes. This is the first time that the test-area method has been applied to the many-body conservative potential. We discuss the mechanical equilibrium of these two-phase systems by analyzing the profiles of the normal and tangential components of the pressure tensor using the Irving-Kirkwood and Kirkwood-Buff approaches. The profile of the configurational temperature is shown to establish the thermal equilibrium of these two-phase simulations carried out with large time steps. We complete this study by showing the impact of the range of the many-body repulsive term of the conservative force on the surface tension. We conclude that the MMC method is an efficient sampling scheme to compute the interfacial properties of liquid-vapor interfaces using the multibody soft potential.

Ghoufi, A.; Malfreyt, P.

2010-07-01

255

We report the calculation of the coexisting densities and surface tensions of the liquid-vapor equilibrium using the multibody dissipative particle dynamics and Monte Carlo (MMC) methods. We focus on the calculation of the surface tension by using the thermodynamic and mechanical routes. This is the first time that the test-area method has been applied to the many-body conservative potential. We discuss the mechanical equilibrium of these two-phase systems by analyzing the profiles of the normal and tangential components of the pressure tensor using the Irving-Kirkwood and Kirkwood-Buff approaches. The profile of the configurational temperature is shown to establish the thermal equilibrium of these two-phase simulations carried out with large time steps. We complete this study by showing the impact of the range of the many-body repulsive term of the conservative force on the surface tension. We conclude that the MMC method is an efficient sampling scheme to compute the interfacial properties of liquid-vapor interfaces using the multibody soft potential. PMID:20866760

Ghoufi, A; Malfreyt, P

2010-07-19

256

NASA Astrophysics Data System (ADS)

One of the most useful tools for modelling rarefied hypersonic flows is the Direct Simulation Monte Carlo (DSMC) method. Simulator particle movement and collision calculations are combined with statistical procedures to model thermal non-equilibrium flow-fields described by the Boltzmann equation. The Macroscopic Chemistry Method for DSMC simulations was developed to simplify the inclusion of complex thermal non-equilibrium chemistry. The macroscopic approach uses statistical information which is calculated during the DSMC solution process in the modelling procedures. Here it is shown how inclusion of macroscopic information in models of chemical kinetics, electronic excitation, ionization, and radiation can enhance the capabilities of DSMC to model flow-fields where a range of physical processes occur. The approach is applied to the modelling of a 6.4 km/s nitrogen shock wave and results are compared with those from existing shock-tube experiments and continuum calculations. Reasonable agreement between the methods is obtained. The quality of the comparison is highly dependent on the set of vibrational relaxation and chemical kinetic parameters employed.
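The macroscopic idea can be sketched in miniature: rather than testing individual collision pairs, the number of reaction events to perform in a cell is computed from cell-averaged quantities such as temperature. The two-parameter Arrhenius form below is standard, but the constants are placeholders, not the nitrogen parameters used in the paper.

```python
import numpy as np

def arrhenius_rate(T, A=1.0e-16, eta=-0.5, Ea_over_k=113500.0):
    """Modified Arrhenius rate coefficient k(T) = A * T^eta * exp(-Ea/kT).
    The constants here are illustrative placeholders."""
    return A * T**eta * np.exp(-Ea_over_k / T)

def expected_events(n1, n2, T_cell, dt, volume):
    """Macroscopic-chemistry step: the expected number of reaction
    events in a cell follows from the cell-averaged temperature and
    number densities rather than from individual collision pairs."""
    return arrhenius_rate(T_cell) * n1 * n2 * volume * dt

cold = expected_events(1e21, 1e21, 4000.0, 1e-8, 1e-12)
hot = expected_events(1e21, 1e21, 12000.0, 1e-8, 1e-12)
# dissociation events rise steeply with the cell temperature
```

In a full DSMC implementation the (generally fractional) expected count would be rounded stochastically and the corresponding simulator particles created or removed in that cell.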

Goldsworthy, M. J.

2012-10-01

257

Bridging the gap between quantum Monte Carlo and F12-methods

NASA Astrophysics Data System (ADS)

Tensor product approximation of pair-correlation functions opens a new route from quantum Monte Carlo (QMC) to explicitly correlated F12 methods. Thereby one benefits from stochastic optimization techniques used in QMC to get optimal pair-correlation functions which typically recover more than 85% of the total correlation energy. Our approach incorporates, in particular, core and core-valence correlation which are poorly described by homogeneous and isotropic ansatz functions usually applied in F12 calculations. We demonstrate the performance of the tensor product approximation by applications to atoms and small molecules. It turns out that the canonical tensor format is especially suitable for the efficient computation of two- and three-electron integrals required by explicitly correlated methods. The algorithm uses a decomposition of three-electron integrals, originally introduced by Boys and Handy and further elaborated by Ten-no in his 3d numerical quadrature scheme, which enables efficient computations in the tensor format. Furthermore, our method includes the adaptive wavelet approximation of tensor components where convergence rates are given in the framework of best N-term approximation theory.

Chinnamsetty, Sambasiva Rao; Luo, Hongjun; Hackbusch, Wolfgang; Flad, Heinz-Jürgen; Uschmajew, André

2012-06-01

258

NASA Astrophysics Data System (ADS)

A Monte Carlo (MC) method for modeling optical coherence tomography (OCT) measurements of a diffusely reflecting discontinuity embedded in a scattering medium is presented. For the first time to the authors' knowledge it is shown analytically that the applicability of an MC approach to this optical geometry is firmly justified, because, as we show, in the conjugate image plane the field reflected from the sample is delta-correlated from which it follows that the heterodyne signal is calculated from the intensity distribution only. This is not a trivial result because, in general, the light from the sample will have a finite spatial coherence that cannot be accounted for by MC simulation. To estimate this intensity distribution adequately we have developed a novel method for modeling a focused Gaussian beam in MC simulation. This approach is valid for a softly as well as for a strongly focused beam, and it is shown that in free space the full three-dimensional intensity distribution of a Gaussian beam is obtained. The OCT signal and the intensity distribution in a scattering medium have been obtained for several geometries with the suggested MC method; when this model and a recently published analytical model based on the extended Huygens-Fresnel principle are compared, excellent agreement is found.

Tycho, Andreas; Jørgensen, Thomas M.; Yura, Harold T.; Andersen, Peter E.

2002-11-01

259

Probability-Weighted Dynamic Monte Carlo Method for Reaction Kinetics Simulations

The reaction kinetics underlying the dynamic features of physical systems can be investigated using various approaches such as the dynamic Monte Carlo (DMC) method. Up to now, the usefulness of the DMC method to study reaction kinetics has been limited to systems where the underlying reactions occur with similar frequencies, i.e., similar rate constants. However, many interesting physical phenomena involve sets of reactions with a wide range of rate constants leading to a broad range of relevant time scales. Widely varying reaction rates result in a highly skewed reaction occurrence probability distribution. When the reaction occurrence probability distribution has a wide spectrum, the reactions with faster rates dominate the computations, making reliable statistical sampling cumbersome. We have developed a probability-weighted DMC method by incorporating the preferential sampling algorithm of equilibrium molecular simulations. This new algorithm samples the slow reactions very efficiently and makes it possible to simulate the reaction kinetics of physical systems in which the rates of reactions vary by several orders of magnitude in a computationally efficient manner. We validate the probability-weighted DMC algorithm by applying it to a model of vesicular trafficking in living cells.
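The weighting idea can be sketched independently of any particular reaction network: reactions are selected with biased probabilities that favor the slow channels, and each selection carries a corrective statistical weight so that averages remain unbiased. The rates and bias factors below are illustrative, not taken from the vesicular-trafficking model.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_reaction_choice(propensities, bias, rng):
    """Pick the next reaction with biased probabilities and return the
    statistical weight that corrects the bias (importance sampling).

    `bias` boosts the sampling of slow reactions; the returned weight
    w_i = p_true(i) / p_biased(i) keeps ensemble averages unbiased.
    """
    a = np.asarray(propensities, dtype=float)
    p_true = a / a.sum()
    b = a * np.asarray(bias, dtype=float)
    p_bias = b / b.sum()
    i = rng.choice(len(a), p=p_bias)
    return i, p_true[i] / p_bias[i]

# two reactions whose rates differ by four orders of magnitude
a = [1.0e4, 1.0]        # fast, slow
bias = [1.0, 1.0e3]     # preferentially sample the slow channel

# weighted estimate of how often the slow reaction truly fires
picks = [weighted_reaction_choice(a, bias, rng) for _ in range(20000)]
est = sum(w for i, w in picks if i == 1) / len(picks)
# est ~ a[1] / (a[0] + a[1]) ~ 1e-4, but the slow channel was actually
# sampled ~9% of the time instead of ~0.01%
```

This is the crux of the speedup: the rare channel accumulates thousands of samples per run instead of a handful, without distorting the estimated kinetics.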

Resat, Haluk (Battelle Pacific Northwest Laboratory); Wiley, H. S. (Battelle Pacific Northwest Laboratory); Dixon, David A. (Battelle Pacific Northwest Laboratory)

2000-11-01

260

A Monte Carlo (MC) method for modeling optical coherence tomography (OCT) measurements of a diffusely reflecting discontinuity embedded in a scattering medium is presented. For the first time to the authors' knowledge it is shown analytically that the applicability of an MC approach to this optical geometry is firmly justified, because, as we show, in the conjugate image plane the field reflected from the sample is delta-correlated from which it follows that the heterodyne signal is calculated from the intensity distribution only. This is not a trivial result because, in general, the light from the sample will have a finite spatial coherence that cannot be accounted for by MC simulation. To estimate this intensity distribution adequately we have developed a novel method for modeling a focused Gaussian beam in MC simulation. This approach is valid for a softly as well as for a strongly focused beam, and it is shown that in free space the full three-dimensional intensity distribution of a Gaussian beam is obtained. The OCT signal and the intensity distribution in a scattering medium have been obtained for several geometries with the suggested MC method; when this model and a recently published analytical model based on the extended Huygens-Fresnel principle are compared, excellent agreement is found. PMID:12412659

Tycho, Andreas; Jørgensen, Thomas M; Yura, Harold T; Andersen, Peter E

2002-11-01

261

The DPM (Dose Planning Method) Monte Carlo electron and photon transport program, designed for fast computation of radiation absorbed dose in external beam radiotherapy, has been adapted to the calculation of absorbed dose in patient-specific internal emitter therapy. Because both its photon and electron transport mechanics algorithms have been optimized for fast computation in 3D voxelized geometries (in particular, those

S. J. Wilderman; Y. K. Dewaraja

2007-01-01

262

The purpose of this work was to extend the verification of Monte Carlo based methods for estimating radiation dose in computed tomography (CT) exams beyond a single CT scanner to a multidetector CT (MDCT) scanner, and from cylindrical CTDI phantom measurements to both cylindrical and physical anthropomorphic phantoms. Both cylindrical and physical anthropomorphic phantoms were scanned on an MDCT under

J. J. DeMarco; C. H. Cagnon; D. D. Cody; D. M. Stevens; C. H. McCollough; J. O'Daniel; M. F. McNitt-Gray

2005-01-01

263

Research efforts towards developing a new method for calibrating in vivo measurement systems using magnetic resonance imaging (MRI) and Monte Carlo computations are discussed. The method employs the enhanced three-point Dixon technique for producing pure fat and pure water MR images of the human body. The MR images are used to define the geometry and composition of the scattering media for transport calculations using the general-purpose Monte Carlo code MCNP, Version 4. A sample case for developing the new method utilizing an adipose/muscle matrix is compared with laboratory measurements. Verification of the integrated MRI-MCNP method has been done for a specially designed phantom composed of fat, water, air, and a bone-substitute material. Implementation of the MRI-MCNP method is demonstrated for a low-energy, lung counting in vivo measurement system. Limitations and solutions regarding the presented method are discussed. 15 refs., 7 figs., 4 tabs.

Mallett, M.W.; Poston, J.W. [Texas A&M Univ., College Station, TX (United States)]; Hickman, D.P. [Los Alamos National Laboratory, NM (United States)] [and others]

1995-06-01

264

Research efforts towards developing a new method for calibrating in vivo measurement systems using magnetic resonance imaging (MRI) and Monte Carlo computations are discussed. The method employs the enhanced three-point Dixon technique for producing pure fat and pure water MR images of the human body. The MR images are used to define the geometry and composition of the scattering media for transport calculations using the general-purpose Monte Carlo code MCNP, Version 4. A sample case for developing the new method utilizing an adipose/muscle matrix is compared with laboratory measurements. Verification of the integrated MRI-MCNP method has been done for a specially designed phantom composed of fat, water, air, and a bone-substitute material. Implementation of the MRI-MCNP method is demonstrated for a low-energy, lung counting in vivo measurement system. Limitations and solutions regarding the presented method are discussed. PMID:7759255

Mallett, M W; Hickman, D P; Kruchten, D A; Poston, J W

1995-06-01

265

Quantum Monte Carlo method using phase-free random walks with Slater determinants

NASA Astrophysics Data System (ADS)

Without an exact solution to the sign/phase problem, reducing the reliance on trial wave functions is clearly of key importance to increasing the predictive power of QMC. We have developed a quantum Monte Carlo method [1] using Hubbard-Stratonovich auxiliary fields for many fermions that allows the use of any one-particle basis. It projects out the ground state by random walks in the space of Slater determinants. An approximate approach is formulated to control the phase problem with a trial wave function. For periodic systems the formalism allows arbitrary k-point sampling with complex trial wave functions. Using a plane-wave basis and non-local pseudopotentials, we apply the method to the Si atom, the Si dimer, and 2-, 16-, and 54-atom (216-electron) bulk supercells. Single Slater determinant wave functions from density functional theory calculations were used as the trial wave function with no additional optimization. The calculated dissociation energy of the Si dimer molecule and the cohesive energy of bulk Si are in excellent agreement with experiment and are comparable to the best existing theoretical results. Results for other systems will also be presented. [1] S. Zhang and H. Krakauer, cond-mat/0208340.

Zhang, Shiwei; Krakauer, Henry

2003-03-01

266

Monte-Carlo Methods in Studies of Protein Folding and Evolution

NASA Astrophysics Data System (ADS)

As was noted in a recent review [1], studies in molecular biophysics (e.g., protein folding and evolution) have undergone cyclic development. Initially, protein folding was viewed as a strictly experimental field belonging to the realm of biochemistry, where each protein is viewed as a unique system that requires its own detailed characterization, akin to any mechanism in biology. The introduction, in the early nineties, of simplified models to the field, and their success in explaining several key aspects of protein folding, such as the two-state folding of many proteins, the nucleation mechanism, and its relation to native-state topology, largely shifted thinking towards views motivated by physics. The "physics"-centered approach focuses on the statistical mechanical aspects of the folding problem by emphasizing the universality of folding scenarios over the uniqueness of folding pathways for each protein. This approach dominated theoretical thinking in the last decade (reviewed in [1-4]), and its successes brought theory and experiment closer together, transforming the protein folding field from a branch of biochemistry to a truly interdisciplinary enterprise where physics, chemistry and biology meet. An important contribution to the success of the statistical-mechanical paradigm in protein folding was the use of stochastic methods, most notably Monte Carlo. Indeed, it was understood early on (since Levinthal formulated his famous argument about protein folding) that a significant aspect of protein folding involves a massive search in conformational space, and stochastic methods are a natural choice to achieve that goal.

Shakhnovich, E.

267

The direct simulation Monte Carlo method using unstructured adaptive mesh and its application

NASA Astrophysics Data System (ADS)

The implementation of an adaptive mesh-embedding (h-refinement) scheme using unstructured grids in the two-dimensional direct simulation Monte Carlo (DSMC) method is reported. In this technique, local isotropic refinement is used to introduce new mesh where the local cell Knudsen number is less than some preset value. This simple scheme, however, has several severe consequences affecting the performance of the DSMC method. Thus, we have applied a technique to remove the hanging nodes by introducing anisotropic refinement in the interfacial cells between refined and non-refined cells. Not only does this remedy add a negligible amount of work, but it also removes all the difficulties present in the original scheme. We have tested the proposed scheme for argon gas in a high-speed driven cavity flow. The results show an improved flow resolution as compared with that of the unadapted mesh. Finally, we have used triangular adaptive mesh to compute a near-continuum gas flow, a hypersonic flow over a cylinder. The results show fairly good agreement with previous studies. In summary, the proposed simple mesh adaptation is very useful in computing rarefied gas flows, which involve both complicated geometry and highly non-uniform density variations throughout the flow field.
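The refinement criterion, flagging cells whose local cell Knudsen number falls below a preset value, can be sketched as follows for hard-sphere argon; the reference diameter and the threshold are assumed values, and the 1D strip of cells stands in for the unstructured mesh.

```python
import numpy as np

def mean_free_path(n_density, d_ref=4.17e-10):
    """Hard-sphere mean free path (m); d_ref is an assumed reference
    molecular diameter in meters for argon."""
    return 1.0 / (np.sqrt(2.0) * np.pi * d_ref**2 * n_density)

def flag_cells_for_refinement(cell_sizes, densities, kn_min=1.0):
    """Mark cells whose local cell Knudsen number lambda/dx falls
    below kn_min, i.e. cells coarser than the local mean free path."""
    lam = mean_free_path(np.asarray(densities, dtype=float))
    kn_cell = lam / np.asarray(cell_sizes, dtype=float)
    return kn_cell < kn_min

# a 1D strip of cells with a density gradient (e.g. across a shock)
sizes = np.full(8, 1.0e-3)            # 1 mm cells
dens = np.logspace(20, 23, 8)         # number density per m^3
flags = flag_cells_for_refinement(sizes, dens)
# only the high-density cells, where lambda shrinks, are flagged
```

In the paper's scheme the flagged cells would then be split isotropically, with anisotropic refinement of neighboring interfacial cells to eliminate hanging nodes.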

Wu, J.-S.; Tseng, K.-C.; Kuo, C.-H.

2002-02-01

268

In previous work, exponential convergence of Monte Carlo solutions using the reduced source method with Legendre expansion has been achieved only in one-dimensional rod and slab geometries. In this paper, the method is applied to three-dimensional (right-parallelepiped) problems, with results suggesting success. As implemented in this paper, the method approximates an angular integral of the flux with a discrete-ordinates numerical quadrature. It is possible that this approximation introduces an inconsistency that must be addressed.

Favorite, J.A.

1999-09-01

269

We propose a Monte Carlo method to study the reaction paths in nucleosynthesis during stellar evolution. Determination of reaction paths is important for obtaining the physical picture of stellar evolution. The combination of network calculations and our method gives us a better understanding of the physical picture. We apply our method to the case of the helium shell flash model in an extremely metal-poor star.

Yamamoto, K.; Hashizume, K.; Wada, T.; Ohta, M.; Suda, T. [Department of Physics, Konan University, 8-9-1 Okamoto, Kobe 653-8501 (Japan); Nishimura, T.; Fujimoto, M. Y.; Kato, K. [Division of Science, Hokkaido University, Sapporo 060-0810 (Japan); Meme Media Laboratory, Hokkaido University, Sapporo 060-0813 (Japan); Aikawa, M. [Institut d'Astronomie et d'Astrophysique, C.P.226, Universite Libre de Bruxelles, B-1050 Brussels (Belgium)

2006-07-12

270

Variational Monte Carlo Methods for Strongly Correlated Quantum Systems on Multileg Ladders

NASA Astrophysics Data System (ADS)

Quantum mechanical systems of strongly interacting particles in two dimensions comprise a realm of condensed matter physics for which there remain many unanswered theoretical questions. In particular, the most formidable challenges may lie in cases where the ground states show no signs of ordering, break no symmetries, and support many gapless excitations. Such systems are known to exhibit exotic, disordered ground states that are notoriously difficult to study analytically using traditional perturbation techniques or numerically using the most recent methods (e.g., tensor network states) due to the large amount of spatial entanglement. Slave particle descriptions provide a glimmer of hope in the attempt to capture the fundamental, low-energy physics of these highly non-trivial phases of matter. To this end, this dissertation describes the construction and implementation of trial wave functions for use with variational Monte Carlo techniques that can easily model slave particle states. While these methods are extremely computationally tractable in two dimensions, we have applied them here to quasi-one-dimensional systems so that the results of other numerical techniques, such as the density matrix renormalization group, can be directly compared to those determined by the trial wave functions and so that exclusively one-dimensional analytic approaches, namely bosonization, can be employed. While the focus here is on the use of variational Monte Carlo, the sum of these different numerical and analytical tools has yielded a remarkable amount of insight into several exotic quantum ground states. In particular, the results of research on the d-wave Bose liquid phase, an uncondensed state of strongly correlated hard-core bosons living on the square lattice whose wave function exhibits a d-wave sign structure, and the spin Bose-metal phase, a spin-1/2, SU(2) invariant spin liquid of strongly correlated spins living on the triangular lattice, will be presented. 
Both phases support gapless excitations along surfaces in momentum space in two spatial dimensions and at incommensurate wave vectors in quasi-one dimension, where we have studied them on three- and four-leg ladders. An extension of this work to the study of d-wave correlated itinerant electrons will be discussed.

Block, Matthew S.

271

Quantum Monte Carlo Method for Materials --- Random Walks in Slater Determinant Space

NASA Astrophysics Data System (ADS)

In order to reliably predict materials properties, it is critical to have accurate and robust calculations at the most fundamental level. Often the desired effects of the materials originate from electron interaction and correlation effects, and small errors in treating such effects will result in crucial and qualitative differences in the properties. Density functional approaches, despite their tremendous success in allowing detailed microscopic calculations of a variety of materials, are not always reliable. We have developed a new quantum Monte Carlo (QMC) method [1] for treating electron correlations. Similar to existing QMC methods, it allows calculations of ground-state equilibrium properties in CPU times that scale as a power law with system size. In addition, it allows direct incorporation of state-of-the-art techniques (non-local pseudopotentials; high quality basis sets) from the very best mean-field calculations into a true many-body framework. The method projects out the many-body ground state by random walks in the space of Slater determinants. An approximate approach is formulated to control the phase problem with a trial wave function. The method allows the use of any one-particle basis. Using a plane-wave basis and non-local pseudopotentials, we apply the method to Be, Si, P atoms and dimers, and to bulk Si with 2-, 16-, and 54-atom (216-electron) supercells. Single Slater determinant wave functions from density functional theory calculations were used as the trial wave function with no additional optimization. The calculated dissociation energy of the dimer molecules and the cohesive energy of bulk Si are in excellent agreement with experiment and are comparable to or better than the best existing theoretical results. [1] Shiwei Zhang and Henry Krakauer, Phys. Rev. Lett. 90, 136401 (2003).
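The projection idea can be illustrated at a drastically reduced scale. The hedged sketch below runs walkers that stochastically apply the propagator G = 1 - tau*H to a toy 8-state tight-binding Hamiltonian; the growth of the total walker weight estimates the ground-state energy. This is only the bare projector-Monte-Carlo mechanism: the paper's method walks in Slater-determinant space and controls the phase problem with a trial wave function, none of which is reproduced here, and all entries of G are non-negative for this toy H, so no sign or phase problem arises.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy Hamiltonian: 8-site tight-binding chain with a linear potential.
n, tau, n_walk = 8, 0.1, 2000
H = (np.diag(np.linspace(0.0, 2.0, n))
     - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
E0_exact = np.linalg.eigvalsh(H)[0]     # exact answer, for comparison

G = np.eye(n) - tau * H                 # projector; all entries >= 0 here
colsum = G.sum(axis=0)
cdf = np.cumsum(G / colsum, axis=0)     # per-column transition CDFs

states = np.zeros(n_walk, dtype=int)    # all walkers start in state 0
E_samples = []
for it in range(500):
    mult = colsum[states]               # weight carried by each move
    u = rng.uniform(size=n_walk)
    states = (u[None, :] > cdf[:, states]).sum(axis=0)  # stochastic hop
    if it >= 100:                       # growth estimator after burn-in
        E_samples.append((1.0 - mult.mean()) / tau)
    # population control: resample walkers proportionally to their weight
    states = states[rng.choice(n_walk, size=n_walk, p=mult / mult.sum())]

E0_est = np.mean(E_samples)             # stochastic estimate of E0_exact
```

The mean weight multiplier per step converges to the dominant eigenvalue 1 - tau*E0 of G, so the growth estimator recovers E0 without any discretization bias for this linear propagator.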

Zhang, S.; Krakauer, H.

2003-12-01

272

A Sequential Monte Carlo Method for Bayesian Analysis of Massive Datasets

Markov chain Monte Carlo (MCMC) techniques revolutionized statistical practice in the 1990s by providing an essential toolkit for making the rigor and flexibility of Bayesian analysis computationally practical. At the same time the increasing prevalence of massive datasets and the expansion of the field of data mining have created the need for statistically sound methods that scale to these large problems. Except for the most trivial examples, current MCMC methods require a complete scan of the dataset for each iteration, eliminating their candidacy as feasible data mining techniques. In this article we present a method for making Bayesian analysis of massive datasets computationally feasible. The algorithm simulates from a posterior distribution that conditions on a smaller, more manageable portion of the dataset. The remainder of the dataset may be incorporated by reweighting the initial draws using importance sampling. Computation of the importance weights requires a single scan of the remaining observations. While importance sampling increases efficiency in data access, it comes at the expense of estimation efficiency. A simple modification, based on the rejuvenation step used in particle filters for dynamic systems models, sidesteps the loss of efficiency with only a slight increase in the number of data accesses. To show proof-of-concept, we demonstrate the method on two examples. The first is a mixture of transition models that has been used to model web traffic and robotics. For this example we show that estimation efficiency is not affected while offering a 99% reduction in data accesses. The second example applies the method to Bayesian logistic regression and yields a 98% reduction in data accesses.
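A minimal sketch of the core idea, on a hypothetical normal-mean model rather than the article's examples (the rejuvenation step is omitted): draw from the posterior conditioned on a subset, then reweight those draws with importance weights computed in a single scan of the remaining observations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "massive" dataset: normal data with unknown mean theta,
# known sigma = 1, flat prior.  Condition on a small subset first.
data = rng.normal(2.0, 1.0, size=100_000)
subset, rest = data[:1_000], data[1_000:]

# Step 1: draw from the posterior given only the subset
# (the conjugate posterior here is N(mean(subset), 1/n)).
n = len(subset)
draws = rng.normal(subset.mean(), 1.0 / np.sqrt(n), size=5_000)

# Step 2: reweight the draws by the likelihood of the remaining data.
# A single scan of `rest` suffices: the normal log-likelihood depends
# on the data only through sum(rest) and len(rest).
s, m = rest.sum(), len(rest)
log_w = draws * s - 0.5 * m * draws**2    # log-likelihood up to a constant
log_w -= log_w.max()                      # stabilize before exponentiating
w = np.exp(log_w)
w /= w.sum()

post_mean = np.sum(w * draws)   # approximates the full-data posterior mean
```

The weighted mean agrees closely with the mean of the full-data posterior, at the cost of one extra pass over the remaining observations.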

Ridgeway, Greg; Madigan, David

2009-01-01

273

NASA Astrophysics Data System (ADS)

A reverse Monte Carlo (RMC) method is developed to obtain the energy loss function (ELF) and optical constants from a measured reflection electron energy-loss spectroscopy (REELS) spectrum by an iterative Monte Carlo (MC) simulation procedure. The method combines the simulated annealing method, i.e., a Markov chain Monte Carlo (MCMC) sampling of oscillator parameters, surface and bulk excitation weighting factors, and band gap energy, with a conventional MC simulation of electron interaction with solids, which acts as a single step of MCMC sampling in this RMC method. To examine the reliability of this method, we have verified that the output data of the dielectric function are essentially independent of the initial values of the trial parameters, which is a basic property of an MCMC method. The optical constants derived for SiO2 in the energy loss range of 8-90 eV are in good agreement with other available data, and relevant bulk ELFs are checked by oscillator strength-sum and perfect-screening-sum rules. Our results show that the dielectric function can be obtained by the RMC method even with a wide range of initial trial parameters. The RMC method is thus a general and effective method for determining the optical properties of solids from REELS measurements.

Da, B.; Sun, Y.; Mao, S. F.; Zhang, Z. M.; Jin, H.; Yoshikawa, H.; Tanuma, S.; Ding, Z. J.

2013-06-01

274

3D dose distribution calculation in a voxelized human phantom by means of Monte Carlo method

The aim of this work is to provide the reconstruction of a real human voxelized phantom by means of a MatLab® program and the simulation of the irradiation of such phantom with the photon beam generated in a Theratron 780® (MDS Nordion) 60Co radiotherapy unit, by using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. The project

V. Abella; R. Miró; B. Juste; G. Verdú

2010-01-01

275

Bead-Fourier path-integral Monte Carlo method applied to systems of identical particles

To make the path-integral Monte Carlo (PIMC) method more effective and practical in application to systems of identical particles with strong interactions, we introduce a combined bead-Fourier (BF) PIMC approach with the ordinary bead method and the Fourier PIMC method of Doll and Freeman [J. Chem. Phys. 80, 2239 (1984); 80, 5709 (1984)] being its extreme cases. Optimal choice of the number of beads and of Fourier components enables us to reproduce reliably the ground-state energy and electron density distribution in the H atom as well as the exact data for the harmonic oscillator. Applying the BF method to systems of identical particles, we use the procedure of simultaneous accounting for all classes of permutations suggested in the previous work [Phys. Rev. A 48, 4075 (1993)] with subsequent symmetrization of the exchange factor in the weight function to make the sign problem milder. A procedure of random walk in the spin space enables us to obtain spin-dependent averages. We derived exact partition functions and canonical averages for a model system of N noninteracting identical particles (N=2,3,4,…) with spin (fermions or bosons) in a d-dimensional harmonic field (d=1,2,3) that provided a reliable test of the developed MC procedures. Simulations for N=2,3 reproduce well the exact dependencies. Further simulations showed how gradual switching on of the electrostatic repulsion between particles in this system results in significant weakening of the exchange effects. © 1997 The American Physical Society

Vorontsov-Velyaminov, P.N.; Nesvit, M.O.; Gorbunov, R.I. [Faculty of Physics, St. Petersburg State University, 198904, St. Petersburg (Russia)

1997-02-01

276

Reduced Monte Carlo methods for the solution of stochastic groundwater flow problems

NASA Astrophysics Data System (ADS)

Reduced order modeling is often employed to decrease the computational cost of numerical solutions of parametric Partial Differential Equations. Reduced basis, balanced truncation, and projection methods are among the most studied techniques for achieving model reduction. We study the applicability of snapshot-based Proper Orthogonal Decomposition (POD) to Monte Carlo (MC) simulations applied to the solution of the stochastic groundwater flow problem. POD model reduction is obtained by projecting the model equations onto a space generated by a small number of basis functions (principal components). These are obtained upon exploring the solution (probability) space with snapshots, i.e., system states obtained by solving the original process-based equations. The reduced model is then employed to complete the ensemble by adding multiple realizations. We apply this technique to a two-dimensional simulation of steady state saturated groundwater flow, and explore the sensitivity of the method to the number of snapshots and associated principal components in terms of accuracy and efficiency of the overall MC procedure. In our preliminary results, we distinguish the problem of heterogeneous recharge, in which the stochastic term is confined to the forcing function (additive stochasticity), from the case of heterogeneous hydraulic conductivity, in which the stochastic term is multiplicative. In the first scenario, the linearity of the problem is fully exploited and the POD approach yields accurate and efficient realizations, leading to substantial speed up of the MC method. The second scenario poses a significant challenge, as the adoption of a few snapshots based on the full model does not provide enough variability in the reduced order replicates, thus leading to poor convergence of the MC method. We find that increasing the number of snapshots improves the convergence of MC but only for large integral scales of the log-conductivity field.
The technique is then extended to take full advantage of the solution of moment differential equations of groundwater flow.
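The snapshot-POD procedure can be sketched for the favorable additive-stochasticity case on a toy one-dimensional steady-flow problem; the grid size, number of snapshots, and random recharge model below are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Full model: 1-D finite-difference "flow" operator A h = f with fixed
# conductivity, so the random recharge f enters additively.
nx = 200
A = (np.diag(2.0 * np.ones(nx))
     - np.diag(np.ones(nx - 1), 1) - np.diag(np.ones(nx - 1), -1))

def solve_full(f):
    return np.linalg.solve(A, f)

# Snapshots: a few full-model solutions for random recharge realizations.
snapshots = np.column_stack([solve_full(rng.normal(size=nx))
                             for _ in range(30)])
U, S, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :10]                 # leading principal components

A_red = Phi.T @ A @ Phi         # Galerkin projection of the operator

# Complete the ensemble with a cheap reduced-model realization and
# compare it against the corresponding full solve.
f_new = rng.normal(size=nx)
h_full = solve_full(f_new)
h_pod = Phi @ np.linalg.solve(A_red, Phi.T @ f_new)
rel_err = np.linalg.norm(h_full - h_pod) / np.linalg.norm(h_full)
```

Because the solution operator strongly damps high-frequency modes, a ten-dimensional basis built from thirty snapshots already reproduces new realizations to within a few percent, which is what makes the additive case attractive for MC acceleration.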

Pasetto, D.; Guadagnini, A.; Putti, M.

2012-04-01

277

Statistical analysis of chemical transformation kinetics using Markov-Chain Monte Carlo methods.

For the risk assessment of chemicals intentionally released into the environment, as, e.g., pesticides, it is indispensable to investigate their environmental fate. Main characteristics in this context are transformation rates and partitioning behavior. In most cases the relevant parameters are not directly measurable but are determined indirectly from experimentally determined concentrations in various environmental compartments. Usually this is done by fitting mathematical models, which are typically nonlinear, to the observed data and thus deriving estimates of the parameter values. Statistical analysis is then used to judge the uncertainty of the estimates. Of particular interest in this context is the question whether degradation rates are significantly different from zero. Standard procedure is to use nonlinear least-squares methods to fit the models and to estimate the standard errors of the estimated parameters from Fisher's Information matrix and estimated level of measurement noise. This, however, frequently leads to counterintuitive results as the estimated probability distributions of the parameters based on local linearization of the optimized models are often too wide or at least differ significantly in shape from the real distribution. In this paper we identify the shortcoming of this procedure and propose a statistically valid approach based on Markov-Chain Monte Carlo sampling that is appropriate to determine the real probability distribution of model parameters. The effectiveness of this method is demonstrated on three data sets. Although it is generally applicable to different problems where model parameters are to be inferred, in the present case for simplicity we restrict the discussion to the evaluation of metabolic degradation of chemicals in soil. It is shown that the method is successfully applicable to problems of different complexity. We applied it to kinetic data from compounds with one and five metabolites.
Additionally, using simulated data, it is shown that the MCMC method estimates the real probability distributions of parameters well and much better than the standard optimization approach. PMID:21526818
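As a minimal illustration of the approach (not the authors' code), a random-walk Metropolis sampler can recover the posterior of a first-order degradation rate from simulated decay data; the experiment, sampling times, and noise level below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical first-order decay experiment: C(t) = C0 * exp(-k t) + noise
t = np.array([0.0, 3.0, 7.0, 14.0, 28.0, 56.0])   # sampling days
true_C0, true_k, sigma = 100.0, 0.08, 2.0
obs = true_C0 * np.exp(-true_k * t) + rng.normal(0, sigma, size=t.size)

def log_post(theta):
    C0, k = theta
    if C0 <= 0 or k < 0:
        return -np.inf                    # flat priors on C0 > 0, k >= 0
    resid = obs - C0 * np.exp(-k * t)
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis over (C0, k)
theta = np.array([90.0, 0.05])
lp = log_post(theta)
chain = []
for _ in range(20_000):
    prop = theta + rng.normal(0, [1.0, 0.005])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

chain = np.array(chain[5_000:])           # discard burn-in
k_samples = chain[:, 1]                   # posterior samples of the rate
```

The retained samples approximate the full posterior of k, so credible intervals (e.g. `np.percentile(k_samples, [2.5, 97.5])`) come out directly, with no linearization around the optimum.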

Görlitz, Linus; Gao, Zhenglei; Schmitt, Walter

2011-04-28

278

HRMC_1.1: Hybrid Reverse Monte Carlo method with silicon and carbon potentials

NASA Astrophysics Data System (ADS)

The Hybrid Reverse Monte Carlo (HRMC) code models the atomic structure of materials via the use of a combination of constraints including experimental diffraction data and an empirical energy potential. This energy constraint takes the form of either the Environment Dependent Interatomic Potential (EDIP) for carbon and silicon, or the original and modified Stillinger-Weber potentials applicable to silicon. In this version, an update is made to correct an error in the EDIP carbon energy calculation routine.
New version program summary
Program title: HRMC version 1.1
Catalogue identifier: AEAO_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAO_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 36,991
No. of bytes in distributed program, including test data, etc.: 907,800
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: Any computer capable of running executables produced by the g77 Fortran compiler.
Operating system: Unix, Windows
RAM: Depends on the type of empirical potential used, number of atoms and which constraints are employed.
Classification: 7.7
Catalogue identifier of previous version: AEAO_v1_0
Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 777
Does the new version supersede the previous version?: Yes
Nature of problem: Atomic modelling using empirical potentials and experimental data.
Solution method: Monte Carlo
Reasons for new version: An error in a term associated with the calculation of energies using the EDIP carbon potential, which results in incorrect energies.
Summary of revisions: Fix to correct brackets in the two-body part of the EDIP carbon potential routine.
Additional comments: The code is not standard FORTRAN 77 but includes some additional features, and therefore generates errors when compiled using the Nag95 compiler. It does compile successfully with the GNU g77 compiler (http://www.gnu.org/software/fortran/fortran.html).
Running time: Depends on the type of empirical potential used, number of atoms and which constraints are employed. The test included in the distribution took 37 minutes on a DEC Alpha PC.

Opletal, G.; Petersen, T. C.; O'Malley, B.; Snook, I. K.; McCulloch, D. G.; Yarovsky, I.

2011-02-01

279

Dosimetric validation of Acuros XB with Monte Carlo methods for photon dose calculations

Purpose: The dosimetric accuracy of the recently released Acuros XB advanced dose calculation algorithm (Varian Medical Systems, Palo Alto, CA) is investigated for single radiation fields incident on homogeneous and heterogeneous geometries, and a comparison is made to the analytical anisotropic algorithm (AAA). Methods: Ion chamber measurements for the 6 and 18 MV beams within a range of field sizes (from 4.0×4.0 to 30.0×30.0 cm²) are used to validate Acuros XB dose calculations within a unit density phantom. The dosimetric accuracy of Acuros XB in the presence of lung, low-density lung, air, and bone is determined using BEAMnrc/DOSXYZnrc calculations as a benchmark. Calculations using the AAA are included for reference to a current superposition/convolution standard. Results: Basic open field tests in a homogeneous phantom reveal an Acuros XB agreement with measurement to within ±1.9% in the inner field region for all field sizes and energies. Calculations on a heterogeneous interface phantom were found to agree with Monte Carlo calculations to within ±2.0% (σ_MC = 0.8%) in lung (ρ = 0.24 g cm⁻³) and within ±2.9% (σ_MC = 0.8%) in low-density lung (ρ = 0.1 g cm⁻³). In comparison, differences of up to 10.2% and 17.5% in lung and low-density lung were observed in the equivalent AAA calculations. Acuros XB dose calculations performed on a phantom containing an air cavity (ρ = 0.001 g cm⁻³) were found to be within the range of ±1.5% to ±4.5% of the BEAMnrc/DOSXYZnrc calculated benchmark (σ_MC = 0.8%) in the tissue above and below the air cavity. A comparison of Acuros XB dose calculations performed on a lung CT dataset with a BEAMnrc/DOSXYZnrc benchmark shows agreement within ±2%/2 mm and indicates that the remaining differences are primarily a result of differences in physical material assignments within a CT dataset.
Conclusions: By considering the fundamental particle interactions in matter based on theoretical interaction cross sections, the Acuros XB algorithm is capable of modeling radiotherapy dose deposition with accuracy only previously achievable with Monte Carlo techniques.

Bush, K.; Gagne, I. M.; Zavgorodni, S.; Ansbacher, W.; Beckham, W. [Department of Medical Physics, British Columbia Cancer Agency-Vancouver Island Center, Victoria, British Columbia V8R 6V5 (Canada)

2011-04-15

280

Partial site occupancy structure of decagonal AlNiCo using Monte-Carlo methods

NASA Astrophysics Data System (ADS)

The structure of decagonal AlNiCo was modeled using quasilattice-gas Monte-Carlo methods (M. Mihalkovic et al., arXiv:cond-mat/0102085, 2001) with fixed ideal sites and realistic stoichiometry. Site occupancies and pair correlation functions were computed from the simulations to determine aluminum and transition metal atom concentrations at various sites. The results were compared to structures refined from experimental data [2,3]. The experimental Patterson function [2] finds the same positions for the near neighbours, and the site occupancies [3] show similar locations for Al and TM atoms. At some ideal sites a systematic depletion of Al occupancy was found. We found that certain ideal sites relax in directions in agreement with experiment [3], and the site occupancies for the relaxed structure are closer to experimental values. [2] A. Cervellino, T. Haibach and W. Steurer (preprint, 2001). [3] H. Takakura et al., Acta Cryst. A 57, 576-85 (2001).

Naidu, Siddartha; Widom, Mike; Mihalkovic, Marek

2002-03-01

281

NASA Astrophysics Data System (ADS)

The electronic structure and dielectric property in electronic ferroelectricity, where electric polarization is driven by an electronic charge order without inversion symmetry, are studied. Motivated by layered iron oxides, the roles of quantum fluctuation in ferroelectricity in a paired-triangular lattice are focused on. Three types of V-t model, where the intersite Coulomb interaction V and the electron transfer t for spinless fermions are taken into account, are examined by the variational Monte-Carlo method with the Gutzwiller-type correlation factor. It is shown that the electron transfer between the triangular layers corresponding to the interlayer polarization fluctuation promotes a three-fold charge order associated with electric polarization. This result is in high contrast to the usual result observed in the hydrogen-bond type ferroelectricities and quantum paraelectric oxides, where the ferroelectric order is suppressed by quantum fluctuation. The spin degree of freedom of electrons and a realistic interlayer geometry for layered iron oxides further stabilize the polar charge ordered state. The implications of the numerical results for layered iron oxides are discussed.

Watanabe, Tsutomu; Ishihara, Sumio

2010-11-01

282

Monte Carlo methods for radiative transfer in quasi-isothermal participating media

NASA Astrophysics Data System (ADS)

Based on the superposition principle, we propose in this study a Monte Carlo (MC) formulation for radiative transfer in quasi-isothermal media which consists in directly computing the difference between the actual radiative field and the equilibrium radiative field at the minimum temperature in the medium. This shift formulation is implemented for the prediction of radiative fluxes and volumetric powers in a combined free convection-radiation problem where a differentially heated cubical cavity is filled with air with a small amount of H2O and CO2. High resolution spectra are used to describe radiative properties of the gas in this 3D configuration. We show that, compared to the standard analog MC method, the shift approach leads to a huge saving of required computational times to reach a given convergence level. In addition, this approach is compared to reciprocal MC formulations and is shown to be more efficient for the prediction of wall fluxes but slightly less efficient for volumetric powers.

Soucasse, Laurent; Rivière, Philippe; Soufiani, Anouar

2013-10-01

283

A kinetic Monte Carlo method for the simulation of massive phase transformations

A multi-lattice kinetic Monte Carlo method has been developed for the atomistic simulation of massive phase transformations. Besides sites on the crystal lattices of the parent and product phase, randomly placed sites are incorporated as possible positions. These random sites allow the atoms to take favourable intermediate positions, essential for a realistic description of transformation interfaces. The transformation from fcc to bcc starting from a flat interface with the fcc(1 1 1)//bcc(1 1 0) and fcc[1 1 1-bar]//bcc[0 0 1-bar] orientation in a single component system has been simulated. Growth occurs in two different modes depending on the chosen values of the bond energies. For larger fcc-bcc energy differences, continuous growth is observed with a rough transformation front. For smaller energy differences, plane-by-plane growth is observed. In this growth mode two-dimensional nucleation is required in the next fcc plane after completion of the transformation of the previous fcc plane.
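The rejection-free kinetic Monte Carlo loop underlying such simulations — build a rate catalog, pick an event with probability proportional to its rate, advance the clock by an exponential waiting time — can be sketched on a toy two-phase flip model; the rates and system size below are illustrative, not the paper's fcc/bcc energetics.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy system: N sites, each in the "parent" phase (0) or "product" phase (1).
# The forward rate exceeds the backward rate, so the product phase grows.
N, k01, k10 = 500, 1.0, 0.25
state = np.zeros(N, dtype=int)
t = 0.0

for _ in range(20_000):
    # Rate catalog: each site has one possible flip event.
    rates = np.where(state == 0, k01, k10)
    R = rates.sum()
    # Pick an event with probability proportional to its rate...
    i = rng.choice(N, p=rates / R)
    state[i] ^= 1
    # ...and advance the clock by an exponential waiting time.
    t += rng.exponential(1.0 / R)

frac = state.mean()   # approaches k01 / (k01 + k10) = 0.8 at equilibrium
```

Because every step executes an event, the method spends no time on rejected moves, which is what makes kinetic Monte Carlo efficient for activated processes with widely separated rates.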

Bos, C.; Sommer, F.; Mittemeijer, E.J

2004-07-12

284

In vivo simulation environment for fluorescence molecular tomography using Monte Carlo method

NASA Astrophysics Data System (ADS)

Optical sensing of specific molecular targets using near-infrared light has been recognized as a crucial technology with the potential to change the future of medicine. Fluorescence Molecular Tomography (FMT) is among the most novel technologies in optical sensing: it uses near-infrared light (600-900 nm) together with fluorochrome probes to perform noncontact, three-dimensional imaging of live molecular targets and to reveal molecular processes in vivo. To address the forward-simulation problem in FMT, this paper introduces a new simulation model. The model is based on the Monte Carlo method and is implemented in the C++ programming language. Its accuracy has been verified by comparison with analytic solutions and with MOSE from the University of Iowa and the Chinese Academy of Sciences. The main features of the model are that it can simulate both bioluminescent imaging and FMT, perform analytic calculations, and support multiple sources and CCD detectors simultaneously. It can thus generate sufficient, well-prepared data for the study of fluorescence molecular tomography.

Zhang, Yizhai; Xu, Qiong; Li, Jin; Tang, Shaojie; Zhang, Xin

2008-12-01

285

Markov chain Monte Carlo methods for assigning larvae to natal sites using natural geochemical tags.

Geochemical signatures deposited in otoliths are a potentially powerful means of identifying the origin and dispersal history of fish. However, current analytical methods for assigning natal origins of fish in mixed-stock analyses require knowledge of the number of potential sources and their characteristic geochemical signatures. Such baseline data are difficult or impossible to obtain for many species. A new approach to this problem can be found in iterative Markov Chain Monte Carlo (MCMC) algorithms that simultaneously estimate population parameters and assign individuals to groups. MCMC procedures only require an estimate of the number of source populations, and post hoc model selection based on the deviance information criterion can be used to infer the correct number of chemically distinct sources. We describe the basics of the MCMC approach and outline the specific decisions required when implementing the technique with otolith geochemical data. We also illustrate the use of the MCMC approach on simulated data and empirical geochemical signatures in otoliths from young-of-the-year and adult weakfish, Cynoscion regalis, from the U.S. Atlantic coast. While we describe how investigators can use MCMC to complement existing analytical tools for use with otolith geochemical data, the MCMC approach is suitable for any mixed-stock problem with continuous, multivariate data. PMID:19263887
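A stripped-down version of the idea — simultaneously sampling group parameters and individual assignments — is a Gibbs sampler for a two-component mixture. The one-dimensional "signature", known unit variances, and known number of sources below are simplifying assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical data: a 1-D geochemical signature from two natal sites.
x = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(4.0, 1.0, 40)])
K, n = 2, len(x)

mu = np.array([-1.0, 1.0])           # initial guesses for the site means
z = rng.integers(0, K, n)            # initial labels
counts = np.zeros((n, K))

for it in range(2_000):
    # (a) sample each fish's label from its full conditional (equal priors)
    logp = -0.5 * (x[:, None] - mu[None, :])**2
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = (rng.uniform(size=n)[:, None] > p[:, :1]).astype(int).ravel()  # K=2 shortcut
    # (b) sample each site mean given its members (flat prior -> N(mean, 1/m))
    for k in range(K):
        m = (z == k).sum()
        if m:
            mu[k] = rng.normal(x[z == k].mean(), 1.0 / np.sqrt(m))
    if it >= 500:                    # accumulate post-burn-in assignments
        counts[np.arange(n), z] += 1

assign = counts.argmax(axis=1)       # posterior-mode assignment per fish
```

The accumulated label frequencies give each individual's posterior assignment probability, so uncertain fish are flagged automatically rather than forced into a group.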

White, J Wilson; Standish, Julie D; Thorrold, Simon R; Warner, Robert R

2008-12-01

286

Monte Carlo analysis of thermochromatography as a fast separation method for nuclear forensics

Nuclear forensic science has become increasingly important for global nuclear security, and enhancing the timeliness of forensic analysis has been established as an important objective in the field. New, faster techniques must be developed to meet this objective. Current approaches for the analysis of minor actinides, fission products, and fuel-specific materials require time-consuming chemical separation coupled with measurement through either nuclear counting or mass spectrometry. These very sensitive measurement techniques can be hindered by impurities or incomplete separation in even the most painstaking chemical separations. High-temperature gas-phase separation or thermochromatography has been used in the past for rapid separations in the study of newly created elements and as a basis for chemical classification of those elements. This work examines the potential for rapid separation of gaseous species to be applied in nuclear forensic investigations. Monte Carlo modeling has been used to evaluate the potential utility of the thermochromatographic separation method, although this assessment is necessarily limited due to the lack of available experimental data for validation.

Hall, Howard L [ORNL

2012-01-01

287

Yet another application of the Monte Carlo method for modeling in the field of biomedicine.

By means of Monte Carlo simulations performed in the C programming language, an example of scientific programming for the generation of pseudorandom numbers relevant to both teaching and research in the field of biomedicine is presented. The relatively simple algorithm proposed makes possible the statistical analysis of sequences of random numbers. The following three generators of pseudorandom numbers were used: the rand function contained in the stdlib.h library of the C programming language, Marsaglia's generator, and a chaotic function. The statistical properties of the sequences generated were compared, identical parameter values being adopted for this purpose. The properties of two estimators in finite samples of the pseudorandom numbers were also evaluated and, under suitable conditions, both the maximum-likelihood and the method-of-moments approaches proved to be good estimators. The findings demonstrated that the proposed algorithm appears to be suitable for the analysis of data from random experiments, indicating that it has a large variety of possible applications in clinical practice. PMID:15899307
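A hedged sketch of such a comparison: three simple generators — a C-rand-style linear congruential generator, a xorshift variant standing in for Marsaglia's generator (the article does not say which of his generators was used), and the chaotic logistic map — scored with a chi-square test of uniformity.

```python
import numpy as np

def lcg(seed, n, a=1103515245, c=12345, m=2**31):
    # glibc-style linear congruential generator, scaled to [0, 1)
    out, x = np.empty(n), seed
    for i in range(n):
        x = (a * x + c) % m
        out[i] = x / m
    return out

def xorshift32(seed, n):
    # 32-bit xorshift, a stand-in for "Marsaglia's generator"
    out, x = np.empty(n), seed
    for i in range(n):
        x ^= (x << 13) & 0xFFFFFFFF
        x ^= x >> 17
        x ^= (x << 5) & 0xFFFFFFFF
        out[i] = x / 2**32
    return out

def logistic_map(x0, n, r=4.0):
    # chaotic map; at r = 4 the invariant density is NOT uniform,
    # which a uniformity test should detect
    out, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

def chi2_uniformity(u, bins=20):
    obs, _ = np.histogram(u, bins=bins, range=(0.0, 1.0))
    exp = len(u) / bins
    return np.sum((obs - exp)**2 / exp)

n = 20_000
stats = {name: chi2_uniformity(g) for name, g in [
    ("lcg", lcg(42, n)), ("xorshift", xorshift32(42, n)),
    ("logistic", logistic_map(0.3, n))]}
```

With 20 bins (19 degrees of freedom) the congruential and xorshift streams give chi-square values near 19, while the logistic map fails by orders of magnitude: raw chaotic iterates are not uniform deviates.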

Cassia-Moura, R; Sousa, C S; Ramos, A D; Coelho, L C B B; Valença, M M

2005-04-08

288

In calculating neutron-transport problems by the Monte Carlo method, the multigroup approximation or the subgroup method is used to describe the energy dependence of the cross sections, as a rule. In the group approach the micro cross sections σi, where i is the isotope number, are constant within the limits of the group. The macro cross sections are uniquely determined

P. A. Androsenko; T. A. Artem'eva

1987-01-01

289

Tracer diffusion in an ordered alloy: application of the path probability and Monte Carlo methods

Tracer diffusion technique has been extensively utilized to investigate diffusion phenomena and has contributed a great deal to the understanding of the phenomena. However, except for self diffusion and impurity diffusion, the meaning of tracer diffusion is not yet satisfactorily understood. Here we try to extend the understanding to concentrated alloys. Our major interest here is directed towards understanding the physical factors which control diffusion through the comparison of results obtained by the Path Probability Method (PPM) and those by the Monte Carlo simulation method (MCSM). Both the PPM and the MCSM are basically in the same category of statistical mechanical approaches applicable to random processes. The advantage of the Path Probability method in dealing with phenomena which occur in crystalline systems has been well established. However, the approximations which are inevitably introduced to make the analytical treatment tractable, although their meaning may be well-established in equilibrium statistical mechanics, sometimes introduce unwarranted consequences the origin of which is often hard to trace. On the other hand, the MCSM which can be carried out in a parallel fashion to the PPM provides, with care, numerically exact results. Thus a side-by-side comparison can give insight into the effect of approximations in the PPM. It was found that in the pair approximation of the CVM, the distribution in the completely random state is regarded as homogeneous (without fluctuations), and hence, the fluctuation in distribution is not well represented in the PPM. These examples thus show clearly how the comparison of analytical results with carefully carried out calculations by the MCSM guides the progress of theoretical treatments and gives insights into the mechanism of diffusion.

Sato, Hiroshi; Akbar, S.A.; Murch, G.E.

1984-01-01

290

MC-Net: a method for the construction of phylogenetic networks based on the Monte-Carlo method

Background A phylogenetic network is a generalization of phylogenetic trees that allows the representation of conflicting signals or alternative evolutionary histories in a single diagram. There are several methods for constructing these networks, some of which are based on distances among taxa. In practice, the distance-based methods are faster than the others. The Neighbor-Net (N-Net) is a distance-based method: it produces a circular ordering from a distance matrix, then constructs a collection of weighted splits using this circular ordering. SplitsTree, a program that uses these weighted splits, then draws a phylogenetic network. In general, finding an optimal circular ordering is an NP-hard problem. The N-Net is a heuristic algorithm for finding an optimal circular ordering which is based on the neighbor-joining algorithm. Results In this paper, we present a heuristic algorithm for finding an optimal circular ordering based on the Monte-Carlo method, called the MC-Net algorithm. In order to show that MC-Net performs better than N-Net, we apply both algorithms to different data sets. We then draw the phylogenetic networks corresponding to the outputs of these algorithms using SplitsTree and compare the results. Conclusions We find that the circular ordering produced by MC-Net is closer to the optimal circular ordering than that of N-Net. Furthermore, the networks corresponding to the outputs of MC-Net made by SplitsTree are simpler than those of N-Net.
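A Monte-Carlo search for a circular ordering can be sketched as simulated annealing over random transpositions; the sum of distances between circularly adjacent taxa is used below as a simple stand-in objective, since the actual MC-Net criterion and move set are not specified here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical distance matrix: 12 taxa with an underlying circular
# geometry, plus noise.
m = 12
angles = np.sort(rng.uniform(0, 2 * np.pi, m))
pts = np.column_stack([np.cos(angles), np.sin(angles)])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
D += rng.uniform(0, 0.05, (m, m))
D = (D + D.T) / 2
np.fill_diagonal(D, 0)

def cost(order):
    # total distance between circularly adjacent taxa
    return sum(D[order[i], order[(i + 1) % m]] for i in range(m))

# Monte-Carlo search: propose swaps, accept with a cooling temperature.
order = rng.permutation(m)
c = cost(order)
T = 1.0
for step in range(20_000):
    i, j = rng.integers(m, size=2)
    cand = order.copy()
    cand[i], cand[j] = cand[j], cand[i]
    c_cand = cost(cand)
    if c_cand < c or rng.uniform() < np.exp((c - c_cand) / T):
        order, c = cand, c_cand
    T *= 0.9997                 # geometric cooling schedule

best_cost = c                   # cost of the ordering found
```

Early high-temperature steps let the search escape poor orderings; as the temperature cools, it settles into an ordering much shorter than a random permutation.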

2010-01-01

291

NASA Astrophysics Data System (ADS)

Portal monitoring radiation detectors are commonly used by steel industries for the probing and detection of radioactive contamination in scrap metal. These portal monitors typically consist of polystyrene or polyvinyltoluene (PVT) plastic scintillating detectors, one or more photomultiplier tubes (PMT), an electronic circuit, a controller that handles data output and manipulation, linking the system to a display or a computer with appropriate software, and usually a light guide. Such a portal used by the steel industry was opened, and all principal materials were simulated using a Monte Carlo simulation tool (MCNP4C2). Various source-detector configurations were simulated and validated by comparison with corresponding measurements. Subsequently, an experiment with a uniform cargo along with two sets of experiments with different scrap loads and radioactive sources (137Cs, 152Eu) were performed and simulated. Simulated and measured results suggest that the nature of the scrap is crucial when simulating scrap load-detector experiments. Using the same simulation configuration, a series of runs was performed in order to estimate minimum alarm activities for 137Cs, 60Co and 192Ir sources for various simulated scrap densities. The minimum alarm activities, as well as the positions in which they were recorded, are presented and discussed.

Takoudis, G.; Xanthos, S.; Clouvas, A.; Potiriadis, C.

2010-02-01

292

Dynamical estimation of neuron and network properties II: Path integral Monte Carlo methods.

Hodgkin-Huxley (HH) models of neuronal membrane dynamics consist of a set of nonlinear differential equations that describe the time-varying conductance of various ion channels. Using observations of voltage alone we show how to estimate the unknown parameters and unobserved state variables of an HH model in the expected circumstance that the measurements are noisy, the model has errors, and the state of the neuron is not known when observations commence. The joint probability distribution of the observed membrane voltage and the unobserved state variables and parameters of these models is a path integral through the model state space. The solution to this integral allows estimation of the parameters and thus a characterization of many biological properties of interest, including channel complement and density, that give rise to a neuron's electrophysiological behavior. This paper describes a method for directly evaluating the path integral using a Monte Carlo numerical approach. This provides estimates not only of the expected values of model parameters but also of their posterior uncertainty. Using test data simulated from neuronal models comprising several common channels, we show that short (<50 ms) intracellular recordings from neurons stimulated with a complex time-varying current yield accurate and precise estimates of the model parameters as well as accurate predictions of the future behavior of the neuron. We also show that this method is robust to errors in model specification, supporting model development for biological preparations in which the channel expression and other biophysical properties of the neurons are not fully known. PMID:22526358
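As a drastically reduced sketch of the estimation idea (not the authors' HH path-integral machinery), one can sample a posterior over a single rate constant of a toy decay model dV/dt = -aV from noisy voltage observations with a plain Metropolis chain; the observation-noise level, proposal width, and flat prior on a > 0 are assumptions:

```python
import math
import random

def metropolis_rate_posterior(obs, dt=0.01, n_samples=5000, seed=0):
    """Metropolis sampling of the posterior over the decay rate `a` of the
    toy model dV/dt = -a*V, observed with Gaussian noise (sd 0.1 assumed).
    Returns the posterior mean of `a` from the second half of the chain."""
    rng = random.Random(seed)

    def log_like(a):
        v, ll = obs[0], 0.0
        for y in obs[1:]:
            v += -a * v * dt                    # Euler step of the model
            ll += -0.5 * ((y - v) / 0.1) ** 2   # Gaussian observation noise
        return ll

    a = 1.0
    ll = log_like(a)
    samples = []
    for _ in range(n_samples):
        a_new = a + rng.gauss(0.0, 0.1)         # random-walk proposal
        if a_new > 0:                           # flat prior on a > 0
            ll_new = log_like(a_new)
            if math.log(rng.random()) < ll_new - ll:
                a, ll = a_new, ll_new
        samples.append(a)
    half = samples[n_samples // 2:]
    return sum(half) / len(half)
```

The spread of the retained samples, not shown here, is what provides the posterior-uncertainty estimate emphasized in the abstract.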

Kostuk, Mark; Toth, Bryan A; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I

2012-04-13

293

NASA Astrophysics Data System (ADS)

Our study aims to design a useful neutron signature characterization device based on 3He detectors, a standard neutron detection methodology used in homeland security applications. The research involved simulation of the generation, transport, and detection of the leakage radiation from Special Nuclear Materials (SNM). To accomplish these goals, we use a new methodology to fully characterize a standard "1-Ci" Plutonium-Beryllium (Pu-Be) neutron source based on 3-D computational radiation transport methods, employing both deterministic SN and Monte Carlo methodologies. The computational model findings were subsequently validated through experimental measurements. The results allowed us to design, build, and laboratory-test a Nickel composite alloy shield that enables the neutron leakage spectrum from a standard Pu-Be source to be transformed, through neutron scattering interactions in the shield, into a very close approximation of the neutron spectrum leaking from a large, subcritical mass of Weapons Grade Plutonium (WGPu) metal. This source will make possible testing with a nearly exact reproduction of the neutron spectrum from a 6.67 kg WGPu mass equivalent, but without the expense or risk of testing detector components with real materials. Moreover, over thirty moderator materials were studied in order to characterize their neutron energy filtering potential. Particular focus was placed on establishing the limits of 3He spectroscopy using ideal filter materials. To demonstrate our methodology, we present the optimally detected spectral differences between SNM materials (Plutonium and Uranium), metal and oxide, using ideal filter materials. Finally, using knowledge gained from the previous studies, the design of a 3He spectroscopy neutron detector system, simulated entirely via computational methods, is proposed to resolve the spectra from SNM neutron sources of high interest. This was accomplished by replacing ideal filters with real materials, and comparing reaction rates with similar data from the ideal material suite.

Ghita, Gabriel M.

294

Modeling radiation from the atmosphere of Io with Monte Carlo methods

NASA Astrophysics Data System (ADS)

Conflicting observations regarding the dominance of either sublimation or volcanism as the source of the atmosphere on Io and disparate reports on the extent of its spatial distribution and the absolute column abundance invite the development of detailed computational models capable of improving our understanding of Io's unique atmospheric structure and origin. To validate a global numerical model of Io's atmosphere against astronomical observations requires a 3-D spherical-shell radiative transfer (RT) code to simulate disk-resolved images and disk-integrated spectra from the ultraviolet to the infrared spectral region. In addition, comparison of simulated and astronomical observations provides important information to improve existing atmospheric models. In order to achieve this goal, a new 3-D spherical-shell forward/backward photon Monte Carlo code capable of simulating radiation from absorbing/emitting and scattering atmospheres with an underlying emitting and reflecting surface was developed. A new implementation of calculating atmospheric brightness in scattered sunlight is presented utilizing the notion of an "effective emission source" function. This allows for the accumulation of the scattered contribution along the entire path of a ray and the calculation of the atmospheric radiation when both scattered sunlight and thermal emission contribute to the observed radiation---which was not possible in previous models. A "polychromatic" algorithm was developed for application with the backward Monte Carlo method and was implemented in the code. It allows one to calculate radiative intensity at several wavelengths simultaneously, even when the scattering properties of the atmosphere are a function of wavelength. The application of the "polychromatic" method improves the computational efficiency because it reduces the number of photon bundles traced during the simulation. 
A 3-D gas dynamics model of Io's atmosphere, including both sublimation and volcanic sources of SO2 gas, is analyzed by simulating spectra and images from the model corresponding to three important observations: (1) simulations of the mid-IR disk-averaged observations of Io's sunlit hemisphere at 19 μm, obtained with TEXES during 2001-2004; (2) simulations of disk-resolved images at Lyman-α (1216 Å or 0.1216 μm) obtained with the Hubble Space Telescope (HST) Space Telescope Imaging Spectrograph (STIS) during 1997-2001; and (3) disk-integrated simulations of emission line profiles in the millimeter wavelength range (1.2-1.4 mm) obtained with the IRAM-30 m telescope in Oct.-Nov. 1999. We found that our atmospheric model generally reproduces the longitudinal variation in the strength of the absorption band in the mid-IR data; however, the best match is obtained when the simulation results are shifted 30° toward lower longitudes. The simulations of Lyman-α images do not show the mid-to-high latitude bright patches seen in the observations, suggesting that the model atmosphere predicts column number densities that are too high at those latitudes. The simulations of emission line profiles in the millimeter spectral region support the hypothesis that atmospheric dynamics explain the observed line widths, which are too wide to be formed by Doppler broadening alone. The computational modeling and simulation tools needed to study light scattering from volcanic plumes, which play a significant role in structuring the atmosphere of Io, are developed. The radiative transfer code is applied to the simulation of the brightness in scattered sunlight from a Prometheus-type plume on Io, as observed in limb-viewing geometry by the Galileo Solid State Imager (SSI). The computations are performed utilizing the "polychromatic" method, thus calculating the plume brightness for the entire filter bandpass in a single simulation. 
Such simulations account for multiple scattering and reflection of sunlight from the surface, not included in previous studies, and may serve as a powerful tool for simulating the plume obs

Gratiy, Sergey

295

Simulating the proton transfer in gramicidin A by a sequential dynamical Monte Carlo method.

The large interest in long-range proton transfer in biomolecules is triggered by its importance for many biochemical processes such as biological energy transduction and drug detoxification. Since long-range proton transfer occurs on a microsecond time scale, simulating this process at the molecular level is still a challenging task and not possible with standard simulation methods. In general, the dynamics of a reactive system can be described by a master equation. A natural way to describe long-range charge transfer in biomolecules is to decompose the process into elementary steps which are transitions between microstates. Each microstate has a defined protonation pattern. Although such a master equation can in principle be solved analytically, it is often too demanding to solve because of the large number of microstates. In this paper, we describe a new method which solves the master equation by a sequential dynamical Monte Carlo algorithm. Starting from one microstate, the evolution of the system is simulated as a stochastic process. The energetic parameters required for these simulations are determined by continuum electrostatic calculations. We apply this method to simulate proton transfer through gramicidin A, a transmembrane proton channel, as a function of the applied membrane potential and the pH of the solution. As elementary steps in our reaction, we consider proton uptake and release, proton transfer along a hydrogen bond, and rotations of the water molecules that constitute a proton wire through the channel. A simulation of 8 μs length took about 5 min on a 3.2 GHz Intel Pentium 4 CPU. We obtained good agreement with experimental data for the proton flux through gramicidin A over a wide range of pH values and membrane potentials. We find that proton desolvation as well as water rotations are equally important for proton transfer through gramicidin A at physiological membrane potentials. 
Our method makes it possible to simulate long-range charge transfer in biological systems on time scales that are not accessible to other methods. PMID:18826179
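The elementary-step picture above is essentially a kinetic (Gillespie-type) Monte Carlo simulation of a master equation. A two-microstate sketch, with made-up protonation and deprotonation rates, looks like:

```python
import math
import random

def gillespie_two_state(k_on, k_off, t_end, seed=0):
    """Kinetic (Gillespie-type) Monte Carlo for a two-microstate master
    equation: deprotonated (0) <-> protonated (1) with rates k_on / k_off.
    Returns the fraction of time spent protonated up to t_end."""
    rng = random.Random(seed)
    state, t, t_prot = 0, 0.0, 0.0
    while t < t_end:
        rate = k_on if state == 0 else k_off
        # Exponentially distributed waiting time for the next transition.
        dt = -math.log(1.0 - rng.random()) / rate
        dt = min(dt, t_end - t)
        if state == 1:
            t_prot += dt
        t += dt
        state = 1 - state
    return t_prot / t_end
```

For long runs the occupancy converges to the equilibrium value k_on / (k_on + k_off); the real calculation differs in having very many microstates and state-dependent rates from continuum electrostatics.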

Till, Mirco S; Essigke, Timm; Becker, Torsten; Ullmann, G Matthias

2008-09-30

296

Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods

NASA Astrophysics Data System (ADS)

During extreme-Mach-number reentry into Earth's atmosphere, spacecraft experience hypersonic non-equilibrium flow conditions that dissociate molecules and ionize atoms. Such conditions occur behind a shock wave, leading to high temperatures which have an adverse effect on the thermal protection system and radar communications. Since the electronic energy levels of gaseous species are strongly excited under high-Mach-number conditions, the radiative contribution to the total heat load can be significant. In addition, radiative heat sources within the shock layer may affect the internal energy distribution of dissociated and weakly ionized gas species and the number density of ablative species released from the surface of vehicles. Due to radiation, the total heat load on the heat-shield surface of the vehicle may be altered beyond mission tolerances. Therefore, in the design process of spacecraft the effect of radiation must be considered, and radiation analyses coupled with flow solvers have to be implemented to improve reliability during the vehicle design stage. As a first stage of radiation analyses coupled with gas dynamics, efficient databasing schemes for emission and absorption coefficients were developed to model radiation from hypersonic, non-equilibrium flows. For bound-bound transitions, spectral information including the line-center wavelength and assembled parameters for efficient calculation of emission and absorption coefficients is stored for typical air plasma species. Since the flow is in non-equilibrium, a rate-equation approach including both collisional and radiatively induced transitions was used to calculate the electronic state populations, assuming quasi-steady state (QSS). The Voigt line shape function was assumed for modeling the line broadening effect. The accuracy and efficiency of the databasing scheme were examined by comparing its results with those of NEQAIR for the Stardust flowfield. 
An accuracy of approximately 1% was achieved, with the databasing scheme running about three times faster than the NEQAIR code. To perform accurate and efficient analyses of chemically reacting flowfield-radiation interactions, the direct simulation Monte Carlo (DSMC) and photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield-radiation coupling from transitional to peak-heating freestream conditions. Non-catalytic and fully catalytic surface conditions were modeled, and good agreement in the stagnation-point convective heating between the DSMC and continuum fluid dynamics (CFD) calculations was achieved under the assumption of a fully catalytic surface. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC - FV-PMC simulations with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium are presented. It is found that, except at the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both the heavy-particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC - FV-PMC coupled simulations are compared with previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport. The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rate in the excitation/de-excitation processes and the radiative transport equation (RTE) must be coupled simultaneously to account for non-local effects. 
The QSS model is presented to predict the electronic state populations of radiating gas species taking into account non-local radiation. The definition of the escape factor which is depende
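The Voigt line shape mentioned above is the convolution of a Gaussian (Doppler) profile with a Lorentzian. A brute-force numerical-convolution sketch (production codes normally use the Faddeeva function instead; the grid and widths here are arbitrary):

```python
import math

def voigt_profile(xs, sigma, gamma):
    """Voigt line shape on the uniform grid `xs`, computed as a direct
    numerical convolution of a normalized Gaussian (Doppler, width sigma)
    with a normalized Lorentzian (half-width gamma)."""
    dx = xs[1] - xs[0]
    gauss = [math.exp(-t * t / (2 * sigma * sigma)) /
             (sigma * math.sqrt(2 * math.pi)) for t in xs]
    lorentz = [gamma / (math.pi * (t * t + gamma * gamma)) for t in xs]
    n = len(xs)
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(n):
            k = i - j + n // 2          # centre the convolution kernel
            if 0 <= k < n:
                acc += gauss[j] * lorentz[k] * dx
        out.append(acc)
    return out
```

The result is normalized up to truncation of the Lorentzian tails at the grid edges, and peaks at line centre.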

Sohn, Ilyoup

297

NASA Astrophysics Data System (ADS)

Two of the primary challenges associated with the neutronic analysis of the Very High Temperature Reactor (VHTR) are accounting for resonance self-shielding in the particle fuel (contributing to the double heterogeneity) and accounting for temperature feedback due to Doppler broadening. The double heterogeneity challenge is addressed by defining a "double heterogeneity factor" (DHF) that allows conventional light water reactor (LWR) lattice physics codes to analyze VHTR configurations. The challenge of treating Doppler broadening is addressed by a new "on-the-fly" methodology that is applied during the random walk process with negligible impact on computational efficiency. Although this methodology was motivated by the need to treat temperature feedback in a VHTR, it is applicable to any reactor design. The on-the-fly Doppler methodology is based on a combination of Taylor and asymptotic series expansions. The type of series representation was determined by investigating the temperature dependence of 238U resonance cross sections in three regions: near the resonance peaks, mid-resonance, and the resonance wings. The coefficients for these series expansions were determined by regression over the energy and temperature range of interest. Cross sections broadened with this methodology agreed excellently with the NJOY cross sections. A Monte Carlo code was implemented to apply the combined regression model and used to estimate the additional computing cost, which was found to be less than 1%. The DHF accounts for the effect of the particle heterogeneity on resonance absorption in particle fuel. The first-level heterogeneity posed by the VHTR fuel particles is a unique characteristic that cannot be accounted for by conventional LWR lattice physics codes. On the other hand, Monte Carlo codes can take into account the detailed geometry of the VHTR, including resolution of individual fuel particles, without performing any type of resonance approximation. 
The DHF, which is essentially a self-shielding factor, was found to be only weakly dependent on space and fuel depletion, depending strongly only on the packing fraction in a fuel compact. Therefore, it is proposed that DHFs be tabulated as a function of packing fraction to analyze the heterogeneous fuel in VHTR configurations with LWR lattice physics codes.
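The regression-based series storage can be sketched as an offline polynomial fit in sqrt(T) followed by cheap on-the-fly evaluation; the basis, degree, and toy cross-section data below are assumptions for illustration, not the actual 238U fits:

```python
import math

def fit_temperature_series(temps, sigmas, degree=3):
    """Least-squares fit of sigma(T) to a polynomial in x = sqrt(T/T0),
    mimicking the regression-based series storage described above.
    T0 = temps[0] normalises the basis to keep the fit well conditioned."""
    t0 = temps[0]
    xs = [math.sqrt(t / t0) for t in temps]
    n = degree + 1
    # Normal equations A c = b for the monomial basis x^k.
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(s * x ** i for x, s in zip(xs, sigmas)) for i in range(n)]
    for col in range(n):                      # Gaussian elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in range(n - 1, -1, -1):
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, n))) / A[i][i]
    return coef, t0

def eval_series(coef, t0, T):
    """On-the-fly evaluation of the stored series at temperature T."""
    x = math.sqrt(T / t0)
    return sum(c * x ** k for k, c in enumerate(coef))
```

The fit is done once offline; only `eval_series` runs inside the random walk, which is what keeps the computational overhead negligible.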

Yesilyurt, Gokhan

298

During the past decade, the Monte Carlo method has been widely applied in optical imaging to simulate the photon transport process inside tissues. However, the method has not yet been effectively extended to the simulation of free-space photon transport. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, consisting of the simulation of photon transport both in tissues and in free space. Specifically, lens-system simplification theory is utilized to model the camera lens equipped in the optical imaging system, and the Monte Carlo method is employed to describe the energy transfer from the tissue surface to the CCD camera. The focusing effect of the camera lens is also considered, to establish the relationship between corresponding points on the tissue surface and the CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and effective. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated on a cylindrical phantom using real experimental results.

Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Bin; Wang, Lin; Peng, Kuan; Liang, Jimin; Tian, Jie

2010-01-01

299

NASA Astrophysics Data System (ADS)

A new reverse Monte Carlo (RMC) method has been developed for creating three-dimensional structures in agreement with small angle scattering data. Extensive tests, using computer generated quasi-experimental data for aggregation processes via constrained RMC and Langevin molecular dynamics, were performed. The software is capable of fitting several consecutive time frames of scattering data, and movie-like visualization of the structure (and its evolution) either during or after the simulation is also possible.
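The core RMC loop — perturb the configuration, recompute the simulated signal, and accept with probability exp(-Δχ²/2) — can be sketched on a toy 1-D "scattering" signal; the signal definition, move size, and error weight are invented for illustration:

```python
import cmath
import math
import random

def structure_factor(xs, qs):
    """Toy 1-D 'scattering' signal: S(q) = |sum_j exp(i q x_j)|^2 / N."""
    n = len(xs)
    return [abs(sum(cmath.exp(1j * q * x) for x in xs)) ** 2 / n for q in qs]

def rmc_fit(target, qs, n_atoms=20, n_steps=3000, sigma=0.5, seed=0):
    """Reverse Monte Carlo: single-particle moves accepted with probability
    exp(-(chi2_new - chi2_old)/2), driving the simulated signal toward the
    target.  Returns (initial chi2, best chi2 reached)."""
    rng = random.Random(seed)
    xs = [rng.uniform(0.0, 10.0) for _ in range(n_atoms)]

    def chi2(cfg):
        s = structure_factor(cfg, qs)
        return sum(((a - b) / sigma) ** 2 for a, b in zip(s, target))

    c = best = chi2(xs)
    c0 = c
    for _ in range(n_steps):
        i = rng.randrange(n_atoms)
        old = xs[i]
        xs[i] = old + rng.gauss(0.0, 0.3)
        c_new = chi2(xs)
        if c_new <= c or rng.random() < math.exp((c - c_new) / 2.0):
            c = c_new
            best = min(best, c)
        else:
            xs[i] = old                      # reject: undo the move
    return c0, best
```

The real code fits small-angle scattering curves in 3-D with constraints, but the acceptance rule is the same.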

Gereben, O.; Pusztai, L.; McGreevy, R. L.

2010-10-01

300

The characteristics of rf planar magnetron discharges of an O2/Ar mixture are clarified using the Particle-in-Cell/Monte Carlo (PIC/MC) method. The simulation is carried out for axisymmetric magnetic fields. The spatial and temporal behavior of the magnetron discharge is examined in detail. The total pressure and temperature of the gas mixture are 5 mTorr and 323 K, respectively. The voltage amplitude is 200 V. The magnetization in

S. Yonemura; K. Nanbu

2003-01-01

301

Isomerization of bicyclo[1.1.0]butane by means of the diffusion quantum Monte Carlo method.

The isomerization of bicyclo[1.1.0]butane which comprises species with different multireference character is studied by means of the diffusion quantum Monte Carlo method (DMC). Accurate multireference DMC calculations are presented. It can be shown that at most three configuration state functions are required to achieve a balanced description of dynamical and nondynamical electron correlation. A general scheme is described that promotes efficient error cancellation in multireference DMC calculations. PMID:21121681

Berner, Raphael; Lüchow, Arne

2010-12-01

302

Accelerated kinetics of amorphous silicon using an on-the-fly off-lattice kinetic Monte-Carlo method

The time evolution of a series of well-relaxed amorphous silicon models was simulated using the kinetic Activation-Relaxation Technique (kART), an on-the-fly off-lattice kinetic Monte Carlo method [1]. This novel algorithm uses the ART nouveau algorithm to generate activated events and links them with local topologies. It was shown to work well for crystals with few defects but this is the

Jean-Francois Joly; Fedwa El-Mellouhi; Laurent Karim Beland; Normand Mousseau

2011-01-01

303

NASA Astrophysics Data System (ADS)

We present a model-order reduction technique that overcomes the computational burden associated with the application of Monte Carlo methods to the solution of the groundwater flow equation with random hydraulic conductivity. The method is based on the Galerkin projection of the high-dimensional model equations onto a subspace, approximated by a small number of pseudo-optimally chosen basis functions (principal components). To obtain an efficient reduced-order model, we develop an offline algorithm for the computation of the parameter-independent principal components. Our algorithm combines a greedy algorithm for the snapshot selection in the parameter space and an optimal distribution of the snapshots in time. Moreover, we introduce a residual-based estimation of the error associated with the reduced model. This estimation allows a considerable reduction of the number of full system model solutions required for the computation of principal components. We demonstrate the robustness of our methodology by way of numerical examples, comparing the empirical statistics of the ensemble of the numerical solutions obtained using the traditional Monte Carlo method and our reduced model. The numerical results show that our methodology significantly reduces the computational requirements (CPU time and storage) for the solution of the Monte Carlo simulation, ensuring a good approximation of the mean and variance of the head. The analysis of the empirical probability density functions at the observation wells suggests that our reduced model produces good results and is most accurate in the regions with large drawdown.
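The offline principal-component step can be sketched with a power iteration on the snapshot covariance; the greedy snapshot selection, Galerkin projection, and residual-based error estimator described above are omitted:

```python
import math

def leading_pod_mode(snapshots, n_iter=200):
    """Leading POD (principal-component) basis vector of a snapshot set,
    found by power iteration on the (unnormalised) snapshot covariance
    C = S^T S, accumulated snapshot by snapshot without forming C."""
    n = len(snapshots[0])
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(n_iter):
        w = [0.0] * n
        for s in snapshots:
            proj = sum(si * vi for si, vi in zip(s, v))  # s . v
            for k in range(n):
                w[k] += proj * s[k]                      # w += (s.v) s
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```

Subsequent modes follow by deflation; in the full method the high-dimensional flow equations are then projected onto the span of these modes.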

Pasetto, Damiano; Putti, Mario; Yeh, William W.-G.

2013-06-01

304

Bayesian Inference and Markov Chain Monte Carlo Methods Applied to Streamflow Forecasting

In this work we propose a Bayesian approach to the parameter estimation problem for stochastic autoregressive models of order p, AR(p), applied to the streamflow forecasting problem. Procedures for model selection, forecasting, and robustness evaluation through Markov chain Monte Carlo (MCMC) simulation techniques are also presented. The proposed approach is compared with the classical one by Box-Jenkins (maximum likelihood

Guilherme de A. Barreto; Marinho G. de Andrade

2000-01-01

305

An Evaluation of a Markov Chain Monte Carlo Method for the Two-Parameter Logistic Model.

ERIC Educational Resources Information Center

The accuracy of Gibbs sampling, a Markov Chain Monte Carlo (MCMC) procedure, was considered for estimation of the item parameters of the two-parameter logistic model. Data for the Law School Admission Test (LSAT) Section 6 were analyzed to illustrate the MCMC procedure. In addition, simulated data sets were analyzed using the MCMC, marginal Bayesian

Kim, Seock-Ho; Cohen, Allan S.

306

Auxiliary Variable Methods for Markov Chain Monte Carlo withApplications

Suppose that one wishes to sample from the density π(x) using Markov chain Monte Carlo (MCMC). An auxiliary variable u and its conditional distribution π(u|x) can be defined, giving the joint distribution π(x, u) = π(x)π(u|x). An MCMC scheme that samples over this joint distribution can lead to substantial gains in efficiency compared to standard approaches. The revolutionary algorithm of
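The canonical instance of this construction is the slice sampler, where u | x is uniform on (0, π(x)) and x is then drawn uniformly from the slice {x : π(x) > u}. A 1-D sketch using the stepping-out/shrinkage procedure (the step width w is a tuning assumption):

```python
import math
import random

def slice_sample(log_unnorm, x0, n, w=1.0, seed=0):
    """Auxiliary-variable (slice) sampler for a 1-D target pi(x).

    Given x, draw u ~ Uniform(0, pi(x)); given u, draw x uniformly from
    the slice {x : pi(x) > u}, located by stepping out then shrinking."""
    rng = random.Random(seed)
    x = x0
    out = []
    for _ in range(n):
        log_u = log_unnorm(x) + math.log(rng.random())  # auxiliary height
        lo = x - w * rng.random()                       # random initial bracket
        hi = lo + w
        while log_unnorm(lo) > log_u:                   # step out left
            lo -= w
        while log_unnorm(hi) > log_u:                   # step out right
            hi += w
        while True:                                     # shrink until accepted
            x1 = rng.uniform(lo, hi)
            if log_unnorm(x1) > log_u:
                x = x1
                break
            if x1 < x:
                lo = x1
            else:
                hi = x1
        out.append(x)
    return out
```

No proposal tuning beyond w is needed, which is the efficiency gain the abstract alludes to.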

David M. Higdon

1998-01-01

307

Application of Monte Carlo Method to Phase Separation Dynamics of Complex Systems

We report the application of Monte Carlo simulation to phase separation dynamics. First, we deal with phase separation under shear flow. The thermal effect on the phase separation is discussed, and the anisotropic growth exponents in the late stage are estimated. Next, we study the effect of surfactants on three-component solvents. We obtain the mixture of macrophase

Yutaka Okabe; Tsukasa Miyajima; Toshiro Ito; Toshihiro Kawakatsu

1999-01-01

308

Quantum Monte Carlo for transition metal systems: Method developments and applications

Quantum Monte Carlo (QMC) is a powerful computational tool to study correlated systems of electrons, allowing us to explicitly treat many-body interactions with favorable scaling in the number of particles. It has been regarded as a benchmark tool for condensed matter systems containing elements from the first and second row of the periodic table. It holds particular promise for the

Lucas K. Wagner

2006-01-01

309

A Monte Carlo method for quantum spins using boson world lines

NASA Astrophysics Data System (ADS)

A new Monte Carlo method is described for quantum (s = 1/2) spins which maps the spin model onto a model of hard-core bosons. The Hamiltonian is then broken up into kinetic and potential parts, and the Trotter formula is used to simulate the Bose system. The power of this mapping comes from the fact that, by letting the system evolve through unphysical spin states between imaginary-time slices, the needed matrix elements have simple expressions. Specifically, for quantum Hamiltonians H = -\sum_{i,j} J_{ij} (S_i^x S_j^x + S_i^y S_j^y) + H_I, with H_I diagonal in the z components of the spins, we map the spins onto boson operators using S^+ = S^x + i S^y -> a^\dagger, S^- = S^x - i S^y -> a, and S^z -> n - 1/2, with n = a^\dagger a. Unfortunately, the mapping is not quite exact, since one may create an unlimited number of bosons on any site while there are only two spin states per site. A large number of analytic calculations have been performed successfully by suppressing the unphysical states n > 1 either energetically, as in ferromagnetic spin-wave calculations, or formally, as in the Holstein-Primakoff or Dyson-Maleev transformations. In numerical simulations, however, one may do something much simpler: a term H_C may be introduced into the Hamiltonian which projects out the unphysical states. Formally, H_C is zero for allowed states but infinite otherwise; alternatively, P_{HC} = exp(-\lambda H_C) projects out states for any \lambda > 0, giving unity for physical states but zero for unphysical ones. In practice, the analytic form of H_C is of no concern, since it may be "hard-wired" into the algorithm to allow only hard-core states. The Hamiltonian becomes H = -(1/2) \sum_{i,j} J_{ij} (S_i^+ S_j^- + S_i^- S_j^+) + H_I -> -\sum_{<i,j>} t_{ij} (a_i^\dagger a_j + a_j^\dagger a_i) + H_I + H_C = T + H_I + H_C, where the t_{ij} = J_{ij}/2 are the hopping coefficients for the bosons. 
The Trotter formula may now be used to approximate matrix elements: e^{-\beta H} = (e^{-\Delta\tau H})^L, where \beta = L \Delta\tau, and e^{-\Delta\tau H} \approx e^{-\Delta\tau T} e^{-\Delta\tau H_I} e^{-\Delta\tau H_C}. Since we will use an occupation-number representation, H_I will be diagonal and easily exponentiated. The hard-core projection operator P_{HC}, again, will act only through the Monte Carlo moves, which prohibit unphysical states. Only the exponential of the kinetic term remains to be evaluated. The factor exp(-\Delta\tau T) evolves the bosons between time slices as indistinguishable, free particles, without concern for physical/unphysical states or for interactions among the particles, and so is easily expressed as a sum of products of single-particle propagators, as developed in any elementary quantum-mechanics text. The final picture, then, is one in which the indistinguishable particles evolve through imaginary time as though they were free, with interactions and the physical-state projector operating only at every \Delta\tau. Sampling over permutations of world lines and performing the calculation on a lattice produces a relatively fast algorithm. The simulation generates configurations which are described by L lattice configurations on the various imaginary-time slices as well as the permutations that assign world lines from bosons on one time slice to those on the next. Monte Carlo moves must both wiggle world lines around as well as reassign them. Any operator which is diagonal in the occupation-number representation may be measured in this simulation, even if it is a product of operators at different times. In addition, any operator, or product of such operators at different time slices, that conserves the boson number at each time slice can be measured. The resulting set of observables includes the energy, specific heat, and the parallel and transverse magnetizations and susceptibilities. 
To test the breakup, particularly the validity of separating out H_C with its infinite matrix elements, exact numerical calculations were carried out for small systems. It was found that in measuring the average value of an operator A which has s
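The Trotter break-up underlying the method is easy to check directly on a toy two-level system: compare exp(-βH) against the product (exp(-Δτ T) exp(-Δτ V))^L for increasing L. The 2x2 matrices and β below are arbitrary illustrations, not the spin model itself:

```python
import math

def mat_mul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, terms=30):
    """Taylor-series exponential of a small 2x2 matrix."""
    out = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = mat_mul(term, [[a / k for a in row] for row in A])
        out = [[out[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return out

def trotter_error(T, V, beta, L):
    """Frobenius distance between exp(-beta*(T+V)) and the Trotter
    product (exp(-dt*T) exp(-dt*V))^L with dt = beta/L."""
    H = [[T[i][j] + V[i][j] for j in range(2)] for i in range(2)]
    exact = mat_exp([[-beta * h for h in row] for row in H])
    dt = beta / L
    step = mat_mul(mat_exp([[-dt * t for t in row] for row in T]),
                   mat_exp([[-dt * v for v in row] for row in V]))
    approx = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(L):
        approx = mat_mul(approx, step)
    return math.sqrt(sum((exact[i][j] - approx[i][j]) ** 2
                         for i in range(2) for j in range(2)))
```

The error shrinks as L grows, which is exactly why finer imaginary-time slicing improves the simulation at the cost of more slices to sample.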

Loh, E.

1986-06-01

310

Monte Carlo calculated TG-43 dosimetry parameters for the SeedLink 125Iodine brachytherapy system.

The SeedLink brachytherapy system comprises an assembly of I-Plant 3500 interstitial brachytherapy seeds and bioresorbable spacers joined together by a 6-mm-long titanium sleeve centered over each seed. This device is designed to maintain specified spacing between seeds during treatment, thereby decreasing implant preparation time and reducing radionuclide migration within the prostate and periprostatic region. Reliable clinical treatment and planning applications necessitate accurate dosimetric data for source evaluation; therefore, the authors report the results of a Monte Carlo study designed to calculate the AAPM Task Group Report No. 43 dosimetric parameters for the SeedLink brachytherapy source and compare these values against previously published Monte Carlo results for the I-Plant 3500 brachytherapy seed. For this investigation, a total of 1×10^9 source photon histories were processed for each set of in-water and in-air calculations using the MCNP4C2 Monte Carlo radiation transport code (RSICC). Statistically, the radial dose function, g(r), and the dose-rate constant, lambda, were identical to the values calculated previously for the Model 3500, with the dose-rate constant evaluated to be lambda = 1.000 +/- 0.026 cGy h^-1 U^-1. The titanium sleeve used in SeedLink to bind together Model 3500 seeds and spacers resulted in slightly greater dosimetric anisotropy, as exhibited in the anisotropy function, F(r, theta), the anisotropy factor, phi_an(r), and the anisotropy constant, phi_an, which was calculated to be phi_an = 0.91 +/- 0.01, roughly 2% lower than the value calculated previously for the Model 3500. These results indicate that the radiological characteristics of the SeedLink dosimetry system are comparable to those obtained for previously characterized single seeds such as the Implant Sciences Model 3500 I-Plant seed. PMID:14528972

Medich, David C; Munro, John J

2003-09-01

311

NASA Astrophysics Data System (ADS)

This report summarizes the work done during the course of an ARO Summer Faculty grant at the Ballistic Research Laboratory. The technical details will be reported elsewhere. The current state of knowledge of the structural properties of hydroxylammonium nitrate (HAN) is summarized. The possibility of determining these properties by computer simulation is discussed. The features of the Monte Carlo and Molecular Dynamics approaches to computer simulation are briefly reviewed; the former method has more promise for the present problem, and its application is explored. A formula not previously found in the literature was derived for the heat capacity at constant pressure for a system at given density and temperature from a single Monte Carlo run.

Murphy, R. D.

1984-10-01

312

NASA Astrophysics Data System (ADS)

In this paper the use of the Filtered Back Projection (FBP) algorithm to reconstruct tomographic images with high energy (200-250 MeV) proton beams is investigated. The algorithm has been studied in detail with a Monte Carlo approach, and image quality has been analysed and compared with the total absorbed dose. A proton Computed Tomography (pCT) apparatus, developed by our group, has been fully simulated exploiting the power of the Geant4 Monte Carlo toolkit. From the simulation of the apparatus, a set of tomographic images of a test phantom has been reconstructed using the FBP at different absorbed dose values. The images have been evaluated in terms of homogeneity, noise, contrast, and spatial and density resolution.

Cirrone, G. A. P.; Bucciolini, M.; Bruzzi, M.; Candiano, G.; Civinini, C.; Cuttone, G.; Guarino, P.; Lo Presti, D.; Mazzaglia, S. E.; Pallotta, S.; Randazzo, N.; Sipala, V.; Stancampiano, C.; Talamonti, C.

2011-12-01

313

The Simulation of Effective Dose of Human Body from External Exposure by Monte Carlo Methods

In practical dose estimation work in radiation protection, organ absorbed dose and effective dose cannot be measured directly. To obtain the effective dose of the human body, this paper focuses on Monte Carlo calculations with two phantoms (ADAM and EVE), two monoenergetic photon beams (0.3 MeV and 1.0 MeV), and irradiation from six different angles: AP irradiation geometry, PA irradiation geometry, LAT irradiation geometry, overhead
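
The effective dose sought above is defined as the tissue-weighted sum of organ equivalent doses, E = sum over tissues T of w_T * H_T. A minimal sketch with hypothetical organ doses and an illustrative subset of weighting factors (not the full ICRP set):

```python
def effective_dose(organ_doses, tissue_weights):
    """Effective dose E = sum over tissues of w_T * H_T, where H_T is
    the organ equivalent dose (mSv) and w_T the tissue weighting factor."""
    return sum(tissue_weights[organ] * h for organ, h in organ_doses.items())

# Hypothetical organ doses (mSv) and an illustrative subset of tissue
# weighting factors; a real calculation sums over all ICRP tissues.
doses = {"lung": 1.2, "stomach": 0.8, "liver": 1.0}
weights = {"lung": 0.12, "stomach": 0.12, "liver": 0.04}
print(effective_dose(doses, weights))  # 0.12*1.2 + 0.12*0.8 + 0.04*1.0, about 0.28 mSv
```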

Xiaobin Tang; Changran Geng; Feida Chen; Yunpeng Liu; Da Chen; Qin Xie

2011-01-01

314

Response of thermoluminescent dosimeters to photons simulated with the Monte Carlo method

Personal monitors composed of thermoluminescent dosimeters (TLDs) made of natural fluorite (CaF2:NaCl) and lithium fluoride (Harshaw TLD-100) were exposed to gamma and X rays of different qualities. The GEANT4 radiation transport Monte Carlo toolkit was employed to calculate the energy depth deposition profile in the TLDs. X-ray spectra of the ISO/4037-1 narrow-spectrum series, with peak voltage (kVp) values in the

M. Moralles; C. C. Guimarães; E. Okuno

2005-01-01

315

This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
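
The fundamentals these notes cover (random number generation, random sampling, tallies) are often introduced with the textbook exercise below: sampling free-flight distances from an exponential distribution and tallying uncollided transmission through a slab. This is a generic illustration, not code from RACER, and the cross-section value is arbitrary.

```python
import math
import random

def slab_transmission(sigma_t, thickness, n_histories, seed=1):
    """Estimate uncollided transmission through a slab by sampling
    free-flight distances s = -ln(xi) / sigma_t and tallying the
    fraction of histories whose first flight crosses the slab."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_histories):
        s = -math.log(1.0 - rng.random()) / sigma_t  # sampled path length
        if s > thickness:
            transmitted += 1
    return transmitted / n_histories

# The analytic answer is exp(-sigma_t * x); the tally converges to it
# as the number of histories grows.
est = slab_transmission(sigma_t=1.0, thickness=1.0, n_histories=100000)
print(est, math.exp(-1.0))
```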

Brown, F.B.; Sutton, T.M.

1996-02-01

316

The thermal neutron fluence rate is determined by the gold activation method. The absolute activity of the irradiated gold foil is measured by a 4πβ-γ coincidence counter. Using this method, corrections for the detection of conversion electrons and gamma rays by the 4πβ counter are very important for obtaining accurate absolute activity. In this work, Monte Carlo simulations were performed to derive the correction factor K. Absolute measurements of (198)Au activity for Au foils of 20-100 μm thickness were performed to verify the calculation model of the 4πβ-γ coincidence counting system. PMID:21406431
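
In its simplest formulation, the coincidence technique above recovers the absolute activity from the three measured rates via N0 = (N_beta * N_gamma / N_coinc) * K. A minimal sketch with hypothetical count rates; the correction factor K is the quantity the Monte Carlo simulations supply:

```python
def coincidence_activity(n_beta, n_gamma, n_coinc, k_corr=1.0):
    """Absolute source activity from 4pi(beta)-gamma coincidence counting,
    N0 = (N_beta * N_gamma / N_coinc) * K, where K is the Monte Carlo
    derived correction factor for conversion-electron and gamma-ray
    detection in the 4pi(beta) counter."""
    return (n_beta * n_gamma / n_coinc) * k_corr

# Hypothetical count rates (per second): 800 in the beta channel, 300 in
# the gamma channel, 240 coincidences -> 1000 Bq before the correction K.
print(coincidence_activity(800.0, 300.0, 240.0))  # 1000.0
```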

Nishiyama, Jun; Harano, Hideki; Matsumoto, Tetsuro; Sato, Yasushi; Uritani, Akira; Kudo, Katsuhisa

2011-03-15

317

Direct simulation Monte Carlo method for the Uehling-Uhlenbeck-Boltzmann equation.

In this paper we describe a direct simulation Monte Carlo algorithm for the Uehling-Uhlenbeck-Boltzmann equation in terms of Markov processes. This provides a unifying framework for both the classical Boltzmann case as well as the Fermi-Dirac and Bose-Einstein cases. We establish the foundation of the algorithm by demonstrating its link to the kinetic equation. By numerical experiments we study its sensitivity to the number of simulation particles and to the discretization of the velocity space, when approximating the steady-state distribution. PMID:14682907

Garcia, Alejandro L; Wagner, Wolfgang

2003-11-13

318

Monte Carlo methods of coupled neutron/photon transport are being used in the design of filtered beams for Neutron Capture Therapy (NCT). This method of beam analysis provides segregation of each individual dose component, and thereby facilitates beam optimization. The Monte Carlo method is discussed in some detail in relation to NCT epithermal beam design. Ideal neutron beams (i.e., plane-wave monoenergetic neutron beams with no primary gamma-ray contamination) have been modeled both for comparison and to establish target conditions for a practical NCT epithermal beam design. Detailed models of the 5 MWt Massachusetts Institute of Technology Research Reactor (MITR-II) together with a polyethylene head phantom have been used to characterize approximately 100 beam filter and moderator configurations. Using the Monte Carlo methodology of beam design, and benchmarking/calibrating our computations against measurements, we have arrived at an epithermal beam design which is useful for therapy of deep-seated brain tumors. This beam is predicted to be capable of delivering a dose of 2000 RBE-cGy (cJ/kg) to a therapeutic advantage depth of 5.7 cm in polyethylene assuming 30 micrograms/g 10B in tumor with a ten-to-one tumor-to-blood ratio, and a beam diameter of 18.4 cm. The advantage ratio (AR) is predicted to be 2.2 with a total irradiation time of approximately 80 minutes. Further optimization work on the MITR-II epithermal beams is expected to improve the available beams. PMID:2268248

Clement, S D; Choi, J R; Zamenhof, R G; Yanch, J C; Harling, O K

1990-01-01

319

Monte Carlo methods of coupled neutron/photon transport are being used in the design of filtered beams for Neutron Capture Therapy (NCT). This method of beam analysis provides segregation of each individual dose component, and thereby facilitates beam optimization. The Monte Carlo method is discussed in some detail in relation to NCT epithermal beam design. Ideal neutron beams (i.e., plane-wave monoenergetic neutron beams with no primary gamma-ray contamination) have been modeled both for comparison and to establish target conditions for a practical NCT epithermal beam design. Detailed models of the 5 MWt Massachusetts Institute of Technology Research Reactor (MITR-II) together with a polyethylene head phantom have been used to characterize approximately 100 beam filter and moderator configurations. Using the Monte Carlo methodology of beam design, and benchmarking/calibrating our computations against measurements, we have arrived at an epithermal beam design which is useful for therapy of deep-seated brain tumors. This beam is predicted to be capable of delivering a dose of 2000 RBE-cGy (cJ/kg) to a therapeutic advantage depth of 5.7 cm in polyethylene assuming 30 micrograms/g 10B in tumor with a ten-to-one tumor-to-blood ratio, and a beam diameter of 18.4 cm. The advantage ratio (AR) is predicted to be 2.2 with a total irradiation time of approximately 80 minutes. Further optimization work on the MITR-II epithermal beams is expected to improve the available beams. 20 references.

Clement, S.D.; Choi, J.R.; Zamenhof, R.G.; Yanch, J.C.; Harling, O.K. (Massachusetts Institute of Technology, Cambridge (USA))

1990-01-01

320

A Monte Carlo method was derived from the optical scattering properties of spheroidal particles and used for modeling diffuse photon migration in biological tissue. The spheroidal scattering solution used a separation of variables approach and numerical calculation of the light intensity as a function of the scattering angle. A Monte Carlo algorithm was then developed which utilized the scattering solution to determine successive photon trajectories in a three-dimensional simulation of optical diffusion and resultant scattering intensities in virtual tissue. Monte Carlo simulations using isotropic randomization, Henyey-Greenstein phase functions, and spherical Mie scattering were additionally developed and used for comparison to the spheroidal method. Intensity profiles extracted from diffusion simulations showed that the four models differed significantly. The depth of scattering extinction varied widely among the four models, with the isotropic, spherical, spheroidal, and phase function models displaying total extinction at depths of 3.62, 2.83, 3.28, and 1.95 cm, respectively. The results suggest that advanced scattering simulations could be used as a diagnostic tool by distinguishing specific cellular structures in the diffused signal. For example, simulations could be used to detect large concentrations of deformed cell nuclei indicative of early stage cancer. The presented technique is proposed to be a more physical description of photon migration than existing phase function methods. This is attributed to the spheroidal structure of highly scattering mitochondria and elongation of the cell nucleus, which occurs in the initial phases of certain cancers. The potential applications of the model and its importance to diffusive imaging techniques are discussed. PMID:24085080
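
The Henyey-Greenstein phase function used above as one of the comparison models has an analytic inverse CDF, which makes sampling the scattering angle a one-liner. A generic sketch, not the authors' spheroidal method; the anisotropy value g = 0.9 is illustrative:

```python
import random

def sample_hg_cos_theta(g, rng):
    """Sample cos(theta) from the Henyey-Greenstein phase function via
    its analytic inverse CDF; g is the anisotropy factor <cos(theta)>."""
    xi = rng.random()
    if abs(g) < 1e-6:  # isotropic limit
        return 2.0 * xi - 1.0
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

# For strongly forward-peaked scattering (g = 0.9, typical of tissue)
# the sample mean of cos(theta) should reproduce g.
rng = random.Random(7)
g = 0.9
n = 200000
mean = sum(sample_hg_cos_theta(g, rng) for _ in range(n)) / n
print(mean)  # close to 0.9
```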

Hart, Vern P; Doyle, Timothy E

2013-09-01

321

A Monte Carlo computer simulation method is presented for directly performing property predictions for fluid systems at fixed total internal energy, U, or enthalpy, H, using a molecular-level system model. The method is applicable to both nonreacting and reacting systems. Potential applications are to (1) adiabatic flash (Joule-Thomson expansion) calculations for nonreacting pure fluids and mixtures at fixed (H,P), where P is the pressure; and (2) adiabatic (flame-temperature) calculations at fixed (U,V) or (H,P), where V is the system volume. The details of the method are presented. The method is compared with existing related simulation methodologies for nonreacting systems, one of which addresses the problem involving fixing portions of U or of H, and one of which solves the problem at fixed H considered here by means of an indirect approach. We illustrate the method by an adiabatic calculation involving the ammonia synthesis reaction. PMID:12241338

Smith, William R; Lísal, Martin

2002-07-18

322

Suggestions about determining the concentration of 10B in blood via the thermal neutron flux depression measurement (NFDM) are made. The use of a measuring set-up consisting of a 252Cf neutron source, polyethylene moderator and a slim BF3 counter surrounded by an annular sample is examined. It is shown experimentally that using 6 ml samples and the source emitting 1.4 x 10(7) neutrons s(-1), one can determine the concentration of 10B in water at the level of 10 ppm with a statistical precision of 10% in about 20 min. Monte Carlo simulations performed with the use of MCNP-4C code revealed a potential for further improvements of the NFDM technique both in respect of the sample volume and counting period. PMID:16204865

Bolewski, A; Ciechanowski, M; Dydejczyk, A; Kreft, A

2005-10-04

323

Response of thermoluminescent dosimeters to photons simulated with the Monte Carlo method

NASA Astrophysics Data System (ADS)

Personal monitors composed of thermoluminescent dosimeters (TLDs) made of natural fluorite (CaF2:NaCl) and lithium fluoride (Harshaw TLD-100) were exposed to gamma and X rays of different qualities. The GEANT4 radiation transport Monte Carlo toolkit was employed to calculate the energy depth deposition profile in the TLDs. X-ray spectra of the ISO/4037-1 narrow-spectrum series, with peak voltage (kVp) values in the range 20-300 kV, were obtained by simulating an X-ray Philips MG-450 tube associated with the recommended filters. A realistic photon distribution of a 60Co radiotherapy source was taken from results of Monte Carlo simulations found in the literature. Comparison between simulated and experimental results revealed that the attenuation of emitted light in the readout process of the fluorite dosimeter must be taken into account, while this effect is negligible for lithium fluoride. Differences between results obtained by heating the dosimeter from the irradiated side and from the opposite side allowed the determination of the light attenuation coefficient for CaF2:NaCl (mass proportion 60:40) as 2.2 mm-1.
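
The magnitude of the light-attenuation correction described above can be illustrated with a simple Beer-Lambert sketch using the reported coefficient for CaF2:NaCl; this illustrates the size of the effect only, not the authors' actual readout model:

```python
import math

def surviving_light_fraction(mu, depth_mm):
    """Fraction of thermoluminescence light emitted at a given depth
    that escapes the readout face, assuming Beer-Lambert attenuation."""
    return math.exp(-mu * depth_mm)

# With the coefficient reported above for CaF2:NaCl (2.2 mm^-1), light
# emitted 1 mm below the readout face is attenuated to roughly 11%,
# which is why the heated side matters for this material.
print(surviving_light_fraction(2.2, 1.0))
```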

Moralles, M.; Guimarães, C. C.; Okuno, E.

2005-06-01

324

NASA Astrophysics Data System (ADS)

Monte Carlo simulated SEM images for realistic instrumental conditions are used to evaluate measurement methods for SEM image sharpness. The Monte Carlo simulation of the SEM image is based on a well-developed physical model of electron-solid interaction, which employs Mott's cross section for elastic electron scattering and a dielectric-function approach to inelastic electron scattering (with cascade secondary-electron production included), finite element mesh modeling of complex sample topography, and modeling of SEM instrumental conditions (i.e. focus, astigmatism, drift and vibration). A series of simulated SEM images of a realistic sample, gold particles on a carbon substrate, is generated for different instrumental parameters to represent practical images in which all instrumental conditions are precisely known and controlled. Three measurement methods of SEM image sharpness, the FT, CG and DR methods, are then evaluated with these simulated images, and the responses of the sharpness measures to various instrumental conditions are studied. The calculations show that all three methods respond similarly and reasonably to the focus parameter; the dependence of the measured sharpness on the astigmatism coefficient is more complicated, with the CG method giving reasonable sharpness values. For drift and vibration the situation is more complex still, because the CG/DR methods can be either less or more sensitive to the vibration coefficient than the FT method. Because the three sharpness measurement methods respond differently to the experimental parameters, we propose using a mean, either a simple or a weighted average, of the three sharpness values as the measure of sharpness.

Ruan, Z.; Mao, S. F.; Zhang, P.; Li, H. M.; Ding, Z. J.

2013-05-01

325

A new method to calculate the response of the WENDI-II rem counter using the FLUKA Monte Carlo Code

NASA Astrophysics Data System (ADS)

The FHT-762 WENDI-II is a commercially available wide range neutron rem counter which uses a 3He counter tube inside a polyethylene moderator. To increase the response above 10 MeV of kinetic neutron energy, a layer of tungsten powder is implemented into the moderator shell. For the purpose of the characterization of the response, a detailed model of the detector was developed and implemented for FLUKA Monte Carlo simulations. In common practice Monte Carlo simulations are used to calculate the neutron fluence inside the active volume of the detector. The resulting fluence is then folded offline with the reaction rate of the 3He(n,p)3H reaction to yield the proton-triton production rate. Consequently this approach does not consider geometrical effects like wall effects, where one or both reaction products leave the active volume of the detector without triggering a count. This work introduces a two-step simulation method which can be used to determine the detector's response, including geometrical effects, directly, using Monte Carlo simulations. A first step simulation identifies the 3He(n,p)3H reaction inside the active volume of the 3He counter tube and records its position. In the second step simulation the tritons and protons are started in accordance with the kinematics of the 3He(n,p)3H reaction from the previously recorded positions and a correction factor for geometrical effects is determined. The three dimensional Monte Carlo model of the detector as well as the two-step simulation method were evaluated and tested in the well-defined fields of an 241Am-Be(α,n) source as well as in the field of a 252Cf source. Results were compared with measurements performed by Gutermuth et al. [1] at GSI with an 241Am-Be(α,n) source as well as with measurements performed by the manufacturer in the field of a 252Cf source. Both simulation results show very good agreement with the respective measurements.
After validating the method, the response values in terms of counts per unit fluence were calculated for 95 different incident neutron energies between 1 meV and 5 GeV.

Jägerhofer, Lukas; Feldbaumer, Eduard; Theis, Christian; Roesler, Stefan; Vincke, Helmut

2012-11-01

326

For the evaluation of gamma-ray dose rates around duct penetrations after shutdown of a nuclear fusion reactor, a calculation method is proposed that applies Monte Carlo neutron and decay gamma-ray transport calculations. For the radioisotope production rates during operation, the Monte Carlo calculation is conducted by modifying the nuclear data library, replacing a prompt gamma-ray

Satoshi SATO; Hiromasa IIDA; Takeo NISHITANI

2002-01-01

327

Exact ground state Monte Carlo method for Bosons without importance sampling

NASA Astrophysics Data System (ADS)

Generally "exact" quantum Monte Carlo computations for the ground state of many bosons make use of importance sampling. The importance sampling is based either on a guiding function or on an initial variational wave function. Here we investigate the need for importance sampling in the case of path integral ground state (PIGS) Monte Carlo. PIGS is based on a discrete imaginary time evolution of an initial wave function with a nonzero overlap with the ground state, which gives rise to a discrete path which is sampled via a Metropolis-like algorithm. In principle the exact ground state is reached in the limit of an infinite imaginary time evolution, but actual computations are based on finite time evolutions and the question is whether such computations give unbiased exact results. We have studied bulk liquid and solid 4He with PIGS by considering as initial wave function a constant, i.e., the ground state of an ideal Bose gas. This implies that the evolution toward the ground state is driven only by the imaginary time propagator, i.e., there is no importance sampling. For both phases we obtain results converging to those obtained by considering the best available variational wave function (the shadow wave function) as initial wave function. Moreover we obtain the same results even by considering wave functions with the wrong correlations, for instance, a wave function of a strongly localized Einstein crystal for the liquid phase. This convergence is true not only for diagonal properties such as the energy, the radial distribution function, and the static structure factor, but also for off-diagonal ones, such as the one-body density matrix. This robustness of PIGS can be traced back to the fact that the chosen initial wave function acts only at the beginning of the path without affecting the imaginary time propagator. From this analysis we conclude that zero temperature PIGS calculations can be as unbiased as those of finite temperature path integral Monte Carlo.
On the other hand, a judicious choice of the initial wave function greatly improves the rate of convergence to the exact results.

Rossi, M.; Nava, M.; Reatto, L.; Galli, D. E.

2009-10-01

328

Assessing the multileaf collimator effect on the build-up region using the Monte Carlo method

NASA Astrophysics Data System (ADS)

Previous Monte Carlo studies have investigated the multileaf collimator (MLC) contribution to the build-up region for fields in which the MLC leaves were fully blocking the openings defined by the collimation jaws. In the present work, we investigate the same effect but for symmetric and asymmetric MLC defined field sizes (2×2, 4×4, 10×10 and 3×7 cm2). A Varian 2100C/D accelerator with 120-leaf MLC is accurately modeled for a 6MV photon beam using the BEAMnrc/EGSnrc code. Our results indicate that particles scattered from accelerator head and MLC are responsible for the increase of about 7% on the surface dose when comparing 2×2 and 10×10 cm2 fields. We found that the MLC contribution to the total build-up dose is about 2% for the 2×2 cm2 field and less than 1% for the largest fields.

Moreno, M. Zarza; Teixeira, N.; Jesus, A. P.; Mora, G.

2008-01-01

329

A Markov-Chain Monte-Carlo Based Method for Flaw Detection in Beams

A Bayesian inference methodology using a Markov Chain Monte Carlo (MCMC) sampling procedure is presented for estimating the parameters of computational structural models. This methodology combines prior information, measured data, and forward models to produce a posterior distribution for the system parameters of structural models that is most consistent with all available data. The MCMC procedure is based upon a Metropolis-Hastings algorithm that is shown to function effectively with noisy data, incomplete data sets, and mismatched computational nodes/measurement points. A series of numerical test cases based upon a cantilever beam is presented. The results demonstrate that the algorithm is able to estimate model parameters utilizing experimental data for the nodal displacements resulting from specified forces.
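
The Metropolis-Hastings step at the core of the MCMC procedure above can be sketched in a few lines. Here a hypothetical one-dimensional Gaussian posterior stands in for the structural model; nothing below reproduces the authors' implementation.

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps, step=0.5, seed=3):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step) and
    accept with probability min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if lpp >= lp or rng.random() < math.exp(lpp - lp):
            x, lp = xp, lpp  # accept the proposal
        samples.append(x)
    return samples

# Hypothetical stand-in for the structural posterior: a Gaussian with
# mean 2.0 and sigma 0.3 (imagine a stiffness parameter).
log_post = lambda x: -0.5 * ((x - 2.0) / 0.3) ** 2
chain = metropolis_hastings(log_post, x0=0.0, n_steps=50000)
posterior_mean = sum(chain[5000:]) / len(chain[5000:])
print(posterior_mean)  # close to 2.0 after burn-in
```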

Glaser, R E; Lee, C L; Nitao, J J; Hickling, T L; Hanley, W G

2006-09-28

330

Modeling of radiation-induced bystander effect using Monte Carlo methods

NASA Astrophysics Data System (ADS)

Experiments have shown that the radiation-induced bystander effect exists in cells, tissues, and even whole organisms irradiated with energetic ions or X-rays. In this paper, a Monte Carlo model is developed to study the mechanisms of the bystander effect under sparsely populated cell conditions. This model, based on our previous experiment in which cells were sparsely distributed in a round dish, focuses mainly on spatial characteristics. The simulation results agree well with the experimental data. Moreover, another bystander-effect experiment is also computed with this model, and the model succeeds in predicting its results. The comparison of simulations with the experimental results indicates the feasibility of the model and the validity of some of the vital mechanisms assumed.

Xia, Junchao; Liu, Liteng; Xue, Jianming; Wang, Yugang; Wu, Lijun

2009-03-01

331

The spherical model of logarithmic potentials as examined by Monte Carlo methods

NASA Astrophysics Data System (ADS)

We examine Euler's equations for inviscid fluid flow by a discretized version representing the fluid as a piecewise constant finite approximation based on Voronoi cells. The strengths of these cells are constrained to conserve circulation and enstrophy. With this model we examine by a Monte Carlo Metropolis-Hastings algorithm the dependence of the system on such parameters as the number of points, the statistical mechanics temperature, and the number of sweeps used in the simulation. Tools to examine the system include the mean nearest neighbor parity, energy, distance between extreme-valued sites, and statistical study of an individual site or of all mesh site values. In negative statistical mechanics temperatures a solid-body rotation state is found. The positive-temperature state is not as strongly organized. Numerical evidence supports our expectation of a single phase transition, between positive and negative temperatures.

Lim, Chjan C.; Nebus, Joseph

2004-11-01

332

This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
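
The weight-window biasing parameters that CADIS/FW-CADIS generate are consumed by the standard splitting/roulette game, which can be sketched as follows. The window bounds and the split cap below are illustrative, not values from ADVANTG or MAVRIC.

```python
import random

def apply_weight_window(weight, w_low, w_high, rng):
    """One weight-window check: split particles above the window, play
    Russian roulette with those below, pass the rest unchanged.
    Returns the list of surviving particle weights (possibly empty)."""
    if weight > w_high:                        # split into n fair copies
        n = min(int(weight / w_high) + 1, 10)  # cap the split for safety
        return [weight / n] * n
    if weight < w_low:                         # Russian roulette
        survival_weight = 0.5 * (w_low + w_high)
        if rng.random() < weight / survival_weight:
            return [survival_weight]           # survives, weight boosted
        return []                              # killed
    return [weight]                            # inside the window

rng = random.Random(0)
print(apply_weight_window(5.0, 0.5, 2.0, rng))  # split: three copies of 5/3
print(apply_weight_window(1.0, 0.5, 2.0, rng))  # inside window: [1.0]
```

Both branches are fair games: splitting conserves total weight exactly, and roulette conserves it in expectation, so tallies remain unbiased.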

Wagner, John C. [ORNL]; Peplow, Douglas E. [ORNL]; Mosher, Scott W. [ORNL]; Evans, Thomas M. [ORNL]

2010-01-01

333

This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.

Wagner, John C. [ORNL]; Peplow, Douglas E. [ORNL]; Mosher, Scott W. [ORNL]; Evans, Thomas M. [ORNL]

2011-01-01

334

Shell model Monte Carlo method in the pn-formalism and applications to the Zr and Mo isotopes

We report the development of a new shell model Monte Carlo algorithm, which uses the proton-neutron formalism. Shell model Monte Carlo methods, within the isospin formulation, have been successfully used in large-scale shell model calculations. Motivation for this work is to extend the feasibility of these methods to shell model studies involving nonidentical proton and neutron valence spaces. We show the feasibility of the new approach with some test results. Finally, we use a realistic nucleon-nucleon interaction in the model space described by (1p1/2, 0g9/2) proton and (1d5/2, 2s1/2, 1d3/2, 0g7/2, 0h11/2) neutron orbitals above the 88Sr core to calculate ground-state energies, binding energies, B(E2) strengths, and to study pairing properties of the even-even 90-104Zr and 92-106Mo isotope chains.

Oezen, C. [Department of Physics and Astronomy, University of Tennessee, Knoxville, Tennessee 37996 (United States)]; Dean, D.J. [Physics Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States)]

2006-01-15

335

The use of cylindrical coordinates for treatment planning parameters of an elongated 192Ir source

Purpose: The doses given to the intima, media, and adventitia are very crucial quantities in intravascular brachytherapy. To facilitate accurate computerized treatment planning calculations, we have determined dose distributions in away-and-along table format around an 192Ir wire source and developed pertinent dosimetric parameters in cylindrical coordinates. Methods and Materials: The Monte Carlo method (MCNP4C code) was used to calculate the dose

Neil S Patel; Sou-Tung Chiu-Tsao; Pei Fan; Hung-Sheng Tsao; Sam F Liprie; Louis B Harrison

2001-01-01

336

NASA Astrophysics Data System (ADS)

The aim of this work is to present a new scintigraphic device able to change the length of its collimator automatically in order to adapt the spatial resolution to the gamma source distance. This patented technique removes the need for the collimator changes that standard gamma cameras still require. Monte Carlo simulations are the best tool for exploring new technological solutions for such an innovative collimation structure; they also provide a valid analysis of gamma camera performance and of the advantages and limits of the new solution. Specifically, the Monte Carlo simulations are carried out with the GEANT4 (GEometry ANd Tracking) framework, and the simulated object is a collimation method based on separate blocks that can be brought closer together or farther apart in order to reach and maintain specific spatial resolution values at all source-detector distances. To verify the accuracy and faithfulness of these simulations, we performed experimental measurements with an identical setup and conditions. This confirms the power of simulation as an extremely useful tool, especially where new technological solutions need to be studied, tested and analyzed before their practical realization. The final aim of the new collimation system is the improvement of SPECT techniques, with real control of the spatial resolution during tomographic acquisitions. This principle allowed us to simulate a tomographic acquisition of two capillaries of radioactive solution, in order to verify that they can be clearly distinguished.
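
The adaptive-collimation principle above follows from the geometric resolution of a parallel-hole collimator, R_g ≈ d(L + z)/L: lengthening the collimator for a more distant source holds R_g constant. A minimal sketch with hypothetical dimensions (septal penetration and intrinsic detector resolution neglected):

```python
def collimator_resolution(hole_d, length, source_dist):
    """Geometric resolution of a parallel-hole collimator,
    R_g = d * (L + z) / L (septal penetration neglected)."""
    return hole_d * (length + source_dist) / length

def length_for_resolution(hole_d, source_dist, target_r):
    """Collimator length that holds a target resolution at distance z,
    obtained by inverting the formula above: L = d * z / (R_g - d)."""
    return hole_d * source_dist / (target_r - hole_d)

# Hypothetical dimensions: 1.5 mm holes, source at 100 mm, and a target
# resolution of 5 mm; the required length comes out near 42.9 mm.
L = length_for_resolution(1.5, 100.0, 5.0)
print(L, collimator_resolution(1.5, L, 100.0))
```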

Trinci, G.; Massari, R.; Scandellari, M.; Boccalini, S.; Costantini, S.; di Sero, R.; Basso, A.; Sala, R.; Scopinaro, F.; Soluri, A.

2010-09-01

337

NASA Astrophysics Data System (ADS)

The goal of this study was to quantify, in a heterogeneous phantom, the difference between experimentally measured beam profiles and those calculated using both a commercial convolution algorithm and the Monte Carlo (MC) method. This was done by arranging a phantom geometry that incorporated a vertical solid water-lung material interface parallel to the beam axis. At nominal x-ray energies of 6 and 18 MV, dose distributions were modelled for field sizes of 10 × 10 cm2 and 4 × 4 cm2 using the CadPlan 6.0 commercial treatment planning system (TPS) and the BEAMnrc-DOSXYZnrc Monte Carlo package. Beam profiles were found experimentally at various depths using film dosimetry. The results showed that within the lung region the TPS had a substantial problem modelling the dose distribution. The (film-TPS) profile difference was found to increase, in the lung region, as the field size decreased and the beam energy increased; in the worst case the difference was more than 15%. In contrast, (film-MC) profile differences were not found to be affected by the material density difference. BEAMnrc-DOSXYZnrc successfully modelled the material interface and dose profiles to within 2%.
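A comparison of this kind reduces, numerically, to differencing measured and calculated profiles point by point and expressing the result as a percentage of a reference dose. The sketch below illustrates that bookkeeping; all dose values are invented for illustration and are not the film or TPS data of the study.

```python
# Point-by-point (measured - calculated) profile difference, expressed as a
# percentage of the central-axis (CAX) dose.  All numbers are hypothetical.

def profile_difference(measured, calculated, cax_dose):
    """Return per-point differences in % of the central-axis dose."""
    return [100.0 * (m - c) / cax_dose for m, c in zip(measured, calculated)]

film = [100.0, 98.0, 60.0, 42.0, 40.0]  # hypothetical film readings across the interface
tps = [100.0, 97.0, 70.0, 55.0, 41.0]   # hypothetical TPS prediction in the low-density region

diffs = profile_difference(film, tps, cax_dose=100.0)
worst = max(abs(d) for d in diffs)
print(diffs)
print(worst)
```

In the study's terms, a `worst` value above 15 inside the lung region would correspond to the reported worst-case (film-TPS) discrepancy.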

Cranmer-Sargison, G.; Beckham, W. A.; Popescu, I. A.

2004-04-01

338

A Monte Carlo method of evaluating heterogeneous effects in plate-fueled reactors

Few-group nuclear cross sections for small plate-fueled, light and heavy water test reactors are frequently generated with unit cell models that contain a homogeneous mixture of fuel, cladding, and water. The heterogeneous unit cells do not need to be represented explicitly for neutronics calculations when the plate and coolant channel thicknesses are small compared with the mean free path of neutrons. However, neutron and photon heating calculations were performed with heterogeneous fuel models to predict accurately the heat deposited in the fuel meat, cladding, and coolant. Heat deposited in the coolant channels and outside the fuel elements does not have a direct impact on the peak fuel meat temperature but must be included in the total coolant system heat balance. The results of a heterogeneous Monte Carlo calculation that estimates the heat loads in the different fuel regions are presented, and it is shown that similar homogeneous fuel models can be used for many calculations. The calculations presented here were performed on models of the Advanced Neutron Source (ANS) and the Massachusetts Institute of Technology Reactor 2 (MITR-2). The ANS is a small, 362-MW (fission), plate-fueled, heavy water reactor designed to produce an intense steady-state source of neutrons.

Thayer, R.C.; Redmond, E.L. II; Ryskamp, J.M. (Idaho National Engineering Lab., Idaho Falls (United States))

1991-01-01

339

Radiation dose performance in the triple-source CT based on a Monte Carlo method

NASA Astrophysics Data System (ADS)

A multiple-source structure is promising in the development of computed tomography, for it could effectively eliminate motion artifacts in cardiac scanning and other time-critical applications requiring high temporal resolution. However, concerns about dose performance cloud this technique, as few evaluations of the dose performance of multiple-source CT have been reported. Our experiments focus on the dose performance of one specific multiple-source CT geometry, the triple-source CT scanner, whose theory and implementation have already been well established and verified by our previous work. We modeled the triple-source CT geometry with the EGSnrc Monte Carlo radiation transport code system, and simulated CT examinations of a digital chest phantom with our modified version of the software, using an x-ray spectrum based on the data of a physical tube. A single-source CT geometry was also simulated for evaluation and comparison. The absorbed dose of each organ was calculated according to its actual physical characteristics. Results show that the absorbed radiation dose of organs with the triple-source CT is almost equal to that with the single-source CT system. Given its temporal resolution advantage, triple-source CT would thus be a better choice for x-ray cardiac examinations.

Yang, Zhenyu; Zhao, Jun

2012-10-01

340

NASA Astrophysics Data System (ADS)

Biotransformation of dissolved groundwater hydrocarbon plumes emanating from leaking underground fuel tanks should, in principle, result in plume length stabilization over relatively short distances, thus diminishing the environmental risk. However, because the behavior of hydrocarbon plumes is usually poorly constrained at most leaking underground fuel tank sites in terms of release history, groundwater velocity, dispersion, as well as the biotransformation rate, demonstrating such a limitation in plume length is problematic. Biotransformation signatures in the aquifer geochemistry, most notably elevated bicarbonate, may offer a means of constraining the relationship between plume length and the mean biotransformation rate. In this study, modeled plume lengths and spatial bicarbonate differences among a population of synthetic hydrocarbon plumes, generated through Monte Carlo simulation of an analytical solute transport model, are compared to field observations from six underground storage tank (UST) sites at military bases in California. Simulation results indicate that the relationship between plume length and the distribution of bicarbonate is best explained by biotransformation rates that are consistent with ranges commonly reported in the literature. This finding suggests that bicarbonate can indeed provide an independent means for evaluating limitations in hydrocarbon plume length resulting from biotransformation.
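The core of such a Monte Carlo exercise is repeated sampling of uncertain transport parameters through an analytical plume model. A minimal sketch, neglecting dispersion and using a steady 1-D advection plus first-order decay solution; the parameter ranges are invented assumptions, not the study's calibrated values.

```python
import random, math

# Monte Carlo sampling of plume length for a steady 1-D plume with first-order
# biotransformation (dispersion neglected): L = (v / k) * ln(C0 / C_threshold).
# Velocity and rate ranges below are illustrative, not site-specific values.

random.seed(0)

def plume_length(v, k, c0, c_thr):
    return (v / k) * math.log(c0 / c_thr)

lengths = []
for _ in range(10000):
    v = random.uniform(10.0, 100.0)  # groundwater velocity, m/yr (hypothetical)
    k = random.uniform(0.5, 5.0)     # biotransformation rate, 1/yr (hypothetical)
    lengths.append(plume_length(v, k, c0=10.0, c_thr=0.005))

lengths.sort()
median = lengths[len(lengths) // 2]
print(round(median, 1))  # median stabilized plume length, m
```

The spread of `lengths` across realizations is the synthetic-plume population against which field observations would be compared.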

McNab, Walt W.

2001-02-01

341

A Monte Carlo boundary propagation method for the solution of Poisson's equation

Too often the parallelism of a computational algorithm is used (or advertised) as a desirable measure of its performance. That is, the higher the computational parallelism, the better the expected performance. With the current interest and emphasis on massively parallel computer systems, the notion of highly parallel algorithms is the subject of many conferences and funding proposals. Unfortunately, the "revolution" that this vision promises has served to further complicate the measure of parallel performance by the introduction of such notions as scaled speedup and scalable systems. As a counterexample to the merits of highly parallel algorithms whose parallelism scales linearly with increasing problem size, we introduce a slight modification to a highly parallel Monte Carlo technique that is used to estimate the solution of Poisson's equation. This simple modification is shown to yield a much better estimate of the solution by incorporating a more efficient use of boundary data (Dirichlet boundary conditions). A by-product of this new algorithm is a much more efficient sequential algorithm, but at the expense of sacrificing parallelism.
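The "embarrassingly parallel" baseline the abstract refers to can be sketched as a fixed-step random walk: each independent walk from an interior grid node wanders until it hits the boundary and scores the Dirichlet boundary value there, and the sample mean estimates the solution of Laplace's equation (the f = 0 case of Poisson's equation) at that node. Grid size and boundary data below are illustrative.

```python
import random

# Random-walk Monte Carlo estimate of Laplace's equation on an (N+1)x(N+1)
# grid with Dirichlet boundary data.  Each walk is independent, which is what
# makes the unmodified method highly parallel.

random.seed(1)
N = 10  # grid nodes 0..N in each direction

def boundary_value(i, j):
    return 1.0 if j == N else 0.0  # u = 1 on the top edge, 0 elsewhere (toy data)

def walk_estimate(i0, j0, n_walks=2000):
    total = 0.0
    for _ in range(n_walks):
        i, j = i0, j0
        while 0 < i < N and 0 < j < N:        # wander until the boundary is hit
            di, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            i, j = i + di, j + dj
        total += boundary_value(i, j)         # score the boundary datum
    return total / n_walks

u_center = walk_estimate(N // 2, N // 2)
print(round(u_center, 2))  # by symmetry the exact value at the centre is 0.25
```

The boundary-propagation modification of the abstract improves on this baseline by reusing boundary information more efficiently, at the cost of the walks' independence.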

Hiromoto, R.; Brickner, R.G.

1990-01-01

342

Based on the digital image analysis and inverse Monte-Carlo method, the proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)

Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V [Department of Optics and Biomedical Physics, N.G.Chernyshevskii Saratov State University (Russian Federation)

2006-12-31

343

Simulation of Mach-Effect Illusion Using Three-Layered Retinal Cell Model and Monte Carlo Method

NASA Astrophysics Data System (ADS)

We proposed a novel retinal model capable of simulating the Mach effect, an optical illusion that emphasizes the edges of an object. The model was constructed from a rod cell layer, a bipolar cell layer, and a ganglion cell layer. Lateral inhibition and perceptive-field networks were introduced between the layers. Photoelectric conversion for a single-photon incidence at each rod cell was defined by an equation, and the input to the model was simulated as a distribution of photons transmitted through the input image for consecutive incidences by the Monte Carlo method. Since this model successfully simulated not only the Mach-effect illusion but also a DOG-like (Difference-of-Gaussians-like) profile for a spot-light incidence, the model was considered to functionally form the perceptive field of the retinal ganglion cell.

Ueno, Akinori; Arai, Ken; Miyashita, Osamu

344

NASA Astrophysics Data System (ADS)

To estimate the error of the scattering coefficient spectrum determined by using a double-integrating-sphere system and the inverse Monte Carlo method, the optical properties of a tissue phantom were measured. The tissue phantom was composed of hemoglobin, intralipid and gelatin. The thickness of the samples (0.1-1.0 mm) and the hemoglobin concentration (0.5-4.0 mg/ml) were varied, and the effects on the optical property spectra were investigated. As a result, when the value of μa was large, the μs' spectrum was not consistent with scattering theory. The higher the hemoglobin concentration of the samples, the larger the errors in the μs' spectra; the thinner the sample, the smaller the errors. However, the μa spectrum was not accurate when the sample was thin. It was predicted that the μs' spectrum was accurate when the sample thickness was 0.1 mm, and that the μa spectrum was accurate when the sample thickness was 1.0 mm.

Terada, Takaya; Nanjo, Takuya; Honda, Norihiro; Ishii, Katsunori; Awazu, Kunio

2011-02-01

345

NASA Astrophysics Data System (ADS)

Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review on recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited for the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential to fit MaxEnt spatio-temporal models to large neural ensembles.
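The Monte Carlo step underlying such fitting can be illustrated on a pairwise (Ising-like) MaxEnt model of binary spike patterns, sampled with a single-site Metropolis update. The fields and couplings below are arbitrary illustrative values, not parameters fitted to neural data.

```python
import random, math

# Metropolis sampling of binary patterns s in {0,1}^n from a pairwise MaxEnt
# model P(s) proportional to exp(sum_i h_i*s_i + sum_{i<j} J_ij*s_i*s_j).
# Fields h and couplings J are arbitrary demo values.

random.seed(2)
n = 4
h = [0.2, -0.1, 0.0, 0.3]
J = {(0, 1): 0.5, (1, 2): -0.4, (2, 3): 0.3}

def log_weight(s):
    lw = sum(h[i] * s[i] for i in range(n))
    lw += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return lw

def metropolis(n_steps=5000):
    s = [0] * n
    counts = [0] * n
    for _ in range(n_steps):
        i = random.randrange(n)            # propose flipping one unit
        proposal = list(s)
        proposal[i] = 1 - proposal[i]
        d = log_weight(proposal) - log_weight(s)
        if d >= 0 or random.random() < math.exp(d):
            s = proposal
        for k in range(n):
            counts[k] += s[k]
    return [c / n_steps for c in counts]   # estimated firing rates <s_k>

rates = metropolis()
print([round(r, 2) for r in rates])
```

In a fitting loop, such estimated moments would be compared with the data's empirical moments and the parameters adjusted; the spatio-temporal (Markovian) case samples over time-windowed patterns instead of single time bins.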

Nasser, Hassan; Marre, Olivier; Cessac, Bruno

2013-03-01

346

NASA Astrophysics Data System (ADS)

Two-dimensional pattern reverse Monte Carlo (2D pattern RMC) analysis is performed to model the structures of nano-particles in uniaxially elongated rubbers using two-dimensional patterns of structure factor of the nano-particles obtained by time-resolved two-dimensional ultra-small angle x-ray scattering. Four spot patterns are observed for a large elongation ratio and the shapes change with increasing elongation ratio. We performed the 2D pattern RMC method for the uniaxial system in order to make a model of the structures from the two-dimensional structure factors. The preliminary results of the 2D pattern RMC analysis of the two-dimensional structure factors of silica particles in a uniaxially elongated styrene-butadiene rubber are presented.
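The acceptance rule at the heart of any RMC analysis, 2-D or otherwise, compares a goodness-of-fit before and after a random particle move. Below is a loose 1-D sketch in which a toy pair-distance histogram stands in for the measured structure factor; the target values are invented, and real 2D-pattern RMC compares full two-dimensional S(q) patterns instead.

```python
import random, math

# Reverse Monte Carlo sketch: perturb a particle configuration, recompute a
# model "structure factor" (here a toy pair-distance histogram), and accept
# the move with probability exp(-(chi2_new - chi2_old) / 2).

random.seed(3)

def model_histogram(xs, bins=5, box=10.0):
    hist = [0] * bins
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            d = abs(xs[i] - xs[j]) % box
            hist[min(int(d / box * bins), bins - 1)] += 1
    return hist

def chi2(model, target):
    return sum((m - t) ** 2 for m, t in zip(model, target))

target = [4, 3, 2, 1, 0]  # illustrative target histogram, not measured data
xs = [random.uniform(0, 10) for _ in range(5)]
cur = chi2(model_histogram(xs), target)
for _ in range(2000):
    i = random.randrange(len(xs))
    trial = list(xs)
    trial[i] = (trial[i] + random.uniform(-1, 1)) % 10.0  # random particle move
    new = chi2(model_histogram(trial), target)
    if new < cur or random.random() < math.exp(-(new - cur) / 2.0):
        xs, cur = trial, new
print(cur)
```

The elongated-rubber case adds an affine deformation of the configuration box and a chi-squared taken over the anisotropic 2-D pattern, but the accept/reject logic is the same.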

Hagita, K.; Arai, T.; Kishimoto, H.; Umesaki, N.; Shinohara, Y.; Amemiya, Y.

2007-08-01

347

Two-dimensional pattern reverse Monte Carlo (2D pattern RMC) analysis is performed to model the structures of nano-particles in uniaxially elongated rubbers using two-dimensional patterns of structure factor of the nano-particles obtained by time-resolved two-dimensional ultra-small angle x-ray scattering. Four spot patterns are observed for a large elongation ratio and the shapes change with increasing elongation ratio. We performed the 2D pattern RMC method for the uniaxial system in order to make a model of the structures from the two-dimensional structure factors. The preliminary results of the 2D pattern RMC analysis of the two-dimensional structure factors of silica particles in a uniaxially elongated styrene-butadiene rubber are presented. PMID:21694140

Hagita, K; Arai, T; Kishimoto, H; Umesaki, N; Shinohara, Y; Amemiya, Y

2007-07-05

348

This paper analyzes the recalculation, by the Monte Carlo method, of treatment plans for patients with tumors of the lung and of the head and neck. Results are presented in which the Ray-Tracing algorithm understates the dose by 29%. Mandatory recalculation of the dose by the Monte Carlo method is proposed when planning exposure for patients with lung and head-and-neck tumors, to eliminate significant systematic errors in the delivered dose. PMID:23814835

Spizhenko, N Iu; Luchkovskiĭ, S N; Chebotarėva, T I; Karnaukhova, A A; Bobrov, O E; Burik, V M

2013-01-01

349

A method for photon beam Monte Carlo multileaf collimator particle transport.

Monte Carlo (MC) algorithms are recognized as the most accurate methodology for patient dose assessment. For intensity-modulated radiation therapy (IMRT) delivered with dynamic multileaf collimators (DMLCs), accurate dose calculation, even with MC, is challenging. Accurate IMRT MC dose calculations require inclusion of the moving MLC in the MC simulation. Due to its complex geometry, full transport through the MLC can be time consuming. The aim of this work was to develop an MLC model for photon beam MC IMRT dose computations. The basis of the MC MLC model is that the complex MLC geometry can be separated into simple geometric regions, each of which readily lends itself to simplified radiation transport. For photons, only attenuation and first Compton scatter interactions are considered. The amount of attenuation material an individual particle encounters while traversing the entire MLC is determined by adding the individual amounts from each of the simplified geometric regions. Compton scatter is sampled based upon the total thickness traversed. Pair production and electron interactions (scattering and bremsstrahlung) within the MLC are ignored. The MLC model was tested for 6 MV and 18 MV photon beams by comparing it with measurements and MC simulations that incorporate the full physics and geometry for fields blocked by the MLC and with measurements for fields with the maximum possible tongue-and-groove and tongue-or-groove effects, for static test cases and for sliding windows of various widths. The MLC model predicts the field size dependence of the MLC leakage radiation within 0.1% of the open-field dose. The entrance dose and beam hardening behind a closed MLC are predicted within +/- 1% or 1 mm. Dose undulations due to differences in inter- and intra-leaf leakage are also correctly predicted. The MC MLC model predicts leaf-edge tongue-and-groove dose effect within +/- 1% or 1 mm for 95% of the points compared at 6 MV and 88% of the points compared at 18 MV. 
The dose through a static leaf tip is also predicted generally within +/- 1% or 1 mm. Tests with sliding windows of various widths confirm the accuracy of the MLC model for dynamic delivery and indicate that accounting for a slight leaf position error (0.008 cm for our MLC) will improve the accuracy of the model. The MLC model developed is applicable to both dynamic MLC and segmental MLC IMRT beam delivery and will be useful for patient IMRT dose calculations, pre-treatment verification of IMRT delivery and IMRT portal dose transmission dosimetry. PMID:12361220
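The paper's attenuation bookkeeping can be sketched compactly: the material thickness a ray crosses is summed over the simplified geometric regions of the MLC, and the photon's no-interaction (transmission) probability follows from the total thickness as exp(-mu * t_total). The region thicknesses and attenuation coefficient below are illustrative numbers, not actual MLC data.

```python
import math

# Survival probability of a photon traversing an MLC decomposed into simple
# geometric regions: thicknesses add, then exp(-mu * t_total).
# MU is a hypothetical linear attenuation coefficient, not tabulated data.

MU = 0.5  # 1/cm, illustrative value for the leaf material at megavoltage energy

def total_thickness(region_thicknesses):
    """Sum the per-region path lengths a particle accumulates in the MLC."""
    return sum(region_thicknesses)

def transmission(region_thicknesses, mu=MU):
    return math.exp(-mu * total_thickness(region_thicknesses))

# A ray crossing three simplified regions (e.g. leaf body, tip, tongue/groove):
t = transmission([4.0, 1.5, 0.5])
print(round(t, 4))
```

In the full model, a first Compton scatter is then sampled from this same total thickness, while pair production and electron interactions inside the MLC are ignored, as the abstract states.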

Siebers, Jeffrey V; Keall, Paul J; Kim, Jong Oh; Mohan, Radhe

2002-09-01

350

A simple and easily implemented Monte Carlo algorithm is described which enables configurational-bias sampling of molecules containing branch points and rings with endocyclic and exocyclic atoms. The method overcomes well-known problems associated with sequential configurational-bias sampling methods. A "reservoir" or "library" of fragments is generated with known probability distributions dependent on stiff intramolecular degrees of freedom. Configurational-bias moves assemble the fragments into whole molecules using the energy associated with the remaining degrees of freedom. The methods for generating the fragments are validated on models of propane, isobutane, neopentane, cyclohexane, and methylcyclohexane. It is shown how the sampling method is implemented in the Gibbs ensemble, and validation studies are performed in which the liquid coexistence curves of propane, isobutane, and 2,2-dimethylhexane are computed and shown to agree with accepted values. The method is general and can be used to sample conformational space for molecules of arbitrary complexity in both open and closed statistical mechanical ensembles. PMID:21992296
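The fragment-selection step of a configurational-bias move can be sketched as Rosenbluth-weighted sampling: of k trial placements of a reservoir fragment, one is chosen with probability proportional to its Boltzmann factor in the remaining (soft) degrees of freedom. The trial energies below are arbitrary illustrative numbers, not values from any force field.

```python
import random, math

# Rosenbluth selection among k trial fragment placements: pick trial i with
# probability exp(-beta * U_i) / W, where W is the Rosenbluth weight used in
# the move's acceptance rule.  Energies are arbitrary demo values.

random.seed(4)
BETA = 1.0

def select_trial(trial_energies):
    """Pick one trial by its Boltzmann weight; return (index, Rosenbluth weight W)."""
    weights = [math.exp(-BETA * u) for u in trial_energies]
    W = sum(weights)
    r = random.random() * W
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i, W
    return len(weights) - 1, W  # guard against floating-point round-off

# k = 4 trial placements of a fragment drawn from the reservoir:
idx, W = select_trial([0.2, 1.5, 0.4, 3.0])
print(idx, round(W, 3))
```

Because the stiff degrees of freedom are already distributed correctly inside the reservoir fragments, only the soft-interaction energies enter this selection, which is what sidesteps the sequential-growth problems the abstract mentions.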

Shah, Jindal K; Maginn, Edward J

2011-10-01

351

NASA Astrophysics Data System (ADS)

A simple and easily implemented Monte Carlo algorithm is described which enables configurational-bias sampling of molecules containing branch points and rings with endocyclic and exocyclic atoms. The method overcomes well-known problems associated with sequential configurational-bias sampling methods. A "reservoir" or "library" of fragments is generated with known probability distributions dependent on stiff intramolecular degrees of freedom. Configurational-bias moves assemble the fragments into whole molecules using the energy associated with the remaining degrees of freedom. The methods for generating the fragments are validated on models of propane, isobutane, neopentane, cyclohexane, and methylcyclohexane. It is shown how the sampling method is implemented in the Gibbs ensemble, and validation studies are performed in which the liquid coexistence curves of propane, isobutane, and 2,2-dimethylhexane are computed and shown to agree with accepted values. The method is general and can be used to sample conformational space for molecules of arbitrary complexity in both open and closed statistical mechanical ensembles.

Shah, Jindal K.; Maginn, Edward J.

2011-10-01

352

NASA Astrophysics Data System (ADS)

The purpose of this work was to extend the verification of Monte Carlo based methods for estimating radiation dose in computed tomography (CT) exams beyond a single CT scanner to a multidetector CT (MDCT) scanner, and from cylindrical CTDI phantom measurements to both cylindrical and physical anthropomorphic phantoms. Both cylindrical and physical anthropomorphic phantoms were scanned on an MDCT under the specified conditions. A pencil ionization chamber was used to record exposure for the cylindrical phantom, while MOSFET (metal oxide semiconductor field effect transistor) detectors were used to record exposure at the surface of the anthropomorphic phantom. Reference measurements were made in air at isocentre using the pencil ionization chamber under the specified conditions. Detailed Monte Carlo models were developed for the MDCT scanner to describe the x-ray source (spectra, bowtie filter, etc) and geometry factors (distance from focal spot to isocentre, source movement due to axial or helical scanning, etc). Models for the cylindrical (CTDI) phantoms were available from the previous work. For the anthropomorphic phantom, CT image data were used to create a detailed voxelized model of the phantom's geometry. Anthropomorphic phantom material compositions were provided by the manufacturer. A simulation of the physical scan was performed using the mathematical models of the scanner, phantom and specified scan parameters. Tallies were recorded at specific voxel locations corresponding to the MOSFET physical measurements. Simulations of air scans were performed to obtain normalization factors to convert results to absolute dose values. For the CTDI body (32 cm) phantom, measurements and simulation results agreed to within 3.5% across all conditions. For the anthropomorphic phantom, measured surface dose values from a contiguous axial scan showed significant variation and ranged from 8 mGy/100 mAs to 16 mGy/100 mAs. 
Results from helical scans of overlapping pitch (0.9375) and extended pitch (1.375) were also obtained. Comparisons between the MOSFET measurements and the absolute dose value derived from the Monte Carlo simulations demonstrate agreement in terms of absolute dose values as well as the spatially varying characteristics. This work demonstrates the ability to extend models from a single detector scanner using cylindrical phantoms to an MDCT scanner using both cylindrical and anthropomorphic phantoms. Future work will be extended to voxelized patient models of different sizes and to other MDCT scanners.

DeMarco, J. J.; Cagnon, C. H.; Cody, D. D.; Stevens, D. M.; McCollough, C. H.; O'Daniel, J.; McNitt-Gray, M. F.

2005-09-01

353

A Monte Carlo sampling method for drawing representative samples from large databases

Sampling is important in areas like data mining, OLAP, selectivity estimation, clustering, etc. It has also become a necessity in social, economic, engineering, scientific, and statistical studies where databases are too large to handle. In this paper, a sampling method based on the Metropolis algorithm is proposed. Unlike the conventional uniform sampling methods, this method is able to select objects
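The Metropolis idea transfers to record sampling as follows: propose a random record, accept it with probability given by the ratio of record weights, and the chain's visit frequencies converge to the weight distribution without ever computing a normalizing constant over the full database. A toy sketch with five records and invented weights:

```python
import random

# Metropolis sampling of records in proportion to a per-record weight,
# using only weight ratios.  The weights are illustrative, standing in for
# whatever "importance" criterion the application defines.

random.seed(5)
weights = [1.0, 5.0, 1.0, 10.0, 3.0]  # hypothetical importance of 5 records

def metropolis_sample(n_steps=20000):
    counts = [0] * len(weights)
    i = random.randrange(len(weights))
    for _ in range(n_steps):
        j = random.randrange(len(weights))            # uniform proposal
        if random.random() < weights[j] / weights[i]:  # Metropolis acceptance
            i = j
        counts[i] += 1
    return counts

counts = metropolis_sample()
print(counts)  # record 3 (weight 10) is visited most often
```

This is what distinguishes the method from conventional uniform sampling: the selection is biased toward "representative" objects while remaining tractable on tables too large to scan exhaustively.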

Hong Guo; Wen-Chi Hou; Feng Yan; Qiang Zhu

2004-01-01

354

A Monte Carlo Sampling Method for Drawing Representative Samples from Large Databases

Sampling is important in areas like data mining, OLAP, selectivity estimation, clustering, etc. It has also become a necessity in social, economic, engineering, scientific, and statistical studies where databases are too large to handle. In this paper, a sampling method based on the Metropolis algorithm is proposed. Unlike the conventional uniform sampling methods, this method is able to select objects

Hong Guo; Wen-chi Hou; Feng Yan; Qiang Zhu

2004-01-01

355

Monte Carlo Methods for Appraisal and Valuation: A Case Study of a Nuclear Power Plant

Appraisals typically are conducted using four standard methods approved by the American Society of Appraisers. For large-scale, technically unique projects, such as chemical and power plants, and old industrial practices, these standard methods are insufficient. These types of projects contain political, technical, and economic risks that are not accounted for in standard valuation methods. To include these risks in an
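The mechanics of such a valuation reduce to simulating uncertain cash-flow drivers and discounting the results. A toy sketch follows; every figure (upfront cost, price distribution, availability range, revenue scaling) is invented for illustration and bears no relation to any real plant.

```python
import random

# Monte Carlo valuation sketch: sample uncertain electricity price and plant
# availability each year, discount the cash flows, and summarize the NPV
# distribution rather than reporting a single point estimate.

random.seed(6)

def simulate_npv(years=20, rate=0.08):
    npv = -1000.0                              # hypothetical upfront cost, $M
    for t in range(1, years + 1):
        price = random.gauss(40.0, 8.0)        # $/MWh, illustrative distribution
        availability = random.uniform(0.7, 0.95)
        revenue = price * availability * 8.0   # $M/yr, toy scaling factor
        cost = 120.0                           # $M/yr fixed O&M, invented
        npv += (revenue - cost) / (1 + rate) ** t
    return npv

npvs = sorted(simulate_npv() for _ in range(5000))
p5, p95 = npvs[len(npvs) // 20], npvs[-(len(npvs) // 20)]
print(round(p5, 1), round(p95, 1))  # a risk band, not a single appraisal value
```

The percentile band is the point of the exercise: political, technical, and economic risks enter as distributions on the inputs, which the four standard appraisal methods cannot represent.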

David C. Rode; Paul S. Fischbeck; Steve R. Dean

2001-01-01

356

The computational modeling of medical imaging systems often requires obtaining a large number of simulated images with low statistical uncertainty, which translates into prohibitive computing times. We describe a novel hybrid approach for Monte Carlo simulations that maximizes utilization of CPUs and GPUs in modern workstations. We apply the method to the modeling of indirect x-ray detectors using a new and improved version of the code MANTIS, an open source software tool used for the Monte Carlo simulations of indirect x-ray imagers. We first describe a GPU implementation of the physics and geometry models in fastDETECT2 (the optical transport model) and a serial CPU version of the same code. We discuss its new features, such as on-the-fly column geometry and columnar crosstalk, in relation to the MANTIS code, and point out areas where our model provides more flexibility for the modeling of realistic columnar structures in large area detectors. Second, we modify PENELOPE (the open source software package that handles the x-ray and electron transport in MANTIS) to allow direct output of the location and energy deposited during x-ray and electron interactions occurring within the scintillator. This information is then handled by optical transport routines in fastDETECT2. A load balancer dynamically allocates optical transport showers to the GPU and CPU computing cores. Our hybridMANTIS approach achieves a significant speed-up factor of 627 when compared to MANTIS and of 35 when compared to the same code running only in a CPU instead of a GPU. Using hybridMANTIS, we successfully hide hours of optical transport time by running it in parallel with the x-ray and electron transport, thus shifting the computational bottleneck from optical to x-ray transport. The new code requires much less memory than MANTIS and, as a result, allows us to efficiently simulate large area detectors. PMID:22469917

Sharma, Diksha; Badal, Andreu; Badano, Aldo

2012-04-02

357

We present, for the first time in the literature, a new scheme of the kinetic Monte Carlo method applied to a grand canonical ensemble, which we call hereafter GC-kMC. It was shown recently that the kinetic Monte Carlo (kMC) scheme is a very effective tool for the analysis of equilibrium systems. It had been applied in a canonical ensemble to describe vapor-liquid equilibrium of argon over a wide range of temperatures, and gas adsorption on a graphite open surface and in graphitic slit pores. However, in spite of the conformity of canonical and grand canonical ensembles, the latter is more relevant for the correct description of open systems; for example, the hysteresis loop observed in adsorption of gases in pores under sub-critical conditions can only be described with a grand canonical ensemble. Therefore, the present paper is aimed at an extension of kMC to open systems. The developed GC-kMC was proved to be consistent with the results obtained with the canonical kMC (C-kMC) for argon adsorption on a graphite surface at 77 K and in graphitic slit pores at 87.3 K. We showed that in slit micropores the hexagonal packing in the layers adjacent to the pore walls is observed at high loadings even at temperatures above the triple point of the bulk phase. The potential and applicability of the GC-kMC are further shown with the correct description of the heat of adsorption and the pressure tensor of the adsorbed phase. PMID:22767023

Ustinov, E A; Do, D D

2012-07-06

358

Estimates of radiation absorbed doses from radionuclides internally deposited in a pregnant woman and her fetus are very important due to elevated fetal radiosensitivity. This paper reports a set of specific absorbed fractions (SAFs) for use with the dosimetry schema developed by the Society of Nuclear Medicine's Medical Internal Radiation Dose (MIRD) Committee. The calculations were based on three newly constructed pregnant female anatomic models, called RPI-P3, RPI-P6, and RPI-P9, that represent adult females at 3-, 6-, and 9-month gestational periods, respectively. Advanced Boundary REPresentation (BREP) surface-geometry modeling methods were used to create anatomically realistic geometries and organ volumes that were carefully adjusted to agree with the latest ICRP reference values. A Monte Carlo user code, EGS4-VLSI, was used to simulate internal photon emitters ranging from 10 keV to 4 MeV. SAF values were calculated and compared with previous data derived from stylized models of simplified geometries and with a model of a 7.5-month pregnant female developed previously from partial-body CT images. The results show considerable differences between these models for low energy photons, but generally good agreement at higher energies. These differences are caused mainly by different organ shapes and positions. Other factors, such as the organ mass, the source-to-target-organ centroid distance, and the Monte Carlo code used in each study, played lesser roles in the observed differences. Since the SAF values reported in this study are based on models that are anatomically more realistic than previous models, these data are recommended for future applications as standard reference values in internal dosimetry involving pregnant females.

Shi, C. Y.; Xu, X. George; Stabin, Michael G.

2008-01-01

359

NASA Astrophysics Data System (ADS)

In performing a Bayesian analysis of astronomical data, two difficult problems often emerge. First, in estimating the parameters of some model for the data, the resulting posterior distribution may be multimodal or exhibit pronounced (curving) degeneracies, which can cause problems for traditional Markov Chain Monte Carlo (MCMC) sampling methods. Secondly, in selecting between a set of competing models, calculation of the Bayesian evidence for each model is computationally expensive using existing methods such as thermodynamic integration. The nested sampling method, introduced by Skilling, has greatly reduced the computational expense of calculating evidence and also produces posterior inferences as a by-product. This method has been applied successfully in cosmological applications by Mukherjee, Parkinson & Liddle, but their implementation was efficient only for unimodal distributions without pronounced degeneracies. Shaw, Bridges & Hobson recently introduced a clustered nested sampling method which is significantly more efficient in sampling from multimodal posteriors and also determines the expectation and variance of the final evidence from a single run of the algorithm, hence providing a further increase in efficiency. In this paper, we build on the work of Shaw et al. and present three new methods for sampling and evidence evaluation from distributions that may contain multiple modes and significant degeneracies in very high dimensions; we also present an even more efficient technique for estimating the uncertainty on the evaluated evidence. These methods lead to a further substantial improvement in sampling efficiency and robustness, and are applied to two toy problems to demonstrate the accuracy and economy of the evidence calculation and parameter estimation. Finally, we discuss the use of these methods in performing Bayesian object detection in astronomical data sets, and show that they significantly outperform existing MCMC techniques. 
An implementation of our methods will be publicly released shortly.
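At its core, nested sampling replaces the worst of N "live" points with a new prior draw at higher likelihood, accumulating the evidence as Z ≈ Σ L_i w_i over shrinking prior-mass shells. Below is a bare-bones single-mode sketch on a toy 1-D Gaussian likelihood with a uniform prior on [-5, 5]; real implementations, including the clustered and multimodal methods discussed above, replace the naive rejection step with far more efficient constrained sampling.

```python
import random, math

# Bare-bones nested sampling on a toy problem: Gaussian likelihood, uniform
# prior on [-5, 5].  Analytically the evidence is Z = (1/10) * (integral of
# the Gaussian) ~= 0.1, i.e. ln Z ~= -2.30.  Constrained-prior draws are made
# by simple rejection, which only works in this easy 1-D setting.

random.seed(7)

def loglike(x):
    return -0.5 * x * x - 0.5 * math.log(2 * math.pi)

def logaddexp(a, b):
    if a == -math.inf:
        return b
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

N = 100
live = [random.uniform(-5, 5) for _ in range(N)]
logZ = -math.inf
log_width = math.log(1.0 - math.exp(-1.0 / N))  # prior mass of the first shell
for _ in range(600):
    worst = min(range(N), key=lambda i: loglike(live[i]))
    logZ = logaddexp(logZ, loglike(live[worst]) + log_width)
    threshold = loglike(live[worst])
    while True:                                  # draw from the constrained prior
        x = random.uniform(-5, 5)
        if loglike(x) > threshold:
            live[worst] = x
            break
    log_width -= 1.0 / N                         # shells shrink by a factor e^(1/N)

print(round(logZ, 2))  # should land near the analytic ln Z of about -2.30
```

The cost of the rejection step grows exponentially as the constrained region shrinks, which is precisely the inefficiency that ellipsoidal and clustered replacement schemes are designed to remove.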

Feroz, F.; Hobson, M. P.

2008-02-01

360

NASA Astrophysics Data System (ADS)

At re-entry velocities it is generally agreed that the radiation associated with transitions between excited electronic states of atoms and molecules is responsible for the bulk of the thermal radiation emitted from the shock wave area. This paper deals with the evaluation of thermal radiation emitted from hypersonic shock waves in real air using the Direct Simulation Monte Carlo method. The calculation of electronic excitation is made without assuming equilibrium for the distribution of the energy states and measured or theoretically evaluated cross sections are used to determine the electronic excitation of atoms and molecules in the flow and the subsequent thermal radiation. The results with this new scheme are compared with available experimental data and existing numerical methods. The test cases are based on an AVCO Everett shock tube experiment and on the two dimensional flow field of a blunted Mars-net re-entry vehicle. The method is in good agreement with both experimental data and results given by other methods. Discrepancies are evaluated and discussed.

Gallis, M. A.; Harvey, J. K.

1993-07-01

361

A voxel-based mouse for internal dose calculations using Monte Carlo simulations (MCNP)

NASA Astrophysics Data System (ADS)

Murine models are useful for targeted radiotherapy pre-clinical experiments. These models can help to assess the potential interest of new radiopharmaceuticals. In this study, we developed a voxel-based mouse for dosimetric estimates. A female nude mouse (30 g) was frozen and cut into slices. High-resolution digital photographs were taken directly on the frozen block after each section. Images were segmented manually. Monoenergetic photon or electron sources were simulated using the MCNP4c2 Monte Carlo code for each source organ, in order to give tables of S-factors (in Gy Bq-1 s-1) for all target organs. Results obtained from monoenergetic particles were then used to generate S-factors for several radionuclides of potential interest in targeted radiotherapy. Thirteen source and 25 target regions were considered in this study. For each source region, 16 photon and 16 electron energies were simulated. Absorbed fractions, specific absorbed fractions and S-factors were calculated for 16 radionuclides of interest for targeted radiotherapy. The results obtained generally agree well with data published previously. For electron energies ranging from 0.1 to 2.5 MeV, the self-absorbed fraction varies from 0.98 to 0.376 for the liver, and from 0.89 to 0.04 for the thyroid. Electrons cannot be considered as 'non-penetrating' radiation for energies above 0.5 MeV for mouse organs. This observation can be generalized to radionuclides: for example, the beta self-absorbed fraction for the thyroid was 0.616 for I-131; absorbed fractions for Y-90 for left kidney-to-left kidney and for left kidney-to-spleen were 0.486 and 0.058, respectively. Our voxel-based mouse allowed us to generate a dosimetric database for use in preclinical targeted radiotherapy experiments.
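The dosimetric bookkeeping behind such S-factor tables is compact: a MIRD-style S-factor is the mean energy emitted per decay times the Monte Carlo absorbed fraction, divided by the target-region mass. The numbers in the sketch below are invented for illustration and are not values from this study.

```python
# MIRD-style S-factor from a Monte Carlo absorbed fraction:
# S(target <- source) = Delta * AF / m_target, with Delta the mean energy
# emitted per decay and AF the absorbed fraction tallied by the simulation.

MEV_TO_J = 1.602e-13  # joules per MeV

def s_factor(energy_mev_per_decay, absorbed_fraction, target_mass_kg):
    """S-factor in Gy/(Bq*s): energy absorbed per decay divided by target mass."""
    return energy_mev_per_decay * MEV_TO_J * absorbed_fraction / target_mass_kg

# Hypothetical example: 0.5 MeV mean beta energy per decay, self-absorbed
# fraction 0.9, in a 1.5 g mouse liver:
s = s_factor(0.5, 0.9, 1.5e-3)
print(s)  # Gy Bq^-1 s^-1
```

The study's observation that electrons are not "non-penetrating" above 0.5 MeV shows up in exactly this formula: the self-absorbed fraction drops below 1, and cross-organ terms acquire non-zero absorbed fractions.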

Bitar, A.; Lisbona, A.; Thedrez, P.; Sai Maurel, C.; LeForestier, D.; Barbet, J.; Bardies, M.

2007-02-01

362

NASA Astrophysics Data System (ADS)

Parametric uncertainty in groundwater modeling is commonly assessed using the first-order second-moment method, which yields linear confidence/prediction intervals. More advanced techniques are able to produce nonlinear confidence/prediction intervals that are more accurate than the linear intervals for nonlinear models. However, both methods are restricted to certain assumptions, such as normality of model parameters. We developed a Markov Chain Monte Carlo (MCMC) method to directly investigate the parametric distributions and confidence/prediction intervals. The MCMC results are used to evaluate the accuracy of the linear and nonlinear confidence/prediction intervals. The MCMC method is applied to nonlinear surface complexation models developed by Kohler et al. (1996) to simulate reactive transport of uranium (VI). The breakthrough data of Kohler et al. (1996), obtained from a series of column experiments, are used as the basis of the investigation. The calibrated parameters of the models are the equilibrium constants of the surface complexation reactions and the fractions of functional groups. The Morris method sensitivity analysis shows that all of the parameters exhibit highly nonlinear effects on the simulation. The MCMC method is combined with a traditional optimization method to improve computational efficiency. The parameters of the surface complexation models are first calibrated using a global optimization technique, multi-start quasi-Newton BFGS, which employs an approximation to the Hessian. The parameter correlation is measured by the covariance matrix computed via the Fisher information matrix. Parameter ranges are necessary to improve convergence of the MCMC simulation, even when the adaptive Metropolis method is used. The MCMC results indicate that the parameters do not necessarily follow a normal distribution and that the nonlinear intervals are more accurate than the linear intervals for the nonlinear surface complexation models. In comparison with the linear and nonlinear prediction intervals, the MCMC prediction intervals are more robust in simulating the breakthrough curves that were not used for parameter calibration and estimation of parameter distributions.
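A random-walk Metropolis sampler is the core move behind the adaptive Metropolis scheme mentioned in the abstract; this minimal single-parameter sketch uses a toy Gaussian target, not the surface complexation model, and omits the adaptation of the proposal:

```python
import math
import random

def metropolis(log_post, x0, step, n, seed=1):
    """Random-walk Metropolis: propose x' = x + N(0, step),
    accept with probability min(1, post(x')/post(x))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:  # accept/reject step
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy target: standard normal log-density (up to a constant).
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 1.0, 20000)
mean = sum(chain) / len(chain)
```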

Miller, G. L.; Lu, D.; Ye, M.; Curtis, G. P.; Mendes, B. S.; Draper, D.

2010-12-01

363

Efficient Markov Chain Monte Carlo Methods for Decoding Neural Spike Trains

Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed into the spike trains of a group of neurons. The form of the GLM likelihood ensures that the posterior

Yashar Ahmadian; Jonathan W. Pillow; Liam Paninski

2011-01-01

364

Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes

The calculations presented compare the performance of three Monte Carlo codes, PENELOPE-1999, MCNP-4C and PITS, for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall model of a cell is a water cylinder equivalent for the three codes, but with a different internal scoring geometry: hollow cylinders for PENELOPE and MCNP, whereas spheres are used for the PITS code. A cylindrical cell geometry with scoring volumes in the shape of hollow cylinders was initially selected for PENELOPE and MCNP because of its superior representation of the actual shape and dimensions of a cell and for its improved computer-time efficiency compared to spherical internal volumes. Some of the energy-transfer points that constitute a radiation track may actually fall in the space between spheres, outside the spherical scoring volumes. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time when using this code compared with event-by-event Monte Carlo codes like PITS. This preliminary work has been important for addressing dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation, even with such small geometries and energies involved, which are far below the normal use for which the code was created. Further work (initiated in Summer 2002) is still needed, however, to create a user-code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes and also microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.

Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.

2002-09-11

365

NASA Astrophysics Data System (ADS)

Particle filters (PFs) have become popular for assimilation of a wide range of hydrologic variables in recent years. With this increased use, it has become necessary to increase the applicability of this technique for use in complex hydrologic/land surface models and to make these methods more viable for operational probabilistic prediction. To make the PF a more suitable option in these scenarios, it is necessary to improve the reliability of these techniques. Improved reliability in the PF is achieved in this work through an improved parameter search, with the use of variable variance multipliers and Markov Chain Monte Carlo methods. Application of these methods to the PF allows for greater search of the posterior distribution, leading to more complete characterization of the posterior distribution and reducing risk of sample impoverishment. This leads to a PF that is more efficient and provides more reliable predictions. This study introduces the theory behind the proposed algorithm, with application on a hydrologic model. Results from both real and synthetic studies suggest that the proposed filter significantly increases the effectiveness of the PF, with marginal increase in the computational demand for hydrologic prediction.
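The propagate-reweight-resample cycle at the heart of the particle filter can be sketched as follows; the state model, likelihood, and all numbers are toy assumptions, not taken from the hydrologic study, and the MCMC move step the abstract adds is omitted:

```python
import math
import random

def sir_step(particles, weights, propagate, loglik, rng):
    """One sequential-importance-resampling step: propagate each
    particle, reweight by the observation likelihood, then resample
    to fight the degeneracy/impoverishment the abstract targets."""
    particles = [propagate(p, rng) for p in particles]
    logw = [math.log(w) + loglik(p) for p, w in zip(particles, weights)]
    m = max(logw)                               # stabilize the exponentials
    w = [math.exp(lw - m) for lw in logw]
    s = sum(w)
    w = [x / s for x in w]
    particles = rng.choices(particles, weights=w, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)

rng = random.Random(0)
parts = [rng.gauss(0.0, 2.0) for _ in range(500)]   # broad prior
wts = [1.0 / 500] * 500
# Assimilate the same toy observation y = 1.0 ten times:
for _ in range(10):
    parts, wts = sir_step(parts, wts,
                          lambda p, r: p + r.gauss(0.0, 0.1),  # random walk
                          lambda p: -0.5 * (p - 1.0) ** 2,     # N(1,1) obs
                          rng)
est = sum(parts) / len(parts)
```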

Moradkhani, Hamid; Dechant, Caleb M.; Sorooshian, Soroosh

2012-12-01

366

High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to get reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-unmatched boundaries from the MC simulation, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging. PMID:21361702

Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming

2011-02-01

367

NASA Astrophysics Data System (ADS)

High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to get reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-unmatched boundaries from the MC simulation, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.

Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming

2011-02-01

368

Load displacement analysis of drilled shafts can be accomplished by utilizing the t-z method, which models soil resistance along the length and tip of the drilled shaft as a series of springs. For non-linear soil springs, the governing differential equation that describes the soil-structure interaction may be discretized into a set of algebraic equations based upon finite difference methods. This

Anil Misra; Lance A. Roberts; Steven M. Levorson

2007-01-01

369

NASA Astrophysics Data System (ADS)

An essential component of a quantitative landslide hazard assessment is establishing the extent of the endangered area. This task requires accurate prediction of the run-out behaviour of a landslide, which includes the estimation of the run-out distance, run-out width, velocities, pressures, and depth of the moving mass and the final configuration of the deposits. One approach to run-out modelling is to reproduce accurately the dynamics of the propagation processes. A number of dynamic numerical models are able to compute the movement of the flow over irregular topographic terrains (3-D) controlled by a complex interaction between mechanical properties that may vary in space and time. Given the number of unknown parameters and the fact that most of the rheological parameters cannot be measured in the laboratory or field, the parametrization of run-out models is very difficult in practice. For this reason, the application of run-out models is mostly used for back-analysis of past events and very few studies have attempted to achieve forward predictions. Consequently all models are based on simplified descriptions that attempt to reproduce the general features of the failed mass motion through the use of parameters (mostly controlling shear stresses at the base of the moving mass) which account for aspects not explicitly described or oversimplified. The uncertainties involved in the run-out process have to be approached in a stochastic manner. It is of significant importance to develop methods for quantifying and properly handling the uncertainties in dynamic run-out models, in order to allow a more comprehensive approach to quantitative risk assessment. A method was developed to compute the variation in run-out intensities by using a dynamic run-out model (MassMov2D) and a probabilistic framework based on a Monte Carlo simulation in order to analyze the effect of the uncertainty of input parameters. 
The probability density functions of the rheological parameters were generated and sampled leading to a large number of run-out scenarios. In the application of the Monte Carlo method, random samples were generated from the input probability distributions that fitted a Gaussian copula distribution. Each set of samples was used as input to model simulation and the resulting outcome was a spatially displayed intensity map. These maps were created with the results of the probability density functions at each point of the flow track and the deposition zone, having as an output a confidence probability map for the various intensity measures. The goal of this methodology is that the results (in terms of intensity characteristics) can be linked directly to vulnerability curves associated to the elements at risk.
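Sampling correlated parameters through a Gaussian copula, as described above, can be sketched in a bivariate case; the correlation value is an illustrative assumption, and the uniforms returned would in practice be pushed through the inverse CDFs of the rheological-parameter marginals:

```python
import math
import random

def std_normal_cdf(x):
    """CDF of the standard normal, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_copula_pairs(rho, n, seed=0):
    """Draw n correlated (u1, u2) pairs on [0,1]^2 through a bivariate
    Gaussian copula with correlation rho: sample correlated normals,
    then map each through the normal CDF to get uniform marginals."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        out.append((std_normal_cdf(z1), std_normal_cdf(z2)))
    return out

pairs = gaussian_copula_pairs(0.8, 5000)
```

Each component of a pair is marginally uniform, so any parameter distribution can be imposed afterwards while the copula preserves the dependence structure.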

Cepeda, Jose; Luna, Byron Quan; Nadim, Farrokh

2013-04-01

370

The structure of axisymmetrical dc planar magnetron discharge is clarified by the use of the particle-in-cell/Monte Carlo method. The magnetron has two concentric cylindrical magnets behind the cathode. Here the effects of magnetization in the magnets, M, and the emission coefficient of secondary electrons, γ

S. Kondo; K. Nanbu

1999-01-01

371

The Guide to Uncertainty of Measurement (GUM) approach and the adaptive Monte Carlo method (MCM) provide two alternative approaches for the propagation stage of the uncertainty estimation. These two approaches are implemented and compared concerning the 95% coverage interval estimation of the measurement of Gross Heat of Combustion (GHC) of an automotive diesel fuel by bomb calorimetry. The GUM approach,

D. Theodorou; Y. Zannikou; G. Anastopoulos; F. Zannikos

2011-01-01

372

A direct simulation Monte Carlo (DSMC) method is used to solve a slider air bearing problem when isolated contacts occur. A flat slider with a spherical asperity underneath is simulated when the tip of the asperity contacts a plate moving at constant speed. Two-dimensional pressure profiles are obtained by using the ideal gas law. The air bearing force on the

Weidong Huang; D. B. Bogy

1998-01-01

373

Direct simulation Monte Carlo (DSMC): A numerical method for transition-regime flows-A review

In fluid flow situations, the Navier-Stokes equations cannot adequately describe the transition and the free molecular regimes. In these regimes, the Boltzmann equation of kinetic theory is invoked to govern the flows. But this equation cannot be solved easily, neither by analytical techniques nor by numerical methods. Hence, in order to manoeuvre around this equation, Bird introduced

P. S. PRASANTH; JOSE K. KAKKASSERY

2006-01-01

374

A Monte Carlo evaluation of the channel ratio scatter correction method

Several methods exist to eliminate the contribution of scattered photons during imaging. One of these, the channel ratio (CR) scatter correction technique, uses the change in the ratio of counts from two symmetrical adjacent energy windows straddling the energy photopeak. The accuracy of the results depends upon the assumption that the ratio of the scatter components in the two windows
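A minimal sketch of the channel-ratio idea, under the usual assumptions that primary counts split equally between the two symmetric windows and that the scatter ratio k is known from calibration; the concrete formula below is a standard derivation under those assumptions, not quoted from the paper:

```python
def channel_ratio_primary(lower, upper, k):
    """Channel-ratio scatter correction (sketch).
    Window equations: lower = P/2 + S_l, upper = P/2 + S_u,
    with scatter ratio S_l = k * S_u assumed constant.
    Eliminating the scatter terms gives total primary counts P."""
    return 2.0 * (lower - k * upper) / (1.0 - k)

# Consistency check with synthetic counts: P = 1000, S_u = 200, k = 1.5.
P, Su, k = 1000.0, 200.0, 1.5
lower, upper = P / 2 + k * Su, P / 2 + Su
print(channel_ratio_primary(lower, upper, k))
```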

Hanlie Naudé; Andries van Aswegen; Charles P. Herbst; Mattheus G. Lötter; P. Hendrik Pretorius

1996-01-01

375

The purpose of this paper is to quantify uncertainties of fuel pin cell or fuel assembly (FA) homogenized few group diffusion theory constants generated from the B1 theory-augmented Monte Carlo (MC) method. A mathematical formulation of the first kind is presented to quantify uncertainties of the few group constants in terms of the two major sources of uncertainty in the MC method: statistical uncertainties and uncertainties in the nuclear cross section and nuclide number density input data. The formulation is incorporated into the Seoul National Univ. MC code McCARD. It is then used to compute the uncertainties of the burnup-dependent homogenized two group constants of a low-enriched UO2 fuel pin cell and a PWR FA on the condition that nuclear cross section input data of U-235 and U-238 from the JENDL 3.3 library and nuclide number densities from the solution to fuel depletion equations have uncertainties. The contribution of the MC input data uncertainties to the uncertainties of the two group constants of the two fuel systems is separated from that of the statistical uncertainties. The utility of these uncertainty quantifications is then discussed from the standpoints of safety analysis of existing power reactors, development of new fuel or reactor system designs, and improvement of covariance files of the evaluated nuclear data libraries. (authors)

Park, H. J. [Korea Atomic Energy Research Inst., Daedeokdaero 989-111, Yuseong-gu, Daejeon (Korea, Republic of); Shim, H. J.; Joo, H. G.; Kim, C. H. [Dept. of Nuclear Engineering, Seoul National Univ., 1 Gwanak-ro, Gwanak-gu, Seoul (Korea, Republic of)

2012-07-01

376

Accelerated kinetics of amorphous silicon using an on-the-fly off-lattice kinetic Monte-Carlo method

NASA Astrophysics Data System (ADS)

The time evolution of a series of well relaxed amorphous silicon models was simulated using the kinetic Activation-Relaxation Technique (kART), an on-the-fly off-lattice kinetic Monte Carlo method [1]. This novel algorithm uses the ART nouveau algorithm to generate activated events and links them with local topologies. It was shown to work well for crystals with few defects, but this is the first time it is used to study an amorphous material. A parallel implementation allows us to increase the speed of the event generation phase. After each KMC step, new searches are initiated for each new topology encountered. Well relaxed amorphous silicon models of 1000 atoms described by a modified version of the empirical Stillinger-Weber potential [2] were used as a starting point for the simulations. Initial results show that the method is faster by orders of magnitude compared to conventional MD simulations up to temperatures of 500 K. Vacancy-type defects were also introduced in this system and their stability and lifetimes are calculated. [1] El-Mellouhi et al., Phys. Rev. B 78, 153202 (2008). [2] Vink et al., J. Non-Cryst. Sol. 282, 248 (2001).
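The event-selection step shared by kinetic Monte Carlo methods such as kART can be sketched with the standard residence-time (BKL) algorithm; the rates below are arbitrary illustrative numbers, not barriers from the amorphous silicon study:

```python
import math
import random

def kmc_step(rates, rng):
    """One residence-time KMC step: pick an event with probability
    proportional to its rate, and advance the simulation clock by an
    exponentially distributed increment with mean 1/sum(rates)."""
    total = sum(rates)
    r = rng.random() * total
    acc, chosen = 0.0, len(rates) - 1
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            chosen = i
            break
    dt = -math.log(rng.random()) / total   # time to next event
    return chosen, dt

rng = random.Random(0)
event, dt = kmc_step([1.0, 3.0], rng)      # two events, rates 1 and 3
```

In kART the rate list would be rebuilt on the fly from the activated events found for each local topology.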

Joly, Jean-Francois; El-Mellouhi, Fedwa; Beland, Laurent Karim; Mousseau, Normand

2011-03-01

377

X-ray microanalysis by analytical electron microscopy (AEM) has proven to be a powerful tool for characterizing the spatial distribution of solute elements in materials. True compositional variations over spatial scales smaller than the actual resolution for microanalysis can be determined if the measured composition profile is deconvoluted. Explicit deconvolutions of such data, via conventional techniques such as Fourier transforms, are not possible due to statistical noise in AEM microanalytical data. Hence, the method of choice is to accomplish the deconvolution via iterative convolutions. In this method, a function describing the assumed true composition profile, calculated by physically permissible thermodynamic and kinetic modeling, is convoluted with the x-ray generation function and the result compared to the measured composition profile. If the measured and calculated profiles agree within experimental error, it is assumed that the true compositional profile has been determined. If the measured and calculated composition profiles are in disagreement, the assumptions in the physical model are adjusted and the convolution process repeated. To employ this procedure it is necessary to calculate the x-ray generation function explicitly. While a variety of procedures are available for calculating this function, the most accurate procedure is to use Monte Carlo modeling of electron scattering.
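The forward step of the iterative-convolution procedure described above, convolving an assumed composition profile with a broadening function and scoring the misfit against the measurement, can be sketched as follows; the kernel and profile are illustrative, not an actual x-ray generation function:

```python
def convolve(profile, kernel):
    """Discrete convolution of an assumed composition profile with a
    broadening kernel (standing in for the x-ray generation function);
    contributions outside the profile are truncated at the edges."""
    half = len(kernel) // 2
    out = []
    for i in range(len(profile)):
        s = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(profile):
                s += k * profile[idx]
        out.append(s)
    return out

def misfit(measured, calculated):
    """Sum-of-squares agreement criterion; the iteration would adjust
    the model profile until this falls within experimental error."""
    return sum((m - c) ** 2 for m, c in zip(measured, calculated))

# A sharp step profile is smeared by a normalized 3-point kernel.
step = [0.0] * 5 + [1.0] * 5
smeared = convolve(step, [0.25, 0.5, 0.25])
```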

Romig, A.D. Jr.; Plimpton, S.J.; Michael, J.R. (Sandia National Labs., Albuquerque, NM (USA)); Myklebust, R.L.; Newbury, D.E. (National Inst. of Standards and Technology, Gaithersburg, MD (USA))

1990-01-01

378

Monte Carlo Small-Sample Perturbation Calculations.

National Technical Information Service (NTIS)

Two different Monte Carlo methods have been developed for benchmark computations of small-sample-worths in simplified geometries. The first is basically a standard Monte Carlo perturbation method in which neutrons are steered towards the sample by roulett...

U. Feldman; E. Gelbard; R. Blomquist

1983-01-01

379

In biomedicine, Monte Carlo simulation is commonly used to simulate light diffusion in tissue. However, most previous studies did not consider a radial beam LED as the light source. Therefore, we considered the characteristics of a radial beam LED and applied them in MC simulation as the light source. In this paper, we consider three characteristics of a radial beam LED. The first is the initial launch area of photons. The second is the incident angle of a photon at the initial photon launching area. The third is the refraction effect arising from the contact area between the LED and a turbid medium. For verification of the MC simulation, we compared simulation and experimental results. The average correlation coefficient between simulation and experimental results is 0.9954. Through this study, we show an effective method to simulate light diffusion in tissue based on MC simulation with the characteristics of a radial beam LED. PMID:24109615

Song, Sangha; Elgezua, Inko; Kobayashi, Yo; Fujie, Masakatsu G

2013-07-01

380

NASA Astrophysics Data System (ADS)

Health prognosis of equipment is considered a key process of the condition-based maintenance strategy. It contributes to reducing the related risks and maintenance costs of equipment and to improving its availability, reliability and security. However, equipment often operates under dynamically changing operational and environmental conditions, and its lifetime is generally described by monitored nonlinear time-series data. Equipment is subject to high levels of uncertainty and unpredictability, so effective methods for its online health prognosis are still needed. This paper addresses prognostic methods based on a hidden semi-Markov model (HSMM) using the sequential Monte Carlo (SMC) method. The HSMM is applied to obtain the transition probabilities among health states and the state durations. The SMC method is adopted to describe the probability relationships between health states and the monitored observations of equipment. This paper proposes a novel multi-step-ahead health recognition algorithm based on the joint probability distribution to recognize the health states of equipment and its health state change point. A new online health prognostic method is also developed to estimate the residual useful lifetime (RUL) of equipment. At the end of the paper, a real case study is used to demonstrate the performance and potential applications of the proposed methods for online health prognosis of equipment.

Liu, Qinming; Dong, Ming; Peng, Ying

2012-10-01

381

Simulation of 12C+12C elastic scattering at high energy by using the Monte Carlo method

NASA Astrophysics Data System (ADS)

The Monte Carlo method is used to simulate the 12C+12C reaction process. Taking into account the size of the incident 12C beam spot and the thickness of the 12C target, the distributions of scattered 12C on the MWPC and the CsI detectors at a given detection distance have been simulated. In order to separate elastic scattering from inelastic scattering with 4.4 MeV excitation energy, we varied several parameters: the kinetic energy of the incident 12C, the thickness of the 12C target, the ratio of the excited state, the wire spacing of the MWPC, the energy resolution of the CsI detector and the time resolution of the plastic scintillator. From the simulation results, a preliminary design of the experimental system can be determined: the beam spot size of the incident 12C is φ5 mm, the incident kinetic energy is 200-400 A MeV, the target thickness is 2 mm, the ratio of the excited state is 20%, the flight distance of scattered 12C is 3 m, the energy resolution of the CsI detectors is 1%, the time resolution of the plastic scintillator is 0.5%, and the size of the CsI detectors is 7 cm×7 cm; we need at least 16 CsI detectors to cover the 0° to 5° angular distribution.

Guo, Chen-Lei; Zhang, Gao-Long; Tanihata, I.; Le, Xiao-Yun

2012-03-01

382

Concentrated purchasing patterns of plug-in vehicles may result in localized distribution transformer overload scenarios. Prolonged periods of transformer overloading cause service life decrements and, in worst-case scenarios, result in tripped thermal relays and residential service outages. This analysis will review distribution transformer load models developed in the IEC 60076 standard, and apply the model to a neighborhood with plug-in hybrids. Residential distribution transformers are sized such that night-time cooling provides thermal recovery from heavy load conditions during the daytime utility peak. It is expected that PHEVs will primarily be charged at night in a residential setting. If not managed properly, some distribution transformers could become overloaded, leading to a reduction in transformer life expectancy, thus increasing costs to utilities and consumers. A Monte Carlo scheme simulated each day of the year, evaluating 100 load scenarios as it swept through the following variables: number of vehicles per transformer, transformer size, and charging rate. A general method for determining the expected transformer aging rate will be developed, based on the energy needs of plug-in vehicles loading a residential transformer.
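A sketch of the transformer-aging Monte Carlo described above, using the IEC 60076-7 relative aging rate for non-thermally-upgraded paper (doubling for every 6 K above the 98 degC reference hot-spot); the load-to-hot-spot mapping and every number below are illustrative assumptions, not values from the study:

```python
import random

def aging_factor(hotspot_c):
    """IEC 60076-7 relative aging rate for non-thermally-upgraded
    paper: V = 2 ** ((theta_h - 98) / 6)."""
    return 2.0 ** ((hotspot_c - 98.0) / 6.0)

def expected_daily_aging(base_hotspot_c, ev_rise_c, n_evs_dist,
                         trials=10000, seed=0):
    """Monte Carlo sweep in the spirit of the study: sample how many
    vehicles charge on the transformer each night and average the
    resulting aging rate. The per-EV hot-spot rise is a hypothetical
    linear stand-in for the full thermal model."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        n_evs = rng.choice(n_evs_dist)          # sampled EV count
        total += aging_factor(base_hotspot_c + n_evs * ev_rise_c)
    return total / trials

print(expected_daily_aging(92.0, 4.0, [0, 1, 1, 2]))
```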

Kuss, M.; Markel, T.; Kramer, W.

2011-01-01

383

We have developed a "red blood cell (RBC)-photon simulator" to reveal optical propagation in prethrombus blood for various levels of RBC density and aggregation. The simulator investigates optical propagation in prethrombus blood and will be applied to detect it noninvasively for thrombosis prevention at an earlier stage. In our simulator, Lambert-Beer's law is employed to simulate the absorption of RBCs with hemoglobin, while the Monte Carlo method is applied to simulate scattering through iterative calculations. One advantage of our simulator is that concentrations and distributions of RBCs can be arbitrarily chosen to represent prethrombus blood, which conventional models cannot do. Using the simulator, we found that various levels of RBC density and aggregation have different effects on the optical propagation of near-infrared light in blood. The same effects were observed in in vitro experiments with 12 bovine blood samples, which were performed to evaluate the simulator. We measured RBC density using the clinical hematocrit index and RBC aggregation using activated whole blood clotting time. The experimental results correspond well to the simulator results. Therefore, we could show that our simulator exhibits the correct optical propagation for prethrombus blood and is applicable to prethrombus detection using multiple detectors. PMID:21342854
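The two ingredients the simulator combines, Lambert-Beer absorption along the photon path and exponentially distributed free paths between scattering events, can be sketched as follows; the absorption and scattering coefficients are illustrative, not the paper's:

```python
import math
import random

def photon_weight_after_path(mu_a, step_lengths):
    """Lambert-Beer absorption along a scattered photon path: the
    surviving weight is exp(-mu_a * total path length)."""
    return math.exp(-mu_a * sum(step_lengths))

def sample_free_path(mu_s, rng):
    """Monte Carlo scattering step: free path lengths between
    scattering events are exponentially distributed, mean 1/mu_s."""
    return -math.log(rng.random()) / mu_s

rng = random.Random(0)
path = [sample_free_path(10.0, rng) for _ in range(20)]  # mu_s = 10 /cm
w = photon_weight_after_path(0.5, path)                  # mu_a = 0.5 /cm
```

Raising the effective absorption with RBC density, or the scattering with aggregation, changes the surviving weight and hence the detected intensity, which is the effect the abstract reports.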

Oshima, Shiori; Sankai, Yoshiyuki

2011-02-22

384

Two-dimensional spatial intensity distributions of diffuse scattering of near-infrared laser radiation from a strongly scattering medium, whose optical properties are close to those of skin, are obtained using Monte Carlo simulation. The medium contains a cylindrical inhomogeneity with the optical properties, close to those of blood. It is shown that stronger absorption and scattering of light by blood compared to the surrounding medium leads to the fact that the intensity of radiation diffusely reflected from the surface of the medium under study and registered at its surface has a local minimum directly above the cylindrical inhomogeneity. This specific feature makes the method of spatially-resolved reflectometry potentially applicable for imaging blood vessels and determining their sizes. It is also shown that blurring of the vessel image increases almost linearly with increasing vessel embedment depth. This relation may be used to determine the depth of embedment provided that the optical properties of the scattering media are known. The optimal position of the sources and detectors of radiation, providing the best imaging of the vessel under study, is determined. (biophotonics)

Bykov, A V; Priezzhev, A V; Myllylae, Risto A

2011-06-30

385

NASA Astrophysics Data System (ADS)

We calculate the ground-state properties of an unpolarized two-component Fermi gas with the aid of the diffusion quantum Monte Carlo (DMC) methods. Using an extrapolation to the zero effective range of the attractive two-particle interaction, we find E/Efree in the unitary limit to be 0.212(2), 0.407(2), 0.409(3), and 0.398(3) for 4, 14, 38, and 66 atoms, respectively. Our calculations indicate that the dependence of the total energy on the effective range of the interaction Reff is sizable and the extrapolation to Reff=0 is therefore important for reaching the true unitary limit. To test the quality of nodal surfaces and to estimate the impact of the fixed-node approximation, we perform released-node DMC calculations for 4 and 14 atoms. Analysis of the released-node and the fixed-node results suggests that the main sources of the fixed-node errors are long-range correlations, which are difficult to sample in the released-node approaches due to the fast growth of the bosonic noise. Besides energies, we evaluate the two-body density matrix and the condensate fraction. We find that the condensate fraction for the 66-atom system converges to 0.56(1) after the extrapolation to the zero interaction range.
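The zero-range extrapolation the abstract relies on is, in its simplest form, a least-squares fit of the energy against the effective range, evaluated at zero; the sketch below uses made-up data, not the paper's DMC energies:

```python
def extrapolate_to_zero(x, y):
    """Least-squares line y = a + b*x; returns the intercept a,
    i.e. the value extrapolated to x = 0 (here, R_eff = 0)."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    return (sy - b * sx) / n                        # intercept

# Synthetic E/E_free values rising linearly with effective range:
print(extrapolate_to_zero([0.1, 0.2, 0.3], [0.42, 0.44, 0.46]))
```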

Li, Xin; Kolorenč, Jindřich; Mitas, Lubos

2011-08-01

386

NASA Astrophysics Data System (ADS)

We implemented the simplified Monte Carlo (SMC) method on a graphics processing unit (GPU) architecture under the compute unified device architecture (CUDA) platform developed by NVIDIA. The GPU-based SMC was clinically applied for four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to computation time and discrepancy. In the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar, within statistical errors. The GPU-based SMC showed 12.30-16.00 times faster performance than the CPU-based SMC. The computation time per beam arrangement using the GPU-based SMC for the clinical cases ranged from 9 to 67 s. The results demonstrate the successful application of the GPU-based SMC to clinical proton treatment planning.

Kohno, R.; Hotta, K.; Nishioka, S.; Matsubara, K.; Tansho, R.; Suzuki, T.

2011-11-01

387

We calculate the ground-state properties of an unpolarized two-component Fermi gas with the aid of the diffusion quantum Monte Carlo (DMC) methods. Using an extrapolation to the zero effective range of the attractive two-particle interaction, we find E/Efree in the unitary limit to be 0.212(2), 0.407(2), 0.409(3), and 0.398(3) for 4, 14, 38, and 66 atoms, respectively. Our calculations indicate that the dependence of the total energy on the effective range of the interaction Reff is sizable and the extrapolation to Reff=0 is therefore important for reaching the true unitary limit. To test the quality of nodal surfaces and to estimate the impact of the fixed-node approximation, we perform released-node DMC calculations for 4 and 14 atoms. Analysis of the released-node and the fixed-node results suggests that the main sources of the fixed-node errors are long-range correlations, which are difficult to sample in the released-node approaches due to the fast growth of the bosonic noise. Besides energies, we evaluate the two-body density matrix and the condensate fraction. We find that the condensate fraction for the 66-atom system converges to 0.56(1) after the extrapolation to the zero interaction range.

Li Xin; Mitas, Lubos [Department of Physics, North Carolina State University, Raleigh, North Carolina 27695 (United States); Kolorenc, Jindrich [Department of Physics, North Carolina State University, Raleigh, North Carolina 27695 (United States); Institut fuer Theoretische Physik, Universitaet Hamburg, Jungiusstrasse 9, 20355 Hamburg (Germany)

2011-08-15

388

Study of Phase Equilibria of Petrochemical Fluids using Gibbs Ensemble Monte Carlo Methods

NASA Astrophysics Data System (ADS)

Knowledge of the phase behavior of hydrocarbons and related compounds is of great interest to the chemical and petrochemical industries, for example in the design of processes such as supercritical fluid extraction, petroleum refining, enhanced oil recovery, gas treatment, and fractionation of wax products. A precise knowledge of the phase equilibria of alkanes, alkenes and related compounds and their mixtures is required for efficient design of these processes. Experimental studies to understand the related phase equilibria often become unsuitable for various reasons. With the advancement of simulation technology, molecular simulations can provide a useful complement and alternative in the study and description of the phase behavior of these systems. In this work we study vapor-liquid phase equilibria of pure hydrocarbons and their mixtures using Gibbs ensemble simulation. Insertion of long and articulated chain molecules is facilitated in our simulations by means of configurational-bias and expanded-ensemble methods. We use the newly developed NERD force field in our simulations. In this work the NERD force field is extended to provide coverage for hydrocarbons of any arbitrary architecture. Our simulation results provide excellent quantitative agreement with available experimental phase equilibria data for both pure components and mixtures.

Nath, Shyamal

2001-03-01

389

An electrothermal Monte Carlo (MC) method is applied in this paper to investigate electron transport in submicrometer wurtzite GaN/AlGaN high-electron mobility transistors (HEMTs) grown on various substrate materials including SiC, Si, GaN, and sapphire. The simulation method is an iterative technique that alternately runs an MC electronic simulation and solves the heat diffusion equation using an analytical thermal resistance matrix

Toufik Sadi; Robert W. Kelsall; Neil J. Pilgrim

2006-01-01

390

NASA Astrophysics Data System (ADS)

Recent upgrades of the MCNPX Monte Carlo code include transport of heavy ions. We employed the new code to simulate the energy and dose distributions produced by carbon beams in a rabbit's head in and around a brain tumor. The work was within our experimental technique of interlaced carbon microbeams, which uses two 90° arrays of parallel, thin planes of carbon beams (microbeams) interlacing to produce a solid beam at the target. A similar version of the method was earlier developed with synchrotron-generated x-ray microbeams. We first simulated the Bragg peak in high-density polyethylene and other materials, where we could compare the calculated carbon energy deposition to the measured data produced at the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL). The results showed that the new MCNPX code gives a reasonable account of the carbon beam's dose up to ~200 MeV/nucleon beam energy. At higher energies, which were not relevant to our project, the model failed to reproduce the extent of the increasing nuclear breakup tail beyond the Bragg peak. In our model calculations we determined the dose distribution along the beam path, including the angular straggling of the microbeams, and used the data to determine the optimal values of beam spacing in the array for producing adequate beam interlacing at the target. We also determined, for the purpose of Bragg-peak spreading at the target, the relative beam intensities of the consecutive exposures with stepwise lower beam energies, and simulated the resulting dose distribution in the spread-out Bragg peak. The details of the simulation methods used and the results obtained are presented.

Dioszegi, I.; Rusek, A.; Dane, B. R.; Chiang, I. H.; Meek, A. G.; Dilmanian, F. A.

2011-06-01

391

Recent upgrades of the MCNPX Monte Carlo code include transport of heavy ions. We employed the new code to simulate the energy and dose distributions produced by carbon beams in a rabbit's head in and around a brain tumor. The work was within our experimental technique of interlaced carbon microbeams, which uses two 90° arrays of parallel, thin planes of carbon beams (microbeams) interlacing to produce a solid beam at the target. A similar version of the method was earlier developed with synchrotron-generated x-ray microbeams. We first simulated the Bragg peak in high-density polyethylene and other materials, where we could compare the calculated carbon energy deposition to the measured data produced at the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL). The results showed that the new MCNPX code gives a reasonable account of the carbon beam's dose up to ~200 MeV/nucleon beam energy. At higher energies, which were not relevant to our project, the model failed to reproduce the extent of the increasing nuclear breakup tail beyond the Bragg peak. In our model calculations we determined the dose distribution along the beam path, including the angular straggling of the microbeams, and used the data to determine the optimal values of beam spacing in the array for producing adequate beam interlacing at the target. We also determined, for the purpose of Bragg-peak spreading at the target, the relative beam intensities of the consecutive exposures with stepwise lower beam energies, and simulated the resulting dose distribution in the spread-out Bragg peak. The details of the simulation methods used and the results obtained are presented.

Dioszegi, I. [Nonproliferation and National Security Department, Brookhaven National Laboratory, Upton, New York 11973 (United States); Rusek, A.; Chiang, I. H. [NASA Space Radiation Laboratory, Brookhaven National Laboratory, Upton, NY 11973 (United States); Dane, B. R. [Medical School, State University of New York at Stony Brook, Stony Brook, NY 11794 (United States); Meek, A. G. [Department of Radiation Oncology, State University of New York at Stony Brook, Stony Brook, NY 11794 (United States); Dilmanian, F. A. [Department of Radiation Oncology, State University of New York at Stony Brook, Stony Brook, NY 11794 (United States); Medical Department, Brookhaven National Laboratory, Upton, NY 11973 (United States)

2011-06-01

392

NASA Astrophysics Data System (ADS)

Straightness error is an important parameter in measuring high-precision shafts. The new-generation geometrical product specification (GPS) requires that the measurement uncertainty characterizing the reliability of a result be given together with the measurement result itself. Most research on straightness to date has focused on error calculation, and only a few studies evaluate the measurement uncertainty based on the Guide to the Expression of Uncertainty in Measurement (GUM). In order to compute the spatial straightness error (SSE) accurately and rapidly and to overcome the limitations of the GUM, a quasi-particle swarm optimization (QPSO) is proposed to solve the minimum-zone SSE, and the Monte Carlo method (MCM) is developed to estimate the measurement uncertainty. The mathematical model of the minimum-zone SSE is formulated. In QPSO, quasi-random sequences are applied to the generation of the initial positions and velocities of the particles, and the velocities are modified by the constriction-factor approach. The flow of measurement uncertainty evaluation based on the MCM is proposed, the core of which is repeated sampling from the probability density function (PDF) of every input quantity and evaluation of the model in each case. The minimum-zone SSE of a shaft measured on a coordinate measuring machine (CMM) is calculated by QPSO, and the measurement uncertainty is evaluated by the MCM on the basis of an analysis of the uncertainty contributors. The results show that the uncertainty directly influences the product judgment result. It is therefore scientific and reasonable to consider the influence of the uncertainty in judging whether parts are accepted or rejected, especially for those located in the uncertainty zone. The proposed method is especially suitable when the PDF of the measurand cannot adequately be approximated by a Gaussian distribution or a scaled and shifted t-distribution and the measurement model is non-linear.
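
The MCM procedure described here — repeated sampling from each input PDF followed by model evaluation — can be sketched in a few lines. The model and input distributions below are illustrative assumptions, not those of the shaft measurement.

```python
import random
import statistics

def mcm_uncertainty(model, samplers, trials=100_000, seed=1):
    """GUM-Supplement-1-style Monte Carlo propagation: draw each input from
    its assumed PDF, evaluate the model, and summarize the output sample.
    Each sampler is a callable taking the shared random generator."""
    rng = random.Random(seed)
    outputs = sorted(model(*[s(rng) for s in samplers]) for _ in range(trials))
    mean = statistics.fmean(outputs)
    u = statistics.stdev(outputs)          # standard uncertainty
    lo = outputs[int(0.025 * trials)]      # endpoints of a 95% coverage interval
    hi = outputs[int(0.975 * trials)]
    return mean, u, (lo, hi)

# Hypothetical non-linear measurement model z = x * y:
mean, u, (lo, hi) = mcm_uncertainty(
    lambda x, y: x * y,
    [lambda r: r.uniform(9.0, 11.0),   # x ~ U(9, 11)
     lambda r: r.gauss(2.0, 0.1)],     # y ~ N(2, 0.1**2)
)
```

Because the coverage interval is read off the empirical distribution, no Gaussian assumption about the output is needed, which is precisely the advantage claimed over the GUM framework.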

Wen, Xiulan; Xu, Youxiong; Li, Hongsheng; Wang, Fenglin; Sheng, Danghong

2012-09-01

393

NASA Astrophysics Data System (ADS)

Two ways of simulating statistically the propagation of laser radiation in dispersive media by the Monte-Carlo method are compared. The first approach can be called corpuscular because it is based on the calculation of random photon trajectories, while the second one can be referred to as the wave approach because it is based on the calculation of characteristics of random wave fields. It is shown that, although these approaches are based on different physical concepts of radiation scattering by particles, they yield almost equivalent results for the intensity of a restricted beam in a dispersive medium. However, there exist some differences. The corpuscular Monte-Carlo method does not reproduce the diffraction divergence of the beam, which can be taken into account by introducing the diffraction factor. The wave method does not consider backscattering, which corresponds to the quasi-optical approximation.
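
The corpuscular approach rests on drawing random photon trajectories; its elementary step — sampling the free path to the next scattering event — can be sketched as follows. The scattering coefficient value is an illustrative assumption.

```python
import math
import random

def sample_free_path(mu_s, rng):
    """One corpuscular Monte Carlo step: the photon path length to the next
    scattering event, drawn from p(l) = mu_s * exp(-mu_s * l) by the
    inverse-transform method; mu_s is the scattering coefficient."""
    return -math.log(1.0 - rng.random()) / mu_s  # 1 - xi avoids log(0)

# The sample mean approaches the mean free path 1/mu_s.
rng = random.Random(0)
mu_s = 10.0  # mm^-1, an illustrative value
mean_path = sum(sample_free_path(mu_s, rng) for _ in range(100_000)) / 100_000
```

A full trajectory alternates such free-path draws with random scattering-angle draws from the phase function; the wave approach replaces this particle picture with statistics of random wave fields.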

Kandidov, V. P.; Militsin, V. O.; Bykov, A. V.; Priezzhev, A. V.

2006-11-01

394

This study refines risk analysis procedures for trichloroethylene (TCE) using a physiologically based pharmacokinetic (PBPK) model in conjunction with the Monte Carlo method. The Monte Carlo method is used to generate random sets of model parameters, based on their means, variances, and distribution types. The procedure generates a range of exposure values for a human excess lifetime cancer risk of 1x10^-6, based on the upper and lower bounds and the mean of a 95% confidence interval. Risk ranges were produced for both ingestion and inhalation exposures. Results are presented in a graphical format to reduce reliance on qualitative discussions of uncertainty. A sensitivity analysis of the model was also performed. This method produced acceptable TCE exposures, based on the total amount of TCE metabolized, greater than the Environmental Protection Agency's (EPA) values by a factor of 23 for inhalation and a factor of 1.6 for ingestion. The sensitive parameters identified were the elimination rate constant, alveolar ventilation rate, and cardiac output. This procedure quantifies the uncertainty related to natural variations in parameter values. Its incorporation into risk assessment could be used to promulgate, and better present, more realistic standards. Keywords: risk analysis; physiologically based pharmacokinetics (PBPK); trichloroethylene; Monte Carlo method.
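
A sensitivity analysis of the kind mentioned here can be sketched with a simple one-at-a-time perturbation. The toy model and parameter names below are hypothetical stand-ins, not the actual PBPK model.

```python
def sensitivity_ranking(model, baseline, delta=0.1):
    """One-at-a-time sensitivity sketch: perturb each parameter by +10% and
    rank parameters by the normalized change in the model output."""
    base_out = model(baseline)
    scores = {}
    for name, value in baseline.items():
        perturbed = dict(baseline)
        perturbed[name] = value * (1.0 + delta)
        scores[name] = abs(model(perturbed) - base_out) / (abs(base_out) * delta)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical toy surrogate for "amount metabolized" (not the PBPK model):
toy = lambda p: p["k_elim"] ** 2 * p["q_alv"] / (p["q_alv"] + p["q_card"])
ranking = sensitivity_ranking(toy, {"k_elim": 1.0, "q_alv": 5.0, "q_card": 5.0})
```

Parameters that top such a ranking (here, the hypothetical elimination-rate constant) are the ones whose population variability dominates the spread of the Monte Carlo risk range.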

Cronin, W.J.; Oswald, E.J.

1993-09-01

395

NASA Astrophysics Data System (ADS)

The quantum Drude oscillator (QDO) model, which allows many-body polarization and dispersion to be treated both on an equal footing and beyond the dipole limit, is investigated using two approaches to the linear-scaling diffusion Monte Carlo (DMC) technique. The first is a general-purpose norm-conserving DMC (NC-DMC) method wherein the number of walkers, N, remains strictly constant, thereby avoiding the sudden death or explosive growth of walker populations, with an error that vanishes as O(1/N) in the absence of weights. As NC-DMC satisfies detailed balance, a phase space can be defined that permits both an exact trajectory weighting and a fast mean-field trajectory weighting technique to be constructed, which can eliminate or reduce the population bias, respectively. The second is a many-body diagrammatic expansion for trial wave functions in systems dominated by strong on-site harmonic coupling and a dense matrix of bilinear coupling constants, such as the QDO in the dipole limit; an approximate trial function is introduced to treat two-body interactions outside the dipole limit. Using these approaches, high accuracy is achieved in studies of the fcc-solid phase of the highly polarizable atom, xenon, within the QDO model. It is found that 200 walkers suffice to generate converged results for systems as large as 500 atoms. The quality of QDO predictions compared to experiment and the ability to generate these predictions efficiently demonstrate the feasibility of employing the QDO approach to model long-range forces.

Jones, Andrew; Thompson, Andrew; Crain, Jason; Müser, Martin H.; Martyna, Glenn J.

2009-04-01

396

NASA Astrophysics Data System (ADS)

The vector Monte-Carlo method is developed and applied to polarisation optical coherence tomography. The basic principles of simulation of the propagation of polarised electromagnetic radiation with a small coherence length are considered under conditions of multiple scattering. The results of numerical simulations for Rayleigh scattering agree well with the Milne solution generalised to the case of an electromagnetic field and with theoretical calculations in the diffusion approximation.

Churmakov, D. Yu; Kuz'min, V. L.; Meglinskii, I. V.

2006-11-01

397

The goal of this work was to develop an improved Monte Carlo method and implement a computer code for performing automatic integration of multidimensional integrals of the form ∫f(X) dX over a closed region in k-dimensional Euclidean space, where X is a point in the space and dX = dx_1 dx_2 ... dx_k. The scheme is "automatic" in the sense that it returns
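
A minimal sketch of such a Monte Carlo integrator — returning both the estimate and the statistical error that an "automatic" scheme would test against a tolerance — might look like the following. This is plain uniform sampling under stated assumptions, not the improved method of the work above.

```python
import random

def mc_integrate(f, lower, upper, n=100_000, seed=0):
    """Plain Monte Carlo estimate of the integral of f over the box
    [lower, upper] in k dimensions; returns (estimate, standard_error)."""
    rng = random.Random(seed)
    volume = 1.0
    for a, b in zip(lower, upper):
        volume *= (b - a)
    total = total_sq = 0.0
    for _ in range(n):
        fx = f([rng.uniform(a, b) for a, b in zip(lower, upper)])
        total += fx
        total_sq += fx * fx
    mean = total / n
    variance = max(total_sq / n - mean * mean, 0.0)
    return volume * mean, volume * (variance / n) ** 0.5  # error ~ n**-0.5

# f(x, y) = x * y over the unit square; the exact value is 1/4.
est, err = mc_integrate(lambda x: x[0] * x[1], [0.0, 0.0], [1.0, 1.0])
```

An automatic driver would keep doubling n until the returned standard error falls below the user's tolerance, exploiting the dimension-independent O(n^-1/2) convergence of the estimator.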

Yuen

1977-01-01

398

NASA Astrophysics Data System (ADS)

We prove a limit theorem for the lifetime distribution connected with reliability systems whose lifetime is a Pascal convolution of independent and identically distributed random variables. We show that, under some conditions, such distributions may be approximated by means of Erlang distributions. As a consequence, the survival functions of such systems may, respectively, be approximated by Erlang survival functions. Using the Monte Carlo method, we experimentally confirm the theoretical results of our theorem.
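
The Erlang building block of such a comparison is easy to check numerically: the sum of k iid exponentials is Erlang(k), so a Monte Carlo estimate of its survival function should match the closed form. The sketch below verifies only this fixed-k case, not the full Pascal-convolution setting of the theorem.

```python
import math
import random

def erlang_survival(k, lam, t):
    """Closed-form survival function of the Erlang(k, lam) distribution:
    S(t) = exp(-lam*t) * sum_{n=0}^{k-1} (lam*t)**n / n!."""
    return math.exp(-lam * t) * sum((lam * t) ** n / math.factorial(n)
                                    for n in range(k))

def mc_survival(k, lam, t, trials=200_000, seed=2):
    """Monte Carlo estimate of P(sum of k iid Exp(lam) variables > t)."""
    rng = random.Random(seed)
    exceed = sum(1 for _ in range(trials)
                 if sum(rng.expovariate(lam) for _ in range(k)) > t)
    return exceed / trials

exact = erlang_survival(3, 1.0, 2.0)   # 5 * exp(-2)
approx = mc_survival(3, 1.0, 2.0)
```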

Gheorghe, Munteanu Bogdan; Alexei, Leahu; Sergiu, Cataranciuc

2013-09-01

399

We used a Monte-Carlo method to calculate the size distribution of alpha-recoil tracks in phlogopite. The calculations span the age range 0-1 Ma and the range of Th/U ratios encountered in phlogopites of different origin. The track-size distribution is bimodal, with a narrow maximum centred on ~30 nm and a broader maximum centred on ~110 nm. The bimodal distribution is a consequence of the

K. Stübner; R. C. Jonckheere

2006-01-01

400

NASA Astrophysics Data System (ADS)

A software library is presented for the polynomial expansion method (PEM) of the density of states (DOS) introduced in [Y. Motome, N. Furukawa, J. Phys. Soc. Japan 68 (1999) 3853; N. Furukawa, Y. Motome, H. Nakata, Comput. Phys. Comm. 142 (2001) 410]. The library provides all necessary functions for the use of the PEM and its truncated version (TPEM) in a model-independent way. The PEM/TPEM replaces the exact diagonalization of the one-electron sector in models for fermions coupled to classical fields. The computational cost of the algorithm is O(N) (with N the number of lattice sites) for the TPEM [N. Furukawa, Y. Motome, J. Phys. Soc. Japan 73 (2004) 1482], which should be contrasted with the computational cost of the diagonalization technique, which scales as O(N^4). The method is applied for the first time to a double exchange model with finite Hund coupling and also to diluted spin fermion models. Catalogue identifier: ADVK. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVK. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. No. of lines in distributed program, including test data, etc.: 1707. No. of bytes in distributed program, including test data, etc.: 13 644. Distribution format: tar.gz. Operating system: Linux, UNIX. Number of files: 4 plus 1 test program. Programming language used: C. Computer: PC. Nature of the physical problem: The study of correlated electrons coupled to classical fields appears in the treatment of many materials of much current interest in condensed matter theory, e.g., manganites, diluted magnetic semiconductors and high temperature superconductors among others. Method of solution: Typically an exact diagonalization of the electronic sector is performed in this type of models for each configuration of classical fields, which are integrated using a classical Monte Carlo algorithm. A polynomial expansion of the density of states is able to replace the exact diagonalization, decreasing the computational complexity of the problem from O(N^4) to O(N) and allowing for the study of larger lattices and more complex and realistic systems.
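
The flavor of such a truncated polynomial expansion can be conveyed by a small kernel-polynomial-style sketch: Chebyshev moments of the Hamiltonian are built by a matrix recursion and resummed with a damping kernel. This is a generic illustration of the technique, not the library's actual API.

```python
import numpy as np

def chebyshev_dos(h, n_moments=64, n_points=400):
    """Truncated Chebyshev expansion of the density of states: moments
    mu_k = Tr T_k(h) / N come from the recursion T_{k+1} = 2 h T_k - T_{k-1},
    are damped with Jackson-kernel coefficients, and are resummed on an
    energy grid. h must be rescaled so its spectrum lies inside (-1, 1)."""
    n = h.shape[0]
    t_prev, t_cur = np.eye(n), h.copy()
    mu = [np.trace(t_prev) / n, np.trace(t_cur) / n]
    for _ in range(2, n_moments):
        t_prev, t_cur = t_cur, 2.0 * h @ t_cur - t_prev
        mu.append(np.trace(t_cur) / n)
    m = n_moments
    # Jackson damping suppresses Gibbs oscillations of the truncated series.
    g = [((m - k + 1) * np.cos(np.pi * k / (m + 1))
          + np.sin(np.pi * k / (m + 1)) / np.tan(np.pi / (m + 1))) / (m + 1)
         for k in range(m)]
    x = np.linspace(-0.99, 0.99, n_points)
    series = g[0] * mu[0] * np.ones_like(x)
    for k in range(1, m):
        series += 2.0 * g[k] * mu[k] * np.cos(k * np.arccos(x))  # T_k(x)
    return x, series / (np.pi * np.sqrt(1.0 - x ** 2))

# Two-level test Hamiltonian with eigenvalues at +/-0.5:
x, dos = chebyshev_dos(np.diag([0.5, -0.5]))
```

Production codes gain the O(N) scaling by replacing the dense matrix recursion with sparse matrix-vector products and stochastic trace estimation; the resummation step is unchanged.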

Alvarez, G.; Şen, C.; Furukawa, N.; Motome, Y.; Dagotto, E.

2005-05-01

401

Medical accelerators operating above 10 MV are a source of undesirable neutron radiation which contaminates the therapeutic photon beam. These photoneutrons can also generate secondary gamma rays, which increase the undesirable dose to the patient's body and to personnel and the general public. In this study, the Monte Carlo N-Particle MCNP5 code has been used to model the radiotherapy room of a medical

J. Ghassoun; N. Senhou; A. Jehouani

2011-01-01

402

Numerical simulation of the time-dependent power detected by a fission chamber was performed using the continuous-energy Monte Carlo code MCNP4B to understand the time delay of neutron detection in power burst experiments arranged for systems with a water reflector as well as without a reflector in the Transient Experiment Critical Facility (TRACY). The simulation indicated that power generation in the core during an

Hiroshi YANAGISAWA; Akio OHNO

2002-01-01

403

Numerical simulation of the time-dependent power detected by a fission chamber was performed using the continuous-energy Monte Carlo code MCNP4B to understand the time delay of neutron detection in power burst experiments arranged for systems with a water reflector as well as without a reflector in the Transient Experiment Critical Facility (TRACY). The simulation indicated that power generation in the core during

Hiroshi YANAGISAWA; Akio OHNO

2002-01-01

404

National Technical Information Service (NTIS)

With the help of a Monte-Carlo program, the average dose that single organs, organ groups, and larger or smaller parts of the body would receive from an irradiation fully specified by its geometry and photon energy can be determined. ...

R. Kramer; M. Zankl; G. Williams; G. Drexler

1982-01-01

405

NASA Astrophysics Data System (ADS)

The rarefied flow of nitrogen with speed ratio (mean speed over most probable speed) of S=2,5,10, pressure of 10.132 kPa into rectangular nanochannels with height of 100, 500, and 1000 nm is investigated using a three-dimensional, unstructured, direct simulation Monte Carlo method. The parametric computational investigation considers rarefaction effects with Knudsen number Kn=0.481,0.962,4.81, geometric effects with nanochannel aspect ratios of (L/H) from AR=1,10,100, and back-pressure effects with imposed pressures from 0 to 200 kPa. The computational domain features a buffer region upstream of the inlet and the nanochannel walls are assumed to be diffusively reflecting at the free stream temperature of 273 K. The flow analysis is based on the phase space distributions while macroscopic flow variables sampled in cells along the centerline are used to corroborate the microscopic analysis. The phase-space distributions show the formation of a disturbance region ahead of the inlet due to slow particles backstreaming through the inlet and the formation of a density enhancement with its maximum inside the nanochannel. Velocity phase-space distributions show a low-speed particle population generated inside the nanochannel due to wall collisions which is superimposed with the free stream high-speed population. The mean velocity decreases, while the number density increases in the buffer region. The translational temperature increases in the buffer region and reaches its maximum near the inlet. For AR=10,100 nanochannels the gas reaches near equilibrium with the wall temperature. The heat transfer rate is largest near the inlet region where nonequilibrium effects are dominant. For Kn=0.481,0.962,4.81, vacuum back pressure, and AR=1, the nanoflow is supersonic throughout the nanochannel, while for AR=10,100, the nanoflow is subsonic at the inlet and becomes sonic at the outlet. 
For Kn=0.962, AR=1, and imposed back pressures of 120 and 200 kPa, the nanoflow becomes subsonic at the outlet. For Kn=0.962 and AR=10, the outlet pressure nearly matches the imposed back pressure, with the nanoflow becoming sonic at 40 kPa and subsonic at 100 kPa. Heat transfer rates at the inlet and mass flow rates at the outlet are in good agreement with those obtained from theoretical free-molecular models. The flows in these nanochannels qualitatively share characteristics found in microflows and in continuum compressible channel flows with friction and heat loss.

Gatsonis, Nikolaos A.; Al-Kouz, Wael G.; Chamberlin, Ryan E.

2010-03-01

406

Monte Carlo Simulation for Perusal and Practice.

ERIC Educational Resources Information Center

Many problems in statistics can be meaningfully investigated through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic
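
The idea can be sketched directly: draw many samples from a population with known characteristics and inspect the resulting sampling distribution of a statistic. The population and statistic below are illustrative choices.

```python
import random
import statistics

def monte_carlo_statistic(stat, draw, n, replications=5000, seed=3):
    """Monte Carlo study of a statistic: repeatedly draw samples of size n
    from a known population and collect the statistic, yielding an
    empirical approximation to its sampling distribution."""
    rng = random.Random(seed)
    values = [stat([draw(rng) for _ in range(n)]) for _ in range(replications)]
    return statistics.fmean(values), statistics.stdev(values)

# Sampling distribution of the mean for N(0, 1) samples of size 25:
# the standard error should be close to 1/sqrt(25) = 0.2.
m, se = monte_carlo_statistic(statistics.fmean, lambda r: r.gauss(0.0, 1.0), 25)
```

The same loop, with the population or statistic swapped out, answers questions that lack closed-form solutions, such as the robustness of a test statistic under non-normal populations.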

Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

407

VMC-dc is a computer program that simulates the irradiation of the human body by external sources. It uses a voxel phantom produced at Yale University and the Monte Carlo technique to simulate the emission of photons by a point, ground, cloud, or X-ray source. It then transports the photons through the human body phantom and calculates the dose to

S. A. Natouh

408

The correction of a thermal model for a thermally controlled satellite in ground test conditions is studied using a Monte Carlo hybrid algorithm. First, the global and local parameters are summarized according to sensitivity analyses on uncertain parameters, and then the model correction is treated as a parameter optimization problem to be solved with a hybrid algorithm. Finally, the correction

WenLong Cheng; Na Liu; Zhi Li; Qi Zhong; AiMing Wang; ZhiMin Zhang; ZongBo He

2011-01-01

409

In order to use particle-in-cell (PIC) simulation codes for modeling collisional plasmas and self-sustained discharges, it is necessary to add interactions between charged and neutral particles. In conventional Monte Carlo schemes the time or distance between collisions for each particle is calculated using random numbers. This procedure allows for efficient algorithms but is not compatible with PIC simulations where the
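
A standard remedy in PIC-MCC codes is the null-collision method: every particle is tested each timestep with a common probability derived from an upper bound on the total collision frequency, and the surplus is treated as "null" collisions that change nothing. A minimal sketch (function names are illustrative):

```python
import math
import random

def collide_fraction(nu_max, dt):
    """Null-collision probability: each particle is tested every timestep
    with the same probability P = 1 - exp(-nu_max * dt), where nu_max
    bounds the total collision frequency over all particle energies."""
    return 1.0 - math.exp(-nu_max * dt)

def select_colliders(n_particles, nu_max, dt, rng):
    """Indices of particles undergoing a (possibly null) collision this
    timestep; the actual process for each is then chosen from the
    energy-dependent cross sections, with the remainder left as null."""
    p = collide_fraction(nu_max, dt)
    return [i for i in range(n_particles) if rng.random() < p]
```

Because the same fraction of particles is tested every step, the collision handling fits naturally into the fixed-timestep advance of a PIC cycle, unlike schemes that track a per-particle time to the next collision.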

V. Vahedi; M. Surendra

1995-01-01

410

In a recent paper, Watanabe et al. used direct simulation Monte Carlo to study Rayleigh-Bénard convection. They reported that, using stress-free boundary conditions, the onset of convection in the simulation occurred at a Rayleigh number much larger than the critical Rayleigh number predicted by linear stability analysis. We show that the source of their discrepancy is their failure to include the temperature jump effect in the calculation of the Rayleigh number.

Garcia, A.L. [Lawrence Livermore National Lab., CA (United States); Baras, F.; Mansour, M.M. [Universite Libre de Bruxelles (Belgium)

1994-06-30

411

NASA Astrophysics Data System (ADS)

An NaI(Tl) multidetector layout combined with the use of Monte Carlo (MC) calculations and artificial neural networks (ANNs) is proposed to assess the radioactive contamination of urban and semi-urban environment surfaces. A very simple urban environment, a model street composed of a wall on either side and the ground surface, was the study case. A layout of four NaI(Tl) detectors was used, and the data corresponding to the response of the detectors were obtained by the Monte Carlo method. Two additional data sets with random values for the contamination and for the detector response were also produced to test the ANNs. For this work, 18 feedforward ANN topologies with the backpropagation learning algorithm were chosen and trained. The results showed that some trained ANNs were able to accurately predict the contamination on the three urban surfaces when su