Science.gov

Sample records for carlo method mcnp4c

  1. A New Method for Neutron Capture Therapy (NCT) and Related Simulation by the MCNP4C Code

    NASA Astrophysics Data System (ADS)

    Shirazi, Mousavi; Alireza, Seyed; Ali, Taheri

    2010-01-01

    Neutron capture therapy (NCT) is regarded as one of the most important methods for treating certain serious cancers, and its use therefore demands careful control and protection measures. Neutron therapy is among the most important and effective nuclear-medicine treatments for cancer. However, fast neutrons are highly destructive, causing roughly five times the damage of gamma photons, so the energy absorbed by tissue (the absorbed dose) during neutron therapy must be determined accurately, both to limit unnecessary absorbed energy and to avoid damaging the surrounding healthy tissue. In this article, a neutron-therapy treatment of a male liver has been simulated with the Monte Carlo method (MCNP4C code) and also with an analytical method; the dose absorbed by this tissue has been obtained accurately for sources of different energies, and the results of the two methods have been compared.

  2. Monte Carlo method for calculating the radiation skyshine produced by electron accelerators

    NASA Astrophysics Data System (ADS)

    Kong, Chaocheng; Li, Quanfeng; Chen, Huaibi; Du, Taibin; Cheng, Cheng; Tang, Chuanxiang; Zhu, Li; Zhang, Hui; Pei, Zhigang; Ming, Shenjin

    2005-06-01

    Using the MCNP4C Monte Carlo code, the X-ray skyshine produced by 9 MeV, 15 MeV and 21 MeV electron linear accelerators was calculated with a new two-step method combined with the split-and-roulette variance reduction technique. Results of the Monte Carlo simulation, the empirical formulas used for skyshine calculation, and the dose measurements were analyzed and compared. In conclusion, the skyshine dose measurements agreed reasonably with the results computed by the Monte Carlo method, but deviated from the results given by the empirical formulas. The effect of different accelerator head structures on the skyshine dose is also discussed.

  3. Calculation of Absorbed Dose in Target Tissue and Equivalent Dose in Sensitive Tissues of Patients Treated by BNCT Using MCNP4C

    NASA Astrophysics Data System (ADS)

    Zamani, M.; Kasesaz, Y.; Khalafi, H.; Pooya, S. M. Hosseini

    Boron Neutron Capture Therapy (BNCT) is used for treatment of many diseases, including brain tumors, in many medical centers. In this method, a target area (e.g., the patient's head) is irradiated by an optimized, suitable neutron field such as that of a research nuclear reactor. To protect the healthy tissues located in the vicinity of the irradiated tissue, and in keeping with the ALARA principle, unnecessary exposure of these vital organs must be prevented. In this study, the absorbed dose in the target tissue and the equivalent dose in different sensitive tissues of a patient treated by BNCT are calculated by numerical simulation (MCNP4C code), using the parameters of the MIRD standard phantom. The equivalent dose in 11 sensitive organs located in the vicinity of the target, and the total equivalent dose in the whole body, have been calculated. The results show that the absorbed doses in the tumor and in normal brain tissue are 30.35 Gy and 0.19 Gy, respectively. The total equivalent dose in the 11 sensitive organs, other than the tumor and normal brain tissue, is 14 mGy. The maximum equivalent doses in organs other than the brain and tumor occur in the lungs and thyroid, and are 7.35 mSv and 3.00 mSv, respectively.

  4. A comparative study of the neutron flux spectra in the MNSR irradiation sites for the HEU and LEU cores using the MCNP4C code.

    PubMed

    Dawahra, S; Khattab, K; Saba, G

    2015-10-01

    A comparative study of fuel conversion from HEU to LEU in the Miniature Neutron Source Reactor (MNSR) has been performed in this paper using the MCNP4C code. The neutron energy and lethargy flux spectra in the first inner and outer irradiation sites of the MNSR reactor were investigated for the existing HEU fuel (UAl4-Al, 90% enriched) and the potential LEU fuels (U3Si2-Al, U3Si-Al, U9Mo-Al, 19.75% enriched, and UO2, 12.6% enriched). The neutron energy flux spectrum was calculated by dividing the neutron flux in each group by the width of that energy group. The neutron flux spectrum per unit lethargy was calculated by multiplying the neutron energy flux spectrum of each group by the average energy of that group. The thermal neutron flux was obtained by summing the neutron fluxes from 0.0 to 0.625 eV, and the fast neutron flux by summing the neutron fluxes from 0.5 MeV to 10 MeV, for the existing HEU and potential LEU fuels. Good agreement was found between the flux spectra of the potential LEU fuels and the existing HEU fuel, with maximum relative differences of less than 10% and 8% in the inner and outer irradiation sites, respectively. PMID:26142805
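    The group-wise operations described here (flux per unit energy, flux per unit lethargy, and thermal/fast sums) can be sketched in a few lines; the group boundaries and flux values below are illustrative placeholders, not the MNSR data.

```python
import numpy as np

# Illustrative 3-group example; boundaries in eV, fluxes in n/cm^2/s.
# These numbers are placeholders, not the MNSR data.
edges = np.array([1e-3, 0.625, 0.5e6, 10e6])
group_flux = np.array([3.0e11, 1.5e11, 8.0e10])

width = np.diff(edges)                  # width of each energy group
energy_flux = group_flux / width        # flux per unit energy (divide by width)
avg_E = 0.5 * (edges[:-1] + edges[1:])  # average energy of each group
lethargy_flux = energy_flux * avg_E     # flux per unit lethargy (multiply by avg E)

thermal = group_flux[edges[1:] <= 0.625].sum()  # groups up to 0.625 eV
fast = group_flux[edges[:-1] >= 0.5e6].sum()    # groups from 0.5 MeV to 10 MeV
```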

  5. Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Kalos, M. H.

    2010-01-01

    Computation now plays an essential role in science, especially in theoretical physics. The greater depth of our understanding of physical phenomena and the need to predict the behavior of complex devices demands a level of analysis that purely mathematical methods cannot meet. Monte Carlo methods offer some of the most powerful approaches to computation. They permit a simple transcription of a random process into a computer code. Alternatively, they give the only accurate approach to the many-dimensional problems of theoretical physics. I will describe a number of complementary approaches for Monte Carlo methods in treating diverse systems.

  6. Comparison of Monte Carlo simulations of photon/electron dosimetry in microscale applications.

    PubMed

    Joneja, O P; Negreanu, C; Stepanek, J; Chawl, R

    2003-06-01

    It is important to establish reliable calculational tools to plan and analyse representative microdosimetry experiments in the context of microbeam radiation therapy development. In this paper, an attempt has been made to investigate the suitability of the MCNP4C Monte Carlo code to adequately model photon/electron transport over micron distances. The case of a single cylindrical microbeam of 25-micron diameter incident on a water phantom has been simulated in detail with both MCNP4C and the code PSI-GEANT, for different incident photon energies, to get absorbed dose distributions at various depths, with and without electron transport being considered. In addition, dose distributions calculated for a single microbeam with a photon spectrum representative of the European Synchrotron Radiation Facility (ESRF) have been compared. Finally, a large number of cylindrical microbeams (a total of 2601 beams, placed on a 200-micron square pitch, covering an area of 1 cm2) incident on a water phantom have been considered to study cumulative radial dose distributions at different depths. From these distributions, ratios of peak (within the microbeam) to valley (mid-point along the diagonal connecting two microbeams) dose values have been determined. The various comparisons with PSI-GEANT results have shown that MCNP4C, with its high flexibility in terms of its numerous source and geometry description options, variance reduction methods, detailed error analysis, statistical checks and different tally types, can be a valuable tool for the analysis of microbeam experiments. PMID:12956187

  7. The MCNP-4C2 design of a two element photon/electron dosemeter that uses magnesium/copper/phosphorus doped lithium fluoride.

    PubMed

    Eakins, J S; Bartlett, D T; Hager, L G; Molinos-Solsona, C; Tanner, R J

    2008-01-01

    The Health Protection Agency is changing from using detectors made from 7LiF:Mg,Ti in its photon/electron personal dosemeters, to 7LiF:Mg,Cu,P. Specifically, the Harshaw TLD-700H card is to be adopted. As a consequence of this change, the dosemeter holder is also being modified not only to accommodate the shape of the new card, but also to optimize the photon and electron response characteristics of the device. This redesign process was achieved using MCNP-4C2 and the kerma approximation, electron range/energy tables with additional electron transport calculations, and experimental validation, with different potential filters compared; the optimum filter studied was a polytetrafluoroethylene disc of diameter 18 mm and thickness 4.3 mm. Calculated relative response characteristics at different angles of incidence and energies between 16 and 6174 keV are presented for this new dosemeter configuration and compared with measured type-test results. A new estimate for the energy-dependent relative light conversion efficiency appropriate to the 7LiF:Mg,Cu,P was also derived for determining the correct dosemeter response. PMID:17951605

  8. Shell model Monte Carlo methods

    SciTech Connect

    Koonin, S.E.; Dean, D.J.

    1996-10-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, thermal behavior of {gamma}-soft nuclei, and calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.

  9. Monte Carlo methods in ICF

    SciTech Connect

    Zimmerman, G.B.

    1997-06-24

    Monte Carlo methods appropriate for simulating the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing of the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged-particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged-particle transport through heterogeneous materials.

  10. Modeling the effect on criticality of changes in key parameters for a small High Temperature Nuclear Reactor (U-BatteryTM) using MCNP4C

    NASA Astrophysics Data System (ADS)

    Pauzi, A. M.

    2013-06-01

    The neutron transport code Monte Carlo N-Particle (MCNP), well known as a gold standard for predicting nuclear reactions, was used to model the small nuclear reactor core called the "U-batteryTM", developed by the University of Manchester and the Delft University of Technology. The paper introduces the modeling of this small reactor core, a high temperature reactor (HTR) type with small coated TRISO fuel particles in a graphite matrix, using the MCNP4C software. The criticality of the core was calculated with the software and analysed by changing key parameters such as coolant type, fuel type and enrichment level, cladding material, and control rod type. The criticality results from the simulation were validated against the SCALE 5.1 results of [1] M. Ding and J. L. Kloosterman, 2010. The data produced from these analyses will be used in proposing an initial core layout and a provisional list of materials for the newly designed reactor core. In the future, the criticality study will be continued with different core configurations and geometries.

  11. Applications of Monte Carlo Methods in Calculus.

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Gordon, Florence S.

    1990-01-01

    Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
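    One of the applications listed, maximizing a function, reduces to keeping the best of many random samples of the domain. This is a minimal sketch with an assumed test function, not material from the article.

```python
import random

def mc_maximize(f, a, b, n=100_000, seed=1):
    """Estimate the maximum of f on [a, b] by sampling random points."""
    rng = random.Random(seed)
    return max(f(a + (b - a) * rng.random()) for _ in range(n))

# f(x) = x(1 - x) attains its maximum 0.25 at x = 0.5
est = mc_maximize(lambda x: x * (1 - x), 0.0, 1.0)
```

    Minimizing a function is the same sketch with min in place of max.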

  12. Monte Carlo methods on advanced computer architectures

    SciTech Connect

    Martin, W.R.

    1991-12-31

    Monte Carlo methods describe a wide class of computational methods that utilize random numbers to perform a statistical simulation of a physical problem, which itself need not be a stochastic process. For example, Monte Carlo can be used to evaluate definite integrals, which are not stochastic processes, or may be used to simulate the transport of electrons in a space vehicle, which is a stochastic process. The name Monte Carlo came about during the Manhattan Project to describe the new mathematical methods being developed which had some similarity to the games of chance played in the casinos of Monte Carlo. Particle transport Monte Carlo is just one application of Monte Carlo methods, and will be the subject of this review paper. Other applications of Monte Carlo, such as reliability studies, classical queueing theory, molecular structure, the study of phase transitions, or quantum chromodynamics calculations for basic research in particle physics, are not included in this review. The reference by Kalos is an introduction to general Monte Carlo methods, and references to other applications of Monte Carlo can be found in this excellent book. For the remainder of this paper, the term Monte Carlo will be synonymous with particle transport Monte Carlo, unless otherwise noted. 60 refs., 14 figs., 4 tabs.
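    The definite-integral example mentioned above can be made concrete with the sample-mean estimator; this is a generic sketch, not code from the review.

```python
import random

def mc_integral(f, a, b, n=200_000, seed=0):
    """Sample-mean Monte Carlo estimate of the integral of f over [a, b]."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# integral of x^2 over [0, 1] is exactly 1/3
est = mc_integral(lambda x: x * x, 0.0, 1.0)
```

    The statistical error of this estimator shrinks as 1/sqrt(n), independent of the dimension of the integral, which is why the approach scales to the many-dimensional problems of transport.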

  13. Monte Carlo N Particle code - Dose distribution of clinical electron beams in inhomogeneous phantoms.

    PubMed

    Nedaie, H A; Mosleh-Shirazi, M A; Allahverdi, M

    2013-01-01

    Electron dose distributions calculated using the currently available analytical methods can be associated with large uncertainties. The Monte Carlo method is the most accurate method for dose calculation in electron beams. Most clinical electron beam simulation studies have been performed using non-MCNP [Monte Carlo N-Particle] codes. Given the differences between Monte Carlo codes, this work aims to evaluate the accuracy of MCNP4C-simulated electron dose distributions in a homogeneous phantom and around inhomogeneities. Different types of phantoms ranging in complexity were used; namely, a homogeneous water phantom and phantoms made of polymethyl methacrylate slabs containing different-sized, low- and high-density inserts of heterogeneous materials. Electron beams with 8 and 15 MeV nominal energy generated by an Elekta Synergy linear accelerator were investigated. Measurements were performed for a 10 cm × 10 cm applicator at a source-to-surface distance of 100 cm. Individual parts of the beam-defining system were introduced into the simulation one at a time in order to show their effect on depth doses. In contrast to the first scattering foil, the secondary scattering foil, X and Y jaws and applicator contribute up to 5% of the dose. A 2%/2 mm agreement between MCNP and measurements was found in the homogeneous phantom; in the presence of heterogeneities, agreement was in the range of 1-3%, being generally within 2% of the measurements for both energies in a "complex" phantom. A full-component simulation is necessary in order to obtain a realistic model of the beam. The MCNP4C results agree well with the measured electron dose distributions. PMID:23533162

  14. Improved Monte Carlo Renormalization Group Method

    DOE R&D Accomplishments Database

    Gupta, R.; Wilson, K. G.; Umrigar, C.

    1985-01-01

    An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.

  15. Quantum speedup of Monte Carlo methods

    PubMed Central

    Montanaro, Ashley

    2015-01-01

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079

  16. Monte Carlo simulation for Neptun 10 PC medical linear accelerator and calculations of output factor for electron beam

    PubMed Central

    Bahreyni Toossi, Mohammad Taghi; Momennezhad, Mehdi; Hashemi, Seyed Mohammad

    2012-01-01

    Aim: Exact knowledge of dosimetric parameters is an essential pre-requisite of effective treatment in radiotherapy. Different techniques have been used to fulfill this requirement, one of which is Monte Carlo simulation. Materials and methods: This study used MCNP-4C to simulate electron beams from the Neptun 10 PC medical linear accelerator. Output factors for 6, 8 and 10 MeV electrons applied to eleven different conventional fields were both measured and calculated. Results: The measurements were carried out with a Wellhofler-Scanditronix dose scanning system. Our findings revealed that the output factors acquired by MCNP-4C simulation and the corresponding values obtained by direct measurement are in very good agreement. Conclusion: In general, the very good consistency of simulated and measured results shows that the goal of this work has been accomplished. PMID:24377010

  17. An enhanced Monte Carlo outlier detection method.

    PubMed

    Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi

    2015-09-30

    Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the value of validation by Kovats retention indices and the root mean square error of prediction decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc. PMID:26226927
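    The baseline Monte Carlo outlier detection that the enhanced method builds on can be sketched as repeated random train/test splits, accumulating each sample's held-out prediction error; samples with consistently large errors are flagged. The 1-D least-squares model and the synthetic data below are illustrative assumptions, not the paper's datasets or models.

```python
import random

def mc_outlier_scores(x, y, n_rounds=500, train_frac=0.7, seed=0):
    """Average held-out absolute prediction error per sample, over many
    random train/test splits of a 1-D least-squares fit."""
    rng = random.Random(seed)
    n = len(x)
    err_sum = [0.0] * n
    err_cnt = [0] * n
    n_train = round(train_frac * n)
    for _ in range(n_rounds):
        idx = list(range(n))
        rng.shuffle(idx)
        train = idx[:n_train]
        # least-squares slope b and intercept a on the training subset
        mx = sum(x[i] for i in train) / len(train)
        my = sum(y[i] for i in train) / len(train)
        sxx = sum((x[i] - mx) ** 2 for i in train)
        b = sum((x[i] - mx) * (y[i] - my) for i in train) / sxx
        a = my - b * mx
        for i in idx[n_train:]:
            err_sum[i] += abs(y[i] - (a + b * x[i]))
            err_cnt[i] += 1
    return [s / c if c else 0.0 for s, c in zip(err_sum, err_cnt)]

# y = 2x with one gross outlier planted at index 5
xs = [float(i) for i in range(10)]
ys = [2 * v for v in xs]
ys[5] += 10.0
scores = mc_outlier_scores(xs, ys)
```

    The planted outlier accumulates by far the largest average held-out error; the enhancement described in the abstract refines this by building cross-prediction models from determinate normal samples and analyzing the error distributions of dubious samples individually.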

  18. An introduction to Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Walter, J.-C.; Barkema, G. T.

    2015-01-01

    Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations in order to access thermodynamical quantities, without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo simulations are ergodicity and detailed balance. The Ising model, a lattice spin system with nearest-neighbor interactions, is appropriate for illustrating different kinds of Monte Carlo simulations. It displays a second-order phase transition between disordered (high temperature) and ordered (low temperature) phases, leading to different simulation strategies. The Metropolis algorithm and Glauber dynamics are efficient at high temperature. Close to the critical temperature, where the spins display long-range correlations, cluster algorithms are more efficient. We introduce the rejection-free (or continuous-time) algorithm and describe in detail an interesting alternative representation of the Ising model using graphs instead of spins, the so-called worm algorithm. We conclude with a discussion of dynamical effects such as thermalization and correlation time.
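    A minimal Metropolis sketch for the 2-D Ising model mentioned above (J = 1, zero field, periodic boundaries, single-spin flips); lattice size, temperature and sweep count are illustrative choices.

```python
import math
import random

def metropolis_ising(L=10, T=5.0, sweeps=200, seed=0):
    """Metropolis single-spin-flip dynamics for the 2-D Ising model.
    Returns the magnetization per spin after the given number of sweeps."""
    rng = random.Random(seed)
    spin = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2 * spin[i][j] * nb          # energy change of flipping (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spin[i][j] = -spin[i][j]      # Metropolis acceptance rule
    return sum(sum(row) for row in spin) / (L * L)

# well above T_c ~ 2.27: disordered phase, magnetization near zero
m_hot = metropolis_ising(T=5.0)
```

    The acceptance rule min(1, exp(-dE/T)) satisfies detailed balance with respect to the Boltzmann distribution, which together with ergodicity is what guarantees correct sampling.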

  19. Quantum Monte Carlo methods for nuclear physics

    SciTech Connect

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-09-09

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  20. Calculating Pi Using the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Williamson, Timothy

    2013-11-01

    During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the Center for Sustainable Energy at Notre Dame University (RET @ cSEND), working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation, a reactor may produce 10^21 antineutrinos per second, with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and with how the total antineutrino flux could be obtained from such a small sample, I read about a simulation technique called Monte Carlo. Further investigation led me to the Monte Carlo method page of Wikipedia, where I saw an example of approximating pi using this simulation. Other examples of this method were typically computer simulations or purely mathematical. It is my belief that this method may be easily related to students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.
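    The rice-sprinkling activity has a direct computational analogue: sample points uniformly in a unit square and count the fraction landing inside the quarter-circle, whose area is pi/4. A minimal sketch:

```python
import random

def estimate_pi(n=200_000, seed=0):
    """Four times the fraction of uniform points inside the unit quarter-circle."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n

est = estimate_pi()  # should land close to 3.14159
```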

  2. Quantum Monte Carlo methods for nuclear physics

    DOE PAGESBeta

    Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E,; Wiringa, Robert B.

    2014-10-19

    Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  3. Quantum Monte Carlo methods for nuclear physics

    NASA Astrophysics Data System (ADS)

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-07-01

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  4. Discrete range clustering using Monte Carlo methods

    NASA Technical Reports Server (NTRS)

    Chatterji, G. B.; Sridhar, B.

    1993-01-01

    For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
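    As a rough illustration of the Monte Carlo clustering idea with simulated annealing, the sketch below groups 1-D "range" values into k clusters by minimizing within-cluster squared deviation under a Metropolis acceptance rule with a linear cooling schedule. The cost function, cooling schedule and toy data are assumptions for illustration, not the paper's algorithms or flight data.

```python
import math
import random

def sa_cluster(points, k=2, steps=20_000, t0=1.0, seed=0):
    """Simulated-annealing clustering of 1-D points into k groups,
    minimizing total within-cluster squared deviation."""
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in points]

    def cost(lab):
        c = 0.0
        for g in range(k):
            members = [p for p, l in zip(points, lab) if l == g]
            if members:
                mu = sum(members) / len(members)
                c += sum((p - mu) ** 2 for p in members)
        return c

    best, best_cost = labels[:], cost(labels)
    cur_cost = best_cost
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9   # linear cooling schedule
        i, g = rng.randrange(len(points)), rng.randrange(k)
        old = labels[i]
        labels[i] = g                          # propose moving one point
        new_cost = cost(labels)
        if new_cost < cur_cost or rng.random() < math.exp((cur_cost - new_cost) / t):
            cur_cost = new_cost                # Metropolis acceptance
            if new_cost < best_cost:
                best, best_cost = labels[:], new_cost
        else:
            labels[i] = old                    # reject: undo the move
    return best

# two well-separated depth groups
pts = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]
lab = sa_cluster(pts)
```

    Dropping the acceptance of uphill moves (keeping only improvements) gives the basic Monte Carlo variant the paper compares against annealing.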

  5. Accelerated Monte Carlo Methods for Coulomb Collisions

    NASA Astrophysics Data System (ADS)

    Rosin, Mark; Ricketson, Lee; Dimits, Andris; Caflisch, Russel; Cohen, Bruce

    2014-03-01

    We present a new highly efficient multi-level Monte Carlo (MLMC) simulation algorithm for Coulomb collisions in a plasma. The scheme, initially developed and used successfully for applications in financial mathematics, is applied here to kinetic plasmas for the first time. The method is based on a Langevin treatment of the Landau-Fokker-Planck equation and has a rich history derived from the works of Einstein and Chandrasekhar. The MLMC scheme successfully reduces the computational cost of achieving an RMS error ε in the numerical solution of collisional plasma problems from O(ε^-3), for the standard state-of-the-art Langevin and binary collision algorithms, to a theoretically optimal O(ε^-2) scaling, when used in conjunction with an underlying Milstein discretization of the Langevin equation. In the test case presented here, the method accelerates simulations by factors of up to 100. We summarize the scheme, present some tricks for improving its efficiency yet further, and discuss the method's range of applicability. Work performed for US DOE by LLNL under contract DE-AC52-07NA27344 and by UCLA under grant DE-FG02-05ER25710.
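    The multilevel idea can be sketched on a much simpler Langevin problem than the Landau-Fokker-Planck equation: estimate E[X_T] for dX = -aX dt + dW by telescoping corrections between coupled coarse and fine time discretizations that share the same Brownian increments. The Ornstein-Uhlenbeck-type SDE, the Euler-Maruyama scheme (rather than Milstein) and the fixed per-level sample counts are simplifying assumptions for illustration.

```python
import math
import random

def mlmc_mean(T=1.0, a=1.0, x0=1.0, L=4, n=2000, seed=0):
    """Multilevel Monte Carlo estimate of E[X_T] for dX = -a X dt + dW.
    Level l uses 2**l Euler-Maruyama steps; the coarse path in each
    correction reuses the fine path's summed Brownian increments, so the
    level corrections have small variance."""
    rng = random.Random(seed)

    def coupled_pair(l):
        nf = 2 ** l
        hf = T / nf
        xf, xc, dw_c = x0, x0, 0.0
        for step in range(nf):
            dw = rng.gauss(0.0, math.sqrt(hf))
            xf += -a * xf * hf + dw          # fine-path Euler step
            dw_c += dw
            if l > 0 and step % 2 == 1:      # coarse step: two fine increments
                xc += -a * xc * (2 * hf) + dw_c
                dw_c = 0.0
        return xf, xc

    est = 0.0
    for l in range(L + 1):
        if l == 0:
            s = sum(coupled_pair(0)[0] for _ in range(n))
        else:
            s = sum(f - c for f, c in (coupled_pair(l) for _ in range(n)))
        est += s / n                          # telescoping sum over levels
    return est

# exact answer is x0 * exp(-a*T) ~ 0.3679
est = mlmc_mean()
```

    In the full method the per-level sample counts are chosen from the measured level variances, which is where the O(ε^-2) cost scaling comes from.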

  6. The Monte Carlo Method. Popular Lectures in Mathematics.

    ERIC Educational Resources Information Center

    Sobol', I. M.

    The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…

  7. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    SciTech Connect

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
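    The slow convergence that motivates these acceleration methods can be demonstrated with plain power iteration on a toy fission matrix: the error decays roughly like the dominance ratio per iteration, so a ratio near 1 converges slowly. The 2x2 matrix below, with eigenvalues 1.0 and 0.99, is an illustrative stand-in, not a system from the thesis.

```python
import math

def power_iteration(F, tol=1e-8, max_it=100_000):
    """Plain power iteration: returns the dominant eigenvalue estimate,
    its normalized eigenvector, and the iteration count at convergence."""
    v = [1.0, 0.0]
    k = 0.0
    for it in range(1, max_it + 1):
        w = [F[0][0] * v[0] + F[0][1] * v[1],
             F[1][0] * v[0] + F[1][1] * v[1]]
        k_new = math.hypot(*w)              # eigenvalue estimate = |F v|
        v_new = [w[0] / k_new, w[1] / k_new]
        if abs(k_new - k) < tol and it > 2:
            return k_new, v_new, it
        k, v = k_new, v_new
    return k, v, max_it

# symmetric matrix with eigenvalues 1.0 and 0.99 -> dominance ratio 0.99
F = [[0.995, 0.005], [0.005, 0.995]]
k, v, iters = power_iteration(F)
```

    Hundreds of iterations are needed here even for a 2x2 system; fission matrix and diffusion-based acceleration aim to collapse exactly this kind of slowly decaying error mode.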

  8. Vectorized Monte Carlo methods for reactor lattice analysis

    NASA Technical Reports Server (NTRS)

    Brown, F. B.

    1984-01-01

    Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

  9. A Particle Population Control Method for Dynamic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Sweezy, Jeremy; Nolen, Steve; Adams, Terry; Zukaitis, Anthony

    2014-06-01

    A general particle population control method has been derived from splitting and Russian Roulette for dynamic Monte Carlo particle transport. A well-known particle population control method, known as the particle population comb, has been shown to be a special case of this general method. This general method has been incorporated in Los Alamos National Laboratory's Monte Carlo Application Toolkit (MCATK) and examples of its use are shown for both super-critical and sub-critical systems.
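
    The comb mentioned above can be sketched compactly. A single random offset places evenly spaced "teeth" over the cumulative weight of the population; each tooth selects the particle it lands in, so the output has a fixed size and exactly conserved total weight, and each particle's expected number of copies equals its weight divided by the tooth spacing. The `(weight, state)` tuple representation below is an assumption for illustration, not MCATK's data model:

```python
import random

def comb(particles, n_out, rng=random):
    """Resample (weight, state) pairs to n_out particles of equal weight."""
    total = sum(w for w, _ in particles)
    tooth_w = total / n_out
    tooth = rng.random() * tooth_w      # one random offset shared by all teeth
    out, cum = [], 0.0
    for w, state in particles:
        cum += w
        while tooth < cum:              # every tooth inside this particle's
            out.append((tooth_w, state))  # weight interval copies it once
            tooth += tooth_w
    return out

random.seed(1)
pop = [(random.random(), i) for i in range(1000)]
new = comb(pop, 100)                    # 100 particles, same total weight
```

    Because all teeth share one offset, the comb introduces far less variance than resampling each particle independently by splitting and roulette.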

  10. An assessment of the MCNP4C weight window

    SciTech Connect

    Christopher N. Culbertson; John S. Hendricks

    1999-12-01

    A new, enhanced weight window generator suite has been developed for MCNP version 4C. The new generator correctly estimates importances in either a user-specified, geometry-independent, orthogonal grid or in MCNP geometric cells. The geometry-independent option alleviates the need to subdivide the MCNP cell geometry for variance reduction purposes. In addition, the new suite corrects several pathologies in the existing MCNP weight window generator. The new generator is applied in a set of five variance reduction problems. The improved generator is compared with the weight window generator applied in MCNP4B. The benefits of the new methodology are highlighted, along with a description of its limitations. The authors also provide recommendations for utilization of the weight window generator.
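
    For context, the mechanism a weight window enforces is simple: particles above the upper bound are split, particles below the lower bound play Russian roulette, and particles inside the window pass through unchanged, so the expected weight is preserved everywhere. A minimal sketch of that check (the bounds, survivor weight, and split cap are illustrative choices, not MCNP's internals):

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Return the list of post-window weights for one particle (empty if killed)."""
    if weight > w_high:                        # too heavy: split
        n = min(int(weight / w_high) + 1, 10)
        return [weight / n] * n
    if weight < w_low:                         # too light: Russian roulette
        w_survive = 0.5 * (w_low + w_high)
        if rng.random() < weight / w_survive:
            return [w_survive]                 # survivor carries more weight
        return []                              # killed
    return [weight]                            # inside the window: unchanged

split = apply_weight_window(5.0, 0.5, 1.0)     # six particles of weight 5/6

# Roulette is unbiased: expected surviving weight equals the input weight.
random.seed(0)
trials = 100000
mean_w = sum(sum(apply_weight_window(0.1, 0.5, 1.0)) for _ in range(trials)) / trials
```

    The generator's job, which the abstract addresses, is choosing good spatial and energy-dependent values of `w_low`/`w_high` from estimated importances.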

  11. Monte Carlo methods and applications in nuclear physics

    SciTech Connect

    Carlson, J.

    1990-01-01

    Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon interaction, charge and magnetic form factors, the Coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.

  12. Neutron spectral unfolding using the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    O'Brien, Keran; Sanna, Robert

    A solution to the neutron unfolding problem, without approximation or a priori assumptions as to spectral shape, has been devised, based on the Monte Carlo method, and its rate of convergence derived. By application to synthesized measurements with controlled and varying levels of error, the effect of measurement error has been investigated. This Monte Carlo method has also been applied to experimental stray neutron data from measurements inside a reactor containment vessel.

  13. A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation

    SciTech Connect

    Li, Z.; Wang, K.

    2012-07-01

    Full core calculations are very useful and important in reactor physics analysis, especially for computing full core power distributions, optimizing refueling strategies and analyzing fuel depletion. To reduce the computing time and accelerate convergence, a method named the Response Matrix Monte Carlo (RMMC) method, based on analog Monte Carlo simulation, was used to calculate fixed source neutron transport problems in repeated structures. To make the calculations more accurate, we put forward the RMMC method based on non-analog Monte Carlo simulation and investigate how to use the RMMC method in criticality calculations. A new hybrid RMMC and MC (RMMC+MC) method is then put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of the calculations but also simulate geometries more complex than repeated structures. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC and RMMC+MC methods can efficiently reduce the computing time and the variance of the calculations. Finally, future research directions are discussed at the end of this paper to make the RMMC and RMMC+MC methods more powerful. (authors)

  14. Study of the Transition Flow Regime using Monte Carlo Methods

    NASA Technical Reports Server (NTRS)

    Hassan, H. A.

    1999-01-01

    This NASA Cooperative Agreement presents a study of the transition flow regime using Monte Carlo methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.

  15. Observations on variational and projector Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Umrigar, C. J.

    2015-10-01

    Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.

  16. Observations on variational and projector Monte Carlo methods.

    PubMed

    Umrigar, C J

    2015-10-28

    Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed. PMID:26520496

  17. Frequency domain optical tomography using a Monte Carlo perturbation method

    NASA Astrophysics Data System (ADS)

    Yamamoto, Toshihiro; Sakamoto, Hiroki

    2016-04-01

    A frequency domain Monte Carlo method is applied to near-infrared optical tomography, where an intensity-modulated light source with a given modulation frequency is used to reconstruct optical properties. The frequency domain reconstruction technique allows for better separation between the scattering and absorption properties of inclusions, even for inverse problems that are ill-posed because of cross-talk between the scattering and absorption reconstructions. The frequency domain Monte Carlo calculation for light transport in an absorbing and scattering medium has thus far been analyzed mostly for the reconstruction of optical properties in simple layered tissues. This study applies a Monte Carlo calculation algorithm, which can handle complex-valued particle weights for solving a frequency domain transport equation, to optical tomography in two-dimensional heterogeneous tissues. The Jacobian matrix that is needed to reconstruct the optical properties is obtained by a first-order "differential operator" technique, which involves less variance than the conventional "correlated sampling" technique. The numerical examples in this paper indicate that the newly proposed Monte Carlo method provides reconstructed results for the scattering and absorption coefficients that compare favorably with the results obtained from conventional deterministic or Monte Carlo methods.

  18. Multiple-time-stepping generalized hybrid Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also make it possible to outperform the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that placing the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo improve the stability of MTS and allow larger step sizes in the simulation of complex systems.

  19. Multiple-time-stepping generalized hybrid Monte Carlo methods

    SciTech Connect

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also make it possible to outperform the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that placing the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo improve the stability of MTS and allow larger step sizes in the simulation of complex systems.

  20. Bayesian Monte Carlo Method for Nuclear Data Evaluation

    SciTech Connect

    Koning, A.J.

    2015-01-15

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using TALYS. The result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an experiment based weight.

  1. Monte Carlo method for magnetic impurities in metals

    NASA Technical Reports Server (NTRS)

    Hirsch, J. E.; Fye, R. M.

    1986-01-01

    The paper discusses a Monte Carlo algorithm to study properties of dilute magnetic alloys; the method can treat a small number of magnetic impurities interacting with the conduction electrons in a metal. Results for the susceptibility of a single Anderson impurity in the symmetric case show the expected universal behavior at low temperatures. Some results for two Anderson impurities are also discussed.

  2. Accelerated Monte Carlo Methods for Coulomb Collisions

    NASA Astrophysics Data System (ADS)

    Rosin, Mark; Dimits, Andris; Ricketson, Lee; Caflisch, Russel; Cohen, Bruce

    2012-10-01

    As an alternative to binary-collision models for simulating Coulomb collisions in the Fokker-Planck limit, we present a new numerical higher-order-accurate time integration scheme for Langevin-equation-based collisions. A Taylor-series expansion of the stochastic differential equations is used to improve upon the standard Euler time integration. Additional Milstein terms arise in the time-discretization due to both the velocity dependence of the diffusion coefficients and the aggregation of angular deflections. We introduce an accurate, easily computable direct sampling method for the multidimensional terms -- an approximation to the double integral over products of Gaussian random processes. Including these terms improves the strong convergence of the time integration of the particle trajectories from O(δt^1/2) to O(δt). This is useful both as a first step towards direct higher-order weak schemes (for computing average quantities), and as a key component in a ``multi-level'' scheme that returns a computationally efficient estimate of averaged quantities. The latter is maximally efficient, in the asymptotic sense, when used with Milstein terms, and is therefore the optimal choice of multi-level scheme. We present results showing both the improved strong convergence of the new integration method and the increased efficiency of the multi-level scheme.
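
    The Euler-versus-Milstein distinction can be demonstrated on any scalar SDE with multiplicative noise. The sketch below uses geometric Brownian motion as a stand-in for the Langevin collision equations (an assumption made purely because its exact pathwise solution is known), and compares the mean pathwise error of the two schemes; the Milstein correction term 0.5·b·b'·(ΔW² − Δt) is the one-dimensional analogue of the terms discussed in the abstract:

```python
import math, random

random.seed(0)
mu, sigma, X0, T = 0.5, 1.0, 1.0, 1.0    # dX = mu*X dt + sigma*X dW
n_steps, n_paths = 64, 2000
dt = T / n_steps
err_euler = err_milstein = 0.0
for _ in range(n_paths):
    xe = xm = X0
    W = 0.0
    for _ in range(n_steps):
        dW = random.gauss(0.0, math.sqrt(dt))
        W += dW
        xe += mu * xe * dt + sigma * xe * dW
        # Milstein adds 0.5 * b * b' * (dW^2 - dt) with b(x) = sigma*x
        xm += mu * xm * dt + sigma * xm * dW \
              + 0.5 * sigma ** 2 * xm * (dW * dW - dt)
    exact = X0 * math.exp((mu - 0.5 * sigma ** 2) * T + sigma * W)
    err_euler += abs(xe - exact)
    err_milstein += abs(xm - exact)
err_euler /= n_paths
err_milstein /= n_paths
```

    With the same Brownian increments driving all three solutions, the Milstein pathwise error is markedly smaller at this step size, reflecting the O(δt^1/2) → O(δt) improvement in strong convergence.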

  3. Monte Carlo methods for light propagation in biological tissues.

    PubMed

    Vinckenbosch, Laura; Lacaux, Céline; Tindel, Samy; Thomassin, Magalie; Obara, Tiphaine

    2015-11-01

    Light propagation in turbid media is driven by the equation of radiative transfer. We give a formal probabilistic representation of its solution in the framework of biological tissues and we implement algorithms based on Monte Carlo methods in order to estimate the quantity of light that is received by a homogeneous tissue when emitted by an optic fiber. A variance reduction method is studied and implemented, as well as a Markov chain Monte Carlo method based on the Metropolis-Hastings algorithm. The resulting estimation methods are then compared to the so-called Wang-Prahl (or Wang) method. Finally, the formal representation allows us to derive a non-linear optimization algorithm, close to Levenberg-Marquardt, that is used to estimate the scattering and absorption coefficients of the tissue from measurements. PMID:26362232
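
    The Metropolis-Hastings building block referred to above is generic: propose a move, then accept it with a probability that depends only on the ratio of target densities. A minimal random-walk sketch on a standard normal target (the target and proposal are illustrative, not the paper's light-transport density):

```python
import math, random

random.seed(42)
log_target = lambda x: -0.5 * x * x         # log-density up to a constant
x, samples = 0.0, []
for step in range(200000):
    prop = x + random.uniform(-1.0, 1.0)    # symmetric random-walk proposal
    # accept with probability min(1, target(prop)/target(x))
    if math.log(random.random()) < log_target(prop) - log_target(x):
        x = prop                            # otherwise keep the current state
    if step >= 10000:                       # discard burn-in
        samples.append(x)
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean * mean
```

    Because only a density ratio is needed, the normalizing constant of the target never has to be computed, which is what makes the algorithm attractive for transport problems.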

  4. Quantifying the effect of anode surface roughness on diagnostic x-ray spectra using Monte Carlo simulation

    SciTech Connect

    Mehranian, A.; Ay, M. R.; Alam, N. Riyahi; Zaidi, H.

    2010-02-15

    Purpose: The accurate prediction of x-ray spectra under typical conditions encountered in clinical x-ray examination procedures and the assessment of factors influencing them has been a long-standing goal of the diagnostic radiology and medical physics communities. In this work, the influence of anode surface roughness on diagnostic x-ray spectra is evaluated using MCNP4C-based Monte Carlo simulations. Methods: An image-based modeling method was used to create realistic models from surface-cracked anodes. An in-house computer program was written to model the geometric pattern of cracks and irregularities from digital images of the focal track surface in order to define the modeled anodes in the MCNP input file. To incorporate average roughness and mean crack depth into the models, the surface of the anodes was characterized by scanning electron microscopy and surface profilometry. It was found that the average roughness (R_a) in the most aged tube studied is about 50 μm. The correctness of MCNP4C in simulating diagnostic x-ray spectra was thoroughly verified by calling its Gaussian energy broadening card and comparing the simulated spectra with experimentally measured ones. The assessment of anode roughness involved the comparison of simulated spectra from deteriorated anodes with those simulated for perfectly plain anodes considered as reference. From these comparisons, the variations in output intensity, half value layer (HVL), heel effect, and patient dose were studied. Results: An intensity loss of 4.5% and 16.8% was predicted for anodes aged by 5 and 50 μm deep cracks (50 kVp, 6 deg. target angle, and 2.5 mm Al total filtration). The variations in HVL were not significant, as the spectra were not hardened by more than 2.5%; however, the trend was for this variation to increase with roughness. By deploying several point detector tallies along the anode-cathode direction and averaging exposure over them, it was found that for a 6 deg. anode, roughened by 50 μm deep cracks, the reduction in exposure is 14.9% and 13.1% for 70 and 120 kVp tube voltages, respectively. For the evaluation of patient dose, entrance skin radiation dose was calculated for typical chest x-ray examinations. It was shown that as anode roughness increases, patient entrance skin dose decreases on average by about 15%. Conclusions: It was concluded that anode surface roughness can have a non-negligible effect on the output spectra of aged x-ray tubes and its impact should be carefully considered in diagnostic x-ray imaging modalities.

  5. A separable shadow Hamiltonian hybrid Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Sweet, Christopher R.; Hampton, Scott S.; Skeel, Robert D.; Izaguirre, Jesús A.

    2009-11-01

    Hybrid Monte Carlo (HMC) is a rigorous sampling method that uses molecular dynamics (MD) as a global Monte Carlo move. The acceptance rate of HMC decays exponentially with system size. The shadow hybrid Monte Carlo (SHMC) was previously introduced to reduce this performance degradation by sampling instead from the shadow Hamiltonian defined for MD when using a symplectic integrator. SHMC's performance is limited by the need to generate momenta for the MD step from a nonseparable shadow Hamiltonian. We introduce the separable shadow Hamiltonian hybrid Monte Carlo (S2HMC) method based on a formulation of the leapfrog/Verlet integrator that corresponds to a separable shadow Hamiltonian, which allows efficient generation of momenta. S2HMC gives the acceptance rate of a fourth order integrator at the cost of a second-order integrator. Through numerical experiments we show that S2HMC consistently gives a speedup greater than two over HMC for systems with more than 4000 atoms for the same variance. By comparison, SHMC gave a maximum speedup of only 1.6 over HMC. S2HMC has the additional advantage of not requiring any user parameters beyond those of HMC. S2HMC is available in the program PROTOMOL 2.1. A Python version, adequate for didactic purposes, is also in MDL (http://mdlab.sourceforge.net/s2hmc).
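
    For readers unfamiliar with the baseline being improved, plain HMC with a leapfrog (velocity Verlet) integrator can be sketched on a one-dimensional Gaussian target, U(q) = q²/2. S2HMC's separable shadow Hamiltonian and reweighting are not shown; this is only the underlying HMC move, with illustrative step size and trajectory length:

```python
import math, random

random.seed(7)

def leapfrog(q, p, eps, n):
    """n leapfrog steps for H = p^2/2 + q^2/2 (grad U(q) = q)."""
    p -= 0.5 * eps * q              # initial half kick
    for _ in range(n - 1):
        q += eps * p                # drift
        p -= eps * q                # full kick
    q += eps * p
    p -= 0.5 * eps * q              # final half kick
    return q, p

q, samples, accepted = 0.0, [], 0
n_iter = 20000
for _ in range(n_iter):
    p = random.gauss(0.0, 1.0)                    # fresh momentum each move
    H0 = 0.5 * p * p + 0.5 * q * q
    q_new, p_new = leapfrog(q, p, 0.2, 10)
    H1 = 0.5 * p_new * p_new + 0.5 * q_new * q_new
    if math.log(random.random()) < H0 - H1:       # Metropolis test on dH
        q = q_new
        accepted += 1
    samples.append(q)
rate = accepted / n_iter
mean = sum(samples) / n_iter
var = sum(s * s for s in samples) / n_iter - mean * mean
```

    Because the leapfrog integrator nearly conserves a shadow Hamiltonian, the energy error ΔH stays small and the acceptance rate stays high; sampling the shadow Hamiltonian directly, as SHMC and S2HMC do, exploits this conservation instead of fighting it.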

  6. The All Particle Monte Carlo method: Atomic data files

    SciTech Connect

    Rathkopf, J.A.; Cullen, D.E.; Perkins, S.T.

    1990-11-06

    Development of the All Particle Method, a project to simulate the transport of particles via the Monte Carlo method, has proceeded on two fronts: data collection and algorithm development. In this paper we report on the status of the data libraries. The data collection is nearly complete with the addition of electron, photon, and atomic data libraries to the existing neutron, gamma ray, and charged particle libraries. The contents of these libraries are summarized.

  7. Bayesian Monte Carlo method for nuclear data evaluation

    NASA Astrophysics Data System (ADS)

    Koning, A. J.

    2015-12-01

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which makes it possible to set the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an EXFOR-based weight.
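
    The experiment-based weighting scheme can be sketched in miniature: draw a spread of model predictions (standing in for TALYS random files), weight each by its Gaussian likelihood against one hypothetical experimental point, and form weighted averages. All numbers below are assumptions for illustration; real EXFOR weighting involves full chi-square sums over many data sets:

```python
import math, random

random.seed(6)
x_exp, sig_exp = 1.00, 0.05                 # hypothetical datum and uncertainty
n_files = 5000
# prior model spread, one scalar prediction per "random file"
preds = [random.gauss(1.2, 0.3) for _ in range(n_files)]
# likelihood weight per file, w = exp(-chi^2/2) for a single point
weights = [math.exp(-0.5 * ((p - x_exp) / sig_exp) ** 2) for p in preds]
wsum = sum(weights)
post_mean = sum(w * p for w, p in zip(weights, preds)) / wsum
```

    The weighted mean is pulled from the prior center (1.2) almost all the way to the experimental value, exactly as the zooming-in described above intends; the same weights can be reused to build a weighted covariance matrix.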

  8. Uncertainties in external dosimetry: analytical vs. Monte Carlo method.

    PubMed

    Behrens, R

    2010-03-01

    Over the years, the International Commission on Radiological Protection (ICRP) and other organisations have formulated recommendations regarding uncertainty in occupational dosimetry. The most practical and widely accepted recommendations are the trumpet curves. To check whether routine dosemeters comply with them, a Technical Report on uncertainties issued by the International Electrotechnical Commission (IEC) can be used. In this report, the analytical method is applied to assess the uncertainty of a dosemeter fulfilling an IEC standard. On the other hand, the Monte Carlo method can be used to assess the uncertainty. In this work, a direct comparison of the analytical and the Monte Carlo methods is performed using the same input data. It turns out that the analytical method generally overestimates the uncertainty by about 10-30 %. Therefore, the results often do not comply with the recommendations of the ICRP regarding uncertainty. The results of the more realistic uncertainty evaluation using the Monte Carlo method usually comply with the recommendations of the ICRP. This is confirmed by results seen in regular tests in Germany. PMID:19942627
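
    The two approaches being compared can be shown side by side on a toy model. For a product f = a·b with independent Gaussian inputs (values assumed for illustration, not the IEC dosemeter model), first-order analytical propagation adds relative uncertainties in quadrature, while the Monte Carlo method simply samples the inputs and takes the standard deviation of the outputs:

```python
import math, random

random.seed(3)
a, ua = 10.0, 0.4                 # input a with standard uncertainty ua
b, ub = 2.0, 0.1                  # input b with standard uncertainty ub
f = a * b
# analytical first-order propagation for f = a*b
uf_analytic = f * math.sqrt((ua / a) ** 2 + (ub / b) ** 2)
# Monte Carlo propagation
n = 200000
vals = [random.gauss(a, ua) * random.gauss(b, ub) for _ in range(n)]
m = sum(vals) / n
uf_mc = math.sqrt(sum((v - m) ** 2 for v in vals) / (n - 1))
```

    For this nearly linear model the two estimates agree closely; the 10-30 % overestimates reported above arise for the more complicated, less linear dosemeter response models, where the first-order approximation is less adequate.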

  9. A new method for commissioning Monte Carlo treatment planning systems

    NASA Astrophysics Data System (ADS)

    Aljarrah, Khaled Mohammed

    2005-11-01

    The Monte Carlo method is an accurate method for solving numerical problems in different fields. It has been used for accurate radiation dose calculation in radiation treatment of cancer. However, the modeling of the individual radiation beam produced by a medical linear accelerator for Monte Carlo dose calculation, i.e., the commissioning of a Monte Carlo treatment planning system, has been the bottleneck for the clinical implementation of Monte Carlo treatment planning. In this study a new method has been developed to determine the parameters of the initial electron beam incident on the target for a clinical linear accelerator. The interaction of the initial electron beam with the accelerator target produces x rays and secondary charged particles. After successive interactions in the linac head components, the x-ray photons and the secondary charged particles interact with the patient's anatomy and deliver dose to the region of interest. The determination of the initial electron beam parameters is important for estimating the dose delivered to patients. These parameters, such as beam energy and radial intensity distribution, are usually estimated through a trial and error process. In this work an easy and efficient method was developed to determine these parameters. This was accomplished by comparing calculated 3D dose distributions, for a grid of assumed beam energies and radii, with measured data in a water phantom. Different cost functions were studied to choose the appropriate function for the data comparison, and the beam parameters were determined accordingly. Assuming that linacs of the same type share the same geometry and differ only in their initial phase space parameters, the results of this method can serve as source data for commissioning other machines of the same type.

  10. Quasicontinuum Monte Carlo: A method for surface growth simulations

    NASA Astrophysics Data System (ADS)

    Russo, G.; Sander, L. M.; Smereka, P.

    2004-03-01

    We introduce an algorithm for treating growth on surfaces which combines important features of continuum methods (such as the level-set method) and kinetic Monte Carlo (KMC) simulations. We treat the motion of adatoms in continuum theory, but attach them to islands one atom at a time. The technique is borrowed from the dielectric breakdown model. Our method allows us to give a realistic account of fluctuations in island shape, which is lacking in deterministic continuum treatments and which is an important physical effect. Our method should be most important for problems close to equilibrium where KMC becomes impractically slow.

  11. Parallel Monte Carlo Synthetic Acceleration methods for discrete transport problems

    NASA Astrophysics Data System (ADS)

    Slattery, Stuart R.

    This work researches and develops Monte Carlo Synthetic Acceleration (MCSA) methods as a new class of solution techniques for discrete neutron transport and fluid flow problems. Monte Carlo Synthetic Acceleration methods use a traditional Monte Carlo process to approximate the solution to the discrete problem as a means of accelerating traditional fixed-point methods. To apply these methods to neutronics and fluid flow and determine the feasibility of these methods on modern hardware, three complementary research and development exercises are performed. First, solutions to the SPN discretization of the linear Boltzmann neutron transport equation are obtained using MCSA with a difficult criticality calculation for a light water reactor fuel assembly used as the driving problem. To enable MCSA as a solution technique a group of modern preconditioning strategies are researched. MCSA when compared to conventional Krylov methods demonstrated improved iterative performance over GMRES by converging in fewer iterations when using the same preconditioning. Second, solutions to the compressible Navier-Stokes equations were obtained by developing the Forward-Automated Newton-MCSA (FANM) method for nonlinear systems based on Newton's method. Three difficult fluid benchmark problems in both convective and driven flow regimes were used to drive the research and development of the method. For 8 out of 12 benchmark cases, it was found that FANM had better iterative performance than the Newton-Krylov method by converging the nonlinear residual in fewer linear solver iterations with the same preconditioning. Third, a new domain decomposed algorithm to parallelize MCSA aimed at leveraging leadership-class computing facilities was developed by utilizing parallel strategies from the radiation transport community. The new algorithm utilizes the Multiple-Set Overlapping-Domain strategy in an attempt to reduce parallel overhead and add a natural element of replication to the algorithm. It was found that for the current implementation of MCSA, both weak and strong scaling improved on that observed for production implementations of Krylov methods.
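
    At the heart of MCSA is a Monte Carlo estimator for linear systems: writing the system as x = Hx + b, the Neumann series x = b + Hb + H²b + … is sampled by random walks. A minimal forward-walk sketch with uniform transition probabilities on a hypothetical 2x2 system (this is the classical Neumann-Ulam idea, not the preconditioned production solver described above):

```python
import random

def mc_linear_solve(H, b, n_walks=20000, p_absorb=0.5, rng=random):
    """Estimate the solution of x = H x + b by random walks.

    Each walk samples one realization of the Neumann series; uniform
    transitions with a weight correction keep the estimator unbiased.
    """
    n = len(b)
    x = []
    for i in range(n):
        total = 0.0
        for _ in range(n_walks):
            state, w, est = i, 1.0, b[i]
            while rng.random() > p_absorb:                  # continue walk
                j = rng.randrange(n)                        # uniform transition
                w *= H[state][j] / ((1.0 - p_absorb) / n)   # importance correction
                state = j
                est += w * b[state]
            total += est
        x.append(total / n_walks)
    return x

random.seed(11)
H = [[0.1, 0.2], [0.3, 0.1]]
b = [1.0, 1.0]
x = mc_linear_solve(H, b)   # exact solution of (I - H) x = b: [22/15, 8/5]
```

    MCSA uses such noisy estimates not as the final answer but to accelerate a deterministic fixed-point iteration, so the statistical error is damped rather than accumulated.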

  12. Cluster Monte Carlo methods for the FePt Hamiltonian

    NASA Astrophysics Data System (ADS)

    Lyberatos, A.; Parker, G. J.

    2016-02-01

    Cluster Monte Carlo methods for the classical spin Hamiltonian of FePt with long range exchange interactions are presented. We use a combination of the Swendsen-Wang (or Wolff) and Metropolis algorithms that satisfies the detailed balance condition and ergodicity. The algorithms are tested by calculating the temperature dependence of the magnetization, susceptibility and heat capacity of L10-FePt nanoparticles in a range including the critical region. The cluster models yield numerical results in good agreement, within statistical error, with the standard single-spin-flip Monte Carlo method. The variation of the spin autocorrelation time with grain size is used to deduce the dynamic exponent of the algorithms. Our cluster models do not provide a more accurate estimate of the magnetic properties at equilibrium.
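
    The single-cluster (Wolff) move is simple to state for a nearest-neighbour Ising model: grow a cluster from a random seed by adding aligned neighbours with probability 1 − exp(−2βJ), then flip the whole cluster. The sketch below is for that plain nearest-neighbour case on a small ordered lattice; the FePt Hamiltonian above adds long-range exchange, which changes the bond-activation step:

```python
import math, random

random.seed(5)
L, beta = 8, 1.0                         # 8x8 lattice, ordered phase (beta_c ~ 0.44)
spin = [[1] * L for _ in range(L)]
p_add = 1.0 - math.exp(-2.0 * beta)      # bond-activation probability, J = 1

def wolff_step():
    i, j = random.randrange(L), random.randrange(L)
    s0 = spin[i][j]
    cluster = {(i, j)}
    stack = [(i, j)]
    while stack:                          # grow cluster over aligned neighbours
        a, c = stack.pop()
        for da, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            na, nc = (a + da) % L, (c + dc) % L
            if (na, nc) not in cluster and spin[na][nc] == s0 \
                    and random.random() < p_add:
                cluster.add((na, nc))
                stack.append((na, nc))
    for a, c in cluster:                  # flip the whole cluster at once
        spin[a][c] = -s0

mags = []
for step in range(2000):
    wolff_step()
    if step >= 500:                       # skip equilibration
        mags.append(abs(sum(sum(row) for row in spin)) / (L * L))
m_mean = sum(mags) / len(mags)
```

    Flipping whole clusters rather than single spins is what suppresses critical slowing down near the transition, the motivation for the cluster study above.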

  13. Calculations of pair production by Monte Carlo methods

    SciTech Connect

    Bottcher, C.; Strayer, M.R.

    1991-01-01

    We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. To answer these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs.
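
    The core task described, evaluating multi-dimensional integrals by sampling, can be shown in miniature on an integral with a known answer (a five-dimensional toy, not a Feynman diagram): the integral of (x₁ + … + x₅)² over the unit hypercube equals Var(Σxᵢ) + (E[Σxᵢ])² = 5/12 + 2.5².

```python
import random

random.seed(2)
d, n = 5, 200000
total = total_sq = 0.0
for _ in range(n):
    s = sum(random.random() for _ in range(d))   # uniform point in [0,1]^5
    v = s * s                                    # integrand value
    total += v
    total_sq += v * v
estimate = total / n
# standard error of the mean, decaying as 1/sqrt(n) independent of dimension
std_err = ((total_sq / n - estimate ** 2) / n) ** 0.5
```

    The 1/sqrt(n) error scaling, independent of dimension, is why Monte Carlo is the method of choice for the high-dimensional integrals arising from Feynman diagrams.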

  14. Monte Carlo Methods and Applications for the Nuclear Shell Model

    SciTech Connect

    Dean, D.J.; White, J.A.

    1998-08-10

    The shell-model Monte Carlo (SMMC) technique transforms the traditional nuclear shell-model problem into a path-integral over auxiliary fields. We describe below the method and its applications to four physics issues: calculations of sd-pf-shell nuclei, a discussion of electron-capture rates in pf-shell nuclei, exploration of pairing correlations in unstable nuclei, and level densities in rare earth systems.

  15. MONTE CARLO ERROR ESTIMATION APPLIED TO NONDESTRUCTIVE ASSAY METHODS

    SciTech Connect

Estep, R.; et al.

    2000-06-01

    Monte Carlo randomization of nuclear counting data into N replicate sets is the basis of a simple and effective method for estimating error propagation through complex analysis algorithms such as those using neural networks or tomographic image reconstructions. The error distributions of properly simulated replicate data sets mimic those of actual replicate measurements and can be used to estimate the std. dev. for an assay along with other statistical quantities. We have used this technique to estimate the standard deviation in radionuclide masses determined using the tomographic gamma scanner (TGS) and combined thermal/epithermal neutron (CTEN) methods. The effectiveness of this approach is demonstrated by a comparison of our Monte Carlo error estimates with the error distributions in actual replicate measurements and simulations of measurements. We found that the std. dev. estimated this way quickly converges to an accurate value on average and has a predictable error distribution similar to N actual repeat measurements. The main drawback of the Monte Carlo method is that N additional analyses of the data are required, which may be prohibitively time consuming with slow analysis algorithms.
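The replicate-randomization idea described above can be sketched directly: resample the measured counts with counting statistics, push each replicate through the analysis, and take the spread. A hedged toy version (the four-channel data, the background convention and the "assay" function are hypothetical, and a normal approximation stands in for Poisson sampling, which is adequate for large counts):

```python
import math
import random
import statistics

def poisson_like(lam, rng):
    """Randomised count with Poisson-like statistics; the normal approximation
    used here is adequate for the large mean counts typical of assay data."""
    return max(0, round(rng.gauss(lam, math.sqrt(lam))))

def replicate_error(counts, analysis, n_rep=200, seed=0):
    """Monte Carlo error estimate: randomise the measured counts into n_rep
    statistical replicates, push each replicate through the analysis
    algorithm, and report the mean and standard deviation of the results."""
    rng = random.Random(seed)
    results = [analysis([poisson_like(c, rng) for c in counts])
               for _ in range(n_rep)]
    return statistics.mean(results), statistics.stdev(results)

# Hypothetical assay: 'mass' proportional to background-corrected total counts.
measured = [2050, 1980, 2110, 990]          # last channel is a background estimate
assay = lambda ch: 0.01 * (sum(ch[:3]) - 3 * ch[3])
mean_mass, sigma_mass = replicate_error(measured, assay)
```

The point of the technique is that `analysis` can be arbitrarily complex (a neural network, a tomographic reconstruction); the error estimate never needs to differentiate through it.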

  16. Novel extrapolation method in the Monte Carlo shell model

    SciTech Connect

    Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio

    2010-12-15

    We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full pf-shell calculation of {sup 56}Ni, and the applicability of the method to a system beyond the current limit of exact diagonalization is shown for the pf+g{sub 9/2}-shell calculation of {sup 64}Ge.
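The energy-variance extrapolation idea can be illustrated outside the shell model: for a family of trial states, plot the variational energy ⟨H⟩ against the energy variance ⟨H²⟩ − ⟨H⟩² and extrapolate linearly to zero variance, where an exact eigenstate lives. A hedged toy sketch with a small hand-made matrix (the Hamiltonian and trial states are arbitrary, not the deformed-Slater-determinant machinery of the paper):

```python
def energy_and_variance(H, v):
    """<H> and the energy variance <H^2> - <H>^2 for a normalised trial vector."""
    n = len(v)
    norm = sum(x * x for x in v) ** 0.5
    v = [x / norm for x in v]
    Hv = [sum(H[i][j] * v[j] for j in range(n)) for i in range(n)]
    e = sum(a * b for a, b in zip(v, Hv))
    return e, sum(a * a for a in Hv) - e * e

# Toy 'Hamiltonian' and a family of trial states approaching its ground state
H = [[-2.0, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 1.0]]
points = [energy_and_variance(H, [1.0, t, t * t]) for t in (-0.1, -0.2, -0.3)]

# Least-squares line through (variance, energy), evaluated at zero variance
xs, ys = [p[1] for p in points], [p[0] for p in points]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
e_extrapolated = ybar - slope * xbar
```

The extrapolated energy lies below every individual variational energy, closer to the exact ground-state eigenvalue of H (about -2.12 for this matrix), which is the whole point of the variance extrapolation.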

  17. On Monte Carlo Methods and Applications in Geoscience

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Blais, J.

    2009-05-01

Monte Carlo methods are designed to study various deterministic problems using probabilistic approaches, and with computer simulations to explore much wider possibilities for the different algorithms. Pseudo-Random Number Generators (PRNGs) are based on linear congruences of some large prime numbers, while Quasi-Random Number Generators (QRNGs) provide low-discrepancy sequences, both of which give uniformly distributed numbers in (0,1). Chaotic Random Number Generators (CRNGs) give sequences of 'random numbers' satisfying some prescribed probabilistic density, often denser around the two corners of the interval (0,1), but transforming this type of density to a uniform one is usually possible. Markov Chain Monte Carlo (MCMC), as indicated by its name, is associated with Markov chain simulations. Basic descriptions of these random number generators will be given, and a comparative analysis of these four methods will be included based on their efficiencies and other characteristics. Some applications in geoscience using Monte Carlo simulations will be described, and a comparison of these algorithms will also be included, with some concluding remarks.
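The PRNG/QRNG contrast drawn above is easy to demonstrate. A hedged sketch comparing a pseudo-random sample with a van der Corput/Halton low-discrepancy sequence on the classic quarter-circle estimate of pi (the sample size is an arbitrary choice):

```python
import math
import random

def van_der_corput(n, base):
    """n-th term of the base-b van der Corput low-discrepancy sequence in (0,1)."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def pi_estimate(points):
    """Quarter-circle hit-or-miss estimate of pi from points in the unit square."""
    inside = sum(1 for x, y in points if x * x + y * y <= 1.0)
    return 4.0 * inside / len(points)

random.seed(2)
N = 4000
pseudo = [(random.random(), random.random()) for _ in range(N)]
# A 2D Halton point set: van der Corput sequences in the coprime bases 2 and 3
quasi = [(van_der_corput(i, 2), van_der_corput(i, 3)) for i in range(1, N + 1)]
err_pseudo = abs(pi_estimate(pseudo) - math.pi)
err_quasi = abs(pi_estimate(quasi) - math.pi)
```

Both sequences fill (0,1)² uniformly, but the low-discrepancy points avoid the clustering and gaps of pseudo-random draws, which typically tightens the integration error for the same N.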

  18. Extension of the fully coupled Monte Carlo/S sub N response matrix method to problems including upscatter and fission

    SciTech Connect

Baker, R.S.; Filippone, W.F.; Alcouffe, R.E.

    1991-01-01

The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S{sub N}) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S{sub N} regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The fully coupled Monte Carlo/S{sub N} technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S{sub N} calculation is to be performed. The Monte Carlo and S{sub N} regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and group sources. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S{sub N} is well suited for by itself. The fully coupled Monte Carlo/S{sub N} method has been implemented in the S{sub N} code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and group sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S{sub N} code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating the S{sub N} calculations. The Monte Carlo routines have been successfully vectorized, with approximately a factor of five increase in speed over the nonvectorized version. The hybrid method is capable of solving forward, inhomogeneous source problems in X-Y and R-Z geometries. This capability now includes multigroup problems involving upscatter and fission in non-highly multiplying systems. 8 refs., 8 figs., 1 tab.

  19. On the efficiency of algorithms of Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Budak, V. P.; Zheltov, V. S.; Lubenchenko, A. V.; Shagalov, O. V.

    2015-11-01

A numerical comparison of algorithms for solving the radiative transfer equation by the Monte Carlo method is performed for direct simulation and local estimation. The problem of radiative transfer through a turbid medium slab is considered in both the scalar and vector cases, including reflections from the boundaries of the medium. The calculations are performed over a wide range of medium parameters. It is shown that, for the same accuracy, the local estimation method requires one to two orders of magnitude less calculation time.

  20. Stabilized multilevel Monte Carlo method for stiff stochastic differential equations

    SciTech Connect

Abdulle, Assyr; Blumenthal, Adrian

    2013-10-15

A multilevel Monte Carlo (MLMC) method for mean-square stable stochastic differential equations with multiple scales is proposed. For such problems, which we call stiff, the performance of MLMC methods based on classical explicit methods deteriorates because the time-step restriction needed to resolve the fastest scales prevents exploiting all the levels of the MLMC approach. We show that by switching to explicit stabilized stochastic methods and balancing the stabilization procedure simultaneously with the hierarchical sampling strategy of MLMC methods, the computational cost for stiff systems is significantly reduced, while keeping the computational algorithm fully explicit and easy to implement. Numerical experiments on linear and nonlinear stochastic differential equations and on a stochastic partial differential equation illustrate the performance of the stabilized MLMC method and corroborate our theoretical findings.
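The MLMC estimator itself (independent of the stabilization question studied in the paper) can be sketched for a non-stiff toy problem. A hedged version using coupled Euler-Maruyama paths of geometric Brownian motion (the drift, volatility, level schedule and sample counts are illustrative choices):

```python
import math
import random

def euler_pair(T, nf, rng, x0=1.0, mu=0.05, sigma=0.2):
    """Coupled fine/coarse Euler-Maruyama paths for dX = mu*X dt + sigma*X dW.
    The coarse path (nf/2 steps) reuses the fine path's Brownian increments;
    this coupling is what makes the level corrections low-variance."""
    hf = T / nf
    xf = xc = x0
    dw_coarse = 0.0
    for i in range(nf):
        dw = rng.gauss(0.0, math.sqrt(hf))
        xf += mu * xf * hf + sigma * xf * dw
        dw_coarse += dw
        if i % 2 == 1:                     # two fine steps = one coarse step
            xc += mu * xc * (2.0 * hf) + sigma * xc * dw_coarse
            dw_coarse = 0.0
    return xf, xc

def mlmc(T, levels, samples, seed=0):
    """Multilevel estimate of E[X_T]: coarsest-level mean plus correction terms.
    Most samples are spent on the cheap coarse level, few on the fine ones."""
    rng = random.Random(seed)
    estimate = 0.0
    for l, n in zip(levels, samples):
        acc = 0.0
        for _ in range(n):
            xf, xc = euler_pair(T, 2 ** l, rng)
            acc += xf if l == levels[0] else xf - xc
        estimate += acc / n
    return estimate

# For this geometric Brownian motion, E[X_1] = exp(mu) = exp(0.05) ~ 1.0513
est = mlmc(T=1.0, levels=[1, 2, 3, 4], samples=[20000, 5000, 2000, 1000])
```

The telescoping sum E[P_1] + Σ E[P_l − P_{l−1}] reproduces the finest-level expectation, but almost all samples are drawn on the cheapest level. The paper's contribution is replacing the explicit integrator with a stabilized one so this sampling schedule survives stiffness.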

  1. Improved criticality convergence via a modified Monte Carlo iteration method

    SciTech Connect

    Booth, Thomas E; Gubernatis, James E

    2009-01-01

Nuclear criticality calculations with Monte Carlo codes are normally done using a power iteration method to obtain the dominant eigenfunction and eigenvalue. In the last few years it has been shown that the power iteration method can be modified to obtain the first two eigenfunctions. This modified power iteration method directly subtracts out the second eigenfunction and thus only powers out the third and higher eigenfunctions. The result is that the convergence rate to the dominant eigenfunction becomes |k{sub 3}|/k{sub 1} instead of |k{sub 2}|/k{sub 1}. One difficulty is that the second eigenfunction contains particles of both positive and negative weight that must be summed in a way that preserves the second eigenfunction. Summing negative and positive weights can be done using point detector mechanics, but this can sometimes be quite slow. We show that an approximate cancellation scheme is sufficient to accelerate the convergence to the dominant eigenfunction. A second difficulty is that for some problems the Monte Carlo implementation of the modified power method has stability problems. We also show that a simple method deals with this in an effective, though ad hoc, manner.
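The convergence-rate claim above has a simple dense-matrix analogue (this is not the Monte Carlo transport implementation with its weight-cancellation mechanics; explicit deflation of a known second eigenvector stands in for it, and the matrix is an arbitrary toy):

```python
def power_iteration(A, v, iters):
    """Power iteration: v converges to the dominant eigenvector at rate |k2/k1|,
    and the returned norm converges to the dominant eigenvalue k1."""
    norm = 1.0
    for _ in range(iters):
        w = [sum(a * x for a, x in zip(row, v)) for row in A]
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    return norm, v

# Diagonal 3x3 example with eigenvalues 1.0, 0.9, 0.5: the error decays like 0.9^n.
A = [[1.0, 0.0, 0.0], [0.0, 0.9, 0.0], [0.0, 0.0, 0.5]]
k1, v1 = power_iteration(A, [1.0, 1.0, 1.0], 200)

# Matrix analogue of the modified method: subtract the second eigenmode so the
# iteration converges at the faster rate |k3/k1| = 0.5 instead of 0.9.
e2 = [0.0, 1.0, 0.0]
A_deflated = [[A[i][j] - 0.9 * e2[i] * e2[j] for j in range(3)] for i in range(3)]
k1_fast, _ = power_iteration(A_deflated, [1.0, 1.0, 1.0], 30)
```

With the second mode removed, 30 iterations achieve what takes roughly 200 otherwise; the transport analogue is harder precisely because the second eigenfunction is only known through noisy positive- and negative-weight particles.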

  2. Variational Monte Carlo method for electron-phonon coupled systems

    NASA Astrophysics Data System (ADS)

    Ohgoe, Takahiro; Imada, Masatoshi

    2014-05-01

We develop a variational Monte Carlo (VMC) method for electron-phonon coupled systems. The VMC method has been extensively used for investigating strongly correlated electrons over the last decades. However, its application to electron-phonon coupled systems has been severely restricted because of the large Hilbert space involved. Here, we propose a variational wave function with a large number of variational parameters that is suitable and tractable for systems with electron-phonon coupling. In the proposed wave function, we implement an unexplored electron-phonon correlation factor, which takes into account the effect of the entanglement between electrons and phonons. The method is applied to systems with diagonal electron-phonon interactions, i.e., interactions between charge densities and lattice displacements (phonons). As benchmarks, we compare VMC results with previous results obtained by exact diagonalization, the Green's function Monte Carlo method and the density matrix renormalization group for the Holstein and Holstein-Hubbard models. From these benchmarks, we show that the present method offers an efficient way to treat strongly coupled electron-phonon systems.

  3. EchoSeed Model 6733 Iodine-125 brachytherapy source: Improved dosimetric characterization using the MCNP5 Monte Carlo code

    SciTech Connect

    Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S.

    2012-08-15

    This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 {sup 125}I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte-Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions together with a published {sup 125}I spectrum were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water material in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGyh{sup -1} U{sup -1} ({+-}1.73%) and 0.965 cGyh{sup -1} U{sup -1} ({+-}1.68%), respectively. Overall, the MCNP5 derived radial dose and 2D anisotropy functions results were generally closer to the measured data (within {+-}4%) than MCNP4c and the published data for PTRAN code (Version 7.43), while the opposite was seen for dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.

  4. Direct simulation Monte Carlo method with a focal mechanism algorithm

    NASA Astrophysics Data System (ADS)

    Rachman, Asep Nur; Chung, Tae Woong; Yoshimoto, Kazuo; Yun, Sukyoung

    2015-01-01

To simulate observations of the radiation pattern of an earthquake, the direct simulation Monte Carlo (DSMC) method is modified by implanting a focal mechanism algorithm. We compare the results of the modified DSMC method (DSMC-2) with those of the original method (DSMC-1). DSMC-2 gives results as reliable as, or more reliable than, those of DSMC-1 for events with 12 or more recording stations, by applying double weighting for hypocentral distances of less than 80 km. Not only the number of stations but also other factors, such as rough topography, event magnitude, and the analysis method, influence the reliability of DSMC-2. The most reliable DSMC-2 results are obtained when the largest number of stations provides the best azimuthal coverage. The DSMC-2 method requires shorter time steps and a larger number of particles than DSMC-1 to capture a sufficient number of arriving particles in the small-sized receiver.

  5. Analysis of real-time networks with monte carlo methods

    NASA Astrophysics Data System (ADS)

    Mauclair, C.; Durrieu, G.

    2013-12-01

Communication networks in embedded systems are ever larger and more complex. A better understanding of the dynamics of these networks is necessary to use them well and at lower cost. Today's tools can compute upper bounds on the end-to-end delay that a packet sent through the network could suffer. However, in the case of asynchronous networks, these worst end-to-end delay (WEED) cases are rarely observed in practice or in simulations, because the situations that lead to worst-case scenarios are rare. A novel approach based on Monte Carlo methods is suggested to study the effects of asynchrony on performance.
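The gap between the analytic worst-case bound and the delays actually observed can be shown with a tiny Monte Carlo sketch (the three-hop path, its latencies and the uniform jitter model are hypothetical, not taken from the paper):

```python
import random

def end_to_end_delays(hops, n, seed=0):
    """Monte Carlo sampling of end-to-end delay: each hop contributes a fixed
    base latency plus a random queuing delay (a hypothetical uniform jitter)."""
    rng = random.Random(seed)
    return sorted(sum(base + rng.uniform(0.0, jitter) for base, jitter in hops)
                  for _ in range(n))

# Three-hop path: (base latency, max queuing jitter) in milliseconds.
path = [(1.0, 0.5), (2.0, 1.5), (0.5, 0.2)]
delays = end_to_end_delays(path, n=50000)
worst_case = sum(base + jitter for base, jitter in path)   # analytic bound: 5.7 ms
p999 = delays[int(0.999 * len(delays))]                    # observed 99.9th percentile
```

Even the 99.9th percentile sits below the worst-case bound, because the bound requires every hop to be maximally delayed at once; this is exactly the "worst cases are rarely observed" effect the record describes.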

  6. Application of Monte Carlo methods in tomotherapy and radiation biophysics

    NASA Astrophysics Data System (ADS)

    Hsiao, Ya-Yun

Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first part of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The percentage depth dose and beam profiles computed using the source model agree within 2% of measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial electron spectrum is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University). Single- and double-strand break yields predicted by the proposed methodology are in good agreement (within 1%) with the results of published experimental and theoretical studies for 60Co gamma-rays and low-energy x-rays. The reported studies provide new information about the potential biological consequences of diagnostic x-rays and selected gamma-emitting radioisotopes used in brachytherapy for the treatment of cancer. The proposed methodology is computationally efficient and may also be useful in proton therapy, space applications or internal dosimetry.

  7. Fast and accurate Monte Carlo modeling of a kilovoltage X-ray therapy unit using a photon-source approximation for treatment planning in complex media

    PubMed Central

    Zeinali-Rafsanjani, B.; Mosleh-Shirazi, M. A.; Faghihi, R.; Karbasi, S.; Mosalaei, A.

    2015-01-01

To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, half-value layer (HVL), percentage depth doses (PDDs) and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for the majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduction in run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for its use in treatment planning and radiobiological modeling studies of superficial therapies, including chest-wall irradiation using a kilovoltage beam. PMID:26170553

  9. Path-Integral Monte Carlo Methods for Ultrasmall Device Modeling

    NASA Astrophysics Data System (ADS)

    Register, Leonard Franklin, II

Monte Carlo methods based on the Feynman path-integral (FPI) formulation of quantum mechanics are developed for modeling ultrasmall device structures. A brief introduction to pertinent aspects of the FPI formalism is given. A practical "path-integral Monte Carlo" (PIMC) method for modeling equilibrium properties in ultrasmall devices is described and used to perform representative calculations of equilibrium properties of carriers confined in ultrasmall device structures, absent carrier-phonon coupling. As a spinoff of the PIMC research, but without employing the FPI formalism, calculations of carrier-localized-phonon scattering rates for single and double optical-phonon-mode alloy semiconductors are performed, without assuming any specific functional form or degree of phonon localization. With the help of insights gained in the analysis of alloy semiconductors, the influence functional (the mathematical device for including carrier-phonon coupling effects in the FPI formalism) for carrier-polar-optical-phonon coupling is reexamined and generalized for heterostructures. An improved numerical method for evaluating the influence functional is developed. As a representative example of PIMC analysis of carrier-phonon coupling effects, calculations of carrier self-energies (the energy shift of carriers that results from coupling to phonons) in single crystals and quantum wires as a function of temperature are performed, though using the less general form of the influence functional for convenience and to allow comparison to previous results. A PIMC method appropriate for studying the short-time transient behavior of carriers in ultrasmall devices is developed, examples are given, and the method is compared to other transient-time PIMC methods. Based on this work, recommendations for future research are made.

  10. Importance sampling based direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Vedula, Prakash; Otten, Dustin

    2010-11-01

We propose a novel and efficient approach, termed importance sampling based direct simulation Monte Carlo (ISDSMC), for prediction of nonequilibrium flows via solution of the Boltzmann equation. Besides leading to a reduction in computational cost, ISDSMC also results in a reduction in statistical scatter compared to conventional direct simulation Monte Carlo (DSMC) and hence appears to be potentially useful for prediction of a variety of flows, especially where the signal-to-noise ratio is small (e.g., microflows). In this particle-in-cell approach, the computational particles are initially assigned weights (or importance) based on constraints on generalized moments of velocity. Solution of the Boltzmann equation is achieved by use of (i) a streaming operator, which streams the computational particles, and (ii) a collision operator, where the representative collision pairs are selected stochastically based on particle weights via an acceptance-rejection algorithm. The performance of the ISDSMC approach is evaluated using analysis of three canonical microflows, namely (i) thermal Couette flow, (ii) velocity-slip Couette flow and (iii) Poiseuille flow. Our results based on ISDSMC indicate good agreement with those obtained from conventional DSMC methods. The potential advantages of this (ISDSMC) approach to granular flows are also demonstrated using simulations of homogeneous relaxation of a granular gas.

  11. Explicitly restarted Arnoldi's method for Monte Carlo nuclear criticality calculations

    NASA Astrophysics Data System (ADS)

    Conlin, Jeremy Lloyd

A Monte Carlo implementation of explicitly restarted Arnoldi's method is developed for estimating eigenvalues and eigenvectors of the transport-fission operator in the Boltzmann transport equation. Arnoldi's method is an improvement over the power method, which has been used for decades. Arnoldi's method can estimate multiple eigenvalues by orthogonalizing the resulting fission sources from the application of the transport-fission operator. As part of implementing Arnoldi's method, a solution to the physically impossible, but mathematically real, negative fission sources is developed. The fission source is discretized using a first order accurate spatial approximation to allow for the orthogonalization and normalization of the fission source required for Arnoldi's method. The eigenvalue estimates from Arnoldi's method are compared with published results for homogeneous, one-dimensional geometries, and it is found that the eigenvalue and eigenvector estimates are accurate within statistical uncertainty. The discretization of the fission sources creates an error in the eigenvalue estimates. A second order accurate spatial approximation is created to reduce the error in eigenvalue estimates. An inexact application of the transport-fission operator is also investigated to reduce the computational expense of estimating the eigenvalues and eigenvectors. The convergence of the fission source and eigenvalue in Arnoldi's method is analyzed and compared with the power method. Arnoldi's method is superior to the power method for convergence of the fission source and eigenvalue because both converge nearly instantly for Arnoldi's method, while the power method may require hundreds of iterations to converge. This is shown using both homogeneous and heterogeneous one-dimensional geometries with dominance ratios close to 1.
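The deterministic core of Arnoldi's method (without the Monte Carlo transport operator or the negative-source machinery of the thesis) fits in a short sketch. A hedged toy version on a small symmetric matrix, where one Arnoldi cycle already recovers both eigenvalues:

```python
import math

def arnoldi_ritz(A, v0, m=2):
    """One Arnoldi cycle: build an orthonormal Krylov basis of dimension m and
    return the Ritz values, i.e. the eigenvalues of the small projected matrix H.
    Unlike power iteration, this yields several eigenvalue estimates at once."""
    n = len(v0)
    nrm = math.sqrt(sum(x * x for x in v0))
    Q = [[x / nrm for x in v0]]
    H = [[0.0] * m for _ in range(m)]
    for j in range(m):
        w = [sum(a * x for a, x in zip(row, Q[j])) for row in A]
        for i in range(j + 1):                   # Gram-Schmidt against the basis
            H[i][j] = sum(a * b for a, b in zip(Q[i], w))
            w = [a - H[i][j] * b for a, b in zip(w, Q[i])]
        h = math.sqrt(sum(x * x for x in w))
        if j + 1 < m:
            H[j + 1][j] = h
            Q.append([x / h for x in w])
    # Eigenvalues of the 2x2 projected matrix via the quadratic formula
    tr = H[0][0] + H[1][1]
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 + disc, tr / 2.0 - disc

# Symmetric test matrix with exact eigenvalues 2.0 and 1.0
A = [[1.5, 0.5], [0.5, 1.5]]
k1, k2 = arnoldi_ritz(A, [1.0, 0.3])
```

The orthogonalization step is the part that becomes delicate in the Monte Carlo setting: subtracting projections of fission sources is what produces the negative-weight particles the thesis has to handle.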

  12. Speckle intensity images of target based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wu, Ying-Li; Wu, Zhen-Sen

    2014-03-01

Speckle intensity in the detector plane is deduced for a free-space optical system and an imaging system based on the Van Cittert-Zernike theorem. The speckle intensity images of a plane target and a conical target are obtained using the Monte Carlo method and measured experimentally. The results show that when the range extent of the target is smaller, the speckle size along the same direction becomes longer, and that the speckle size increases with increasing incident wavelength. As the aperture scale is enlarged, the speckle size increases and the speckle intensity image of the target becomes closer to the actual object. These findings are useful for accessing target information from speckle in laser radar systems.

  13. Underwater Optical Wireless Channel Modeling Using Monte-Carlo Method

    NASA Astrophysics Data System (ADS)

    Saini, P. Sri; Prince, Shanthi

    2011-10-01

At present, there is a lot of interest in the functioning of the marine environment. Unmanned or Autonomous Underwater Vehicles (UUVs or AUVs) are used in the exploration of underwater resources, pollution monitoring, disaster prevention, etc. Underwater, where radio waves do not propagate, acoustic communication is used. Underwater communication is, however, moving towards optical communication, which has higher bandwidth than acoustic communication but comparatively shorter range. Underwater Optical Wireless Communication (OWC) is mainly affected by absorption and scattering of the optical signal. In coastal waters, both inherent and apparent optical properties (IOPs and AOPs) are influenced by a wide array of physical, biological and chemical processes leading to optical variability. Scattering has two effects: attenuation of the signal and Inter-Symbol Interference (ISI); the latter is ignored in the present paper. Therefore, in order to have an efficient underwater OWC link, it is necessary to model the channel accurately. In this paper, the underwater optical channel is modeled using the Monte Carlo method, which provides the most general and most flexible technique for numerically solving the equations of radiative transfer. The attenuation coefficient of the light signal is studied as a function of the absorption (a) and scattering (b) coefficients. It is observed that for pure sea water and low-chlorophyll conditions the blue wavelength is absorbed least, whereas in a chlorophyll-rich environment the red wavelength signal is absorbed less than the blue and green wavelengths.
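A Monte Carlo photon-budget model of this kind can be sketched compactly. A hedged toy version (the absorption/scattering coefficients, the depth, and the crude forward-peaked scattering law are illustrative assumptions, not values from the paper):

```python
import random

def transmitted_fraction(a, b, depth, n, seed=0):
    """Monte Carlo photon budget in water with absorption a and scattering b (1/m).
    Free paths are exponential with attenuation coefficient c = a + b; each
    interaction absorbs the photon with probability a/c, otherwise it scatters.
    Returns the fraction of photons that reach the given depth."""
    rng = random.Random(seed)
    c = a + b
    survived = 0
    for _ in range(n):
        z, mu = 0.0, 1.0                     # depth reached, direction cosine
        while True:
            z += mu * rng.expovariate(c)     # free path to the next interaction
            if z >= depth:
                survived += 1
                break
            if rng.random() < a / c:         # absorbed
                break
            mu = rng.uniform(0.3, 1.0)       # crude forward-peaked scattering
    return survived / n

# Illustrative coefficients only: clear water vs chlorophyll-rich water
t_clear = transmitted_fraction(a=0.05, b=0.03, depth=10.0, n=20000)
t_turbid = transmitted_fraction(a=0.40, b=0.20, depth=10.0, n=20000)
```

Splitting each interaction into an absorption or a scattering event with probabilities a/c and b/c is the standard analog Monte Carlo treatment; a realistic channel model would replace the uniform scattering law with a measured phase function.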

  14. Hierarchical Monte Carlo methods for fractal random fields

    SciTech Connect

    Elliott, F.W. Jr.; Majda, A.J.; Horntrop, D.J.

    1995-11-01

Two hierarchical Monte Carlo methods for the generation of self-similar fractal random fields are compared and contrasted. The first technique, successive random addition (SRA), is currently popular in the physics community. Despite the intuitive appeal of SRA, rigorous mathematical reasoning reveals that SRA cannot be consistent with any stationary power-law Gaussian random field for any Hurst exponent; furthermore, there is an inherent ratio of largest to smallest putative scaling constant necessarily exceeding a factor of 2 for a wide range of Hurst exponents H, with 0.30 < H < 0.85. Thus, SRA is inconsistent with a stationary power-law fractal random field and would not be useful for problems that do not utilize additional spatial averaging of the velocity field. The second hierarchical method for fractal random fields has recently been introduced by two of the authors and relies on a suitable explicit multiwavelet expansion (MWE) with high-moment cancellation. This method is described briefly, including a demonstration that, unlike SRA, MWE is consistent with a stationary power-law random field over many decades of scaling and has low variance.

  15. Markov chain Monte Carlo methods: an introductory example

    NASA Astrophysics Data System (ADS)

    Klauenberg, Katy; Elster, Clemens

    2016-02-01

When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method: powerful, flexible and widely applied. Here, a concise introduction is given, illustrated by a simple, typical example from metrology. The Metropolis-Hastings algorithm is the most basic and yet flexible MCMC method. Its underlying concepts are explained and the algorithm is given step by step. The few lines of software code required for its implementation invite interested readers to get started. Diagnostics to evaluate the performance and common algorithmic choices are illustrated to calibrate the Metropolis-Hastings algorithm for efficiency. Routine application of MCMC algorithms may currently be hindered by the difficulty of assessing the convergence of MCMC output and thus assuring the validity of results. An example points to the importance of convergence and initiates discussion about advantages as well as areas of research. Available software tools are mentioned throughout.
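The "few lines of software code" this record refers to really are few. A hedged sketch of random-walk Metropolis-Hastings, with a standard normal standing in for a metrological posterior (the target, step size and chain length are illustrative choices):

```python
import math
import random

def metropolis_hastings(log_target, x0, n, step, seed=0):
    """Random-walk Metropolis-Hastings: propose x' ~ Normal(x, step) and accept
    with probability min(1, target(x')/target(x)); otherwise repeat the current
    state. Only an unnormalised log-density is needed."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n):
        proposal = x + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal                       # accept
        chain.append(x)                        # rejected proposals repeat x
    return chain

# Toy 'posterior': a standard normal, so after burn-in the chain mean should
# approach 0 and the variance should approach 1.
log_post = lambda x: -0.5 * x * x
chain = metropolis_hastings(log_post, x0=3.0, n=50000, step=1.0)
kept = chain[5000:]                            # discard burn-in
mean = sum(kept) / len(kept)
var = sum((x - mean) ** 2 for x in kept) / len(kept)
```

Working with log-densities avoids underflow, and the burn-in discard plus convergence checks are exactly the diagnostics the record says currently hinder routine use.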

  16. Monte Carlo N-particle simulation of neutron-based sterilisation of anthrax contamination

    PubMed Central

    Liu, B; Xu, J; Liu, T; Ouyang, X

    2012-01-01

Objective To simulate the neutron-based sterilisation of anthrax contamination by Monte Carlo N-particle (MCNP) 4C code. Methods Neutrons are elementary particles that have no charge. They are 20 times more effective than electrons or γ-rays in killing anthrax spores on surfaces and inside closed containers. Neutrons emitted from a 252Cf neutron source are in the 100 keV to 2 MeV energy range. A 2.5 MeV D–D neutron generator can create neutrons at up to 10^13 n s^−1 with current technology. All these enable an effective and low-cost method of killing anthrax spores. Results There is no effect on neutron energy deposition on the anthrax sample when using a reflector that is thicker than its saturation thickness. Among all three reflecting materials tested in the MCNP simulation, paraffin is the best because it has the thinnest saturation thickness and is easy to machine. The MCNP radiation dose and fluence simulation calculation also showed that the MCNP-simulated neutron fluence that is needed to kill the anthrax spores agrees very well with previous analytical estimations. Conclusion The MCNP simulation indicates that a 10 min neutron irradiation from a 0.5 g 252Cf neutron source or a 1 min neutron irradiation from a 2.5 MeV D–D neutron generator may kill all anthrax spores in a sample. This is a promising result because a 2.5 MeV D–D neutron generator output of >10^13 n s^−1 should be attainable in the near future. This indicates that we could use a D–D neutron generator to sterilise anthrax contamination within several seconds. PMID:22573293

  17. Seriation in paleontological data using markov chain Monte Carlo methods.

    PubMed

    Puolamäki, Kai; Fortelius, Mikael; Mannila, Heikki

    2006-02-01

    Given a collection of fossil sites with data about the taxa that occur at each site, the task in biochronology is to find good estimates for the ages or ordering of sites. We describe a full probabilistic model for fossil data. The parameters of the model are natural: the ordering of the sites, the origination and extinction times for each taxon, and the probabilities of different types of errors. We show that the posterior distributions of these parameters can be estimated reliably by using Markov chain Monte Carlo techniques. The posterior distributions of the model parameters can be used to answer many different questions about the data, including seriation (finding the best ordering of the sites) and outlier detection. We demonstrate the usefulness of the model and estimation method on synthetic data and on real data on large late Cenozoic mammals. As an example, for sites with a large number of occurrences of common genera, our methods give orderings whose correlation with geochronologic ages is 0.95. PMID:16477311
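
The ordering component of such an analysis can be illustrated with a toy Metropolis sampler over site orderings. The quadratic score below is a hypothetical stand-in for the paper's full probabilistic model of fossil occurrences; only the proposal/accept mechanics are representative:

```python
import math
import random

def sample_orderings(log_score, n_sites, n_steps, seed=2):
    """Metropolis sampling over site orderings via adjacent swaps.

    `log_score` is a hypothetical log-posterior over orderings, standing in
    for the full fossil-occurrence model described in the abstract.
    """
    rng = random.Random(seed)
    order = list(range(n_sites))
    rng.shuffle(order)                       # random initial ordering
    s = log_score(order)
    best, best_s = list(order), s
    for _ in range(n_steps):
        i = rng.randrange(n_sites - 1)
        order[i], order[i + 1] = order[i + 1], order[i]   # propose a swap
        s_new = log_score(order)
        if rng.random() < math.exp(min(0.0, s_new - s)):  # Metropolis accept
            s = s_new
            if s > best_s:
                best, best_s = list(order), s
        else:
            order[i], order[i + 1] = order[i + 1], order[i]  # undo the swap
    return best, best_s

# Toy score: penalize squared deviation from the (known) true age ranks.
log_score = lambda order: -float(sum((order[k] - k) ** 2 for k in range(10)))
best, best_s = sample_orderings(log_score, 10, 20000)
```

In the real model, the score would be the likelihood of the observed taxon occurrences given the ordering, and the full posterior sample (not just the best ordering) supports outlier detection.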

  18. LISA data analysis using Markov chain Monte Carlo methods

    SciTech Connect

    Cornish, Neil J.; Crowder, Jeff

    2005-08-15

    The Laser Interferometer Space Antenna (LISA) is expected to simultaneously detect many thousands of low-frequency gravitational wave signals. This presents a data analysis challenge that is very different to the one encountered in ground based gravitational wave astronomy. LISA data analysis requires the identification of individual signals from a data stream containing an unknown number of overlapping signals. Because of the signal overlaps, a global fit to all the signals has to be performed in order to avoid biasing the solution. However, performing such a global fit requires the exploration of an enormous parameter space with a dimension upwards of 50 000. Markov Chain Monte Carlo (MCMC) methods offer a very promising solution to the LISA data analysis problem. MCMC algorithms are able to efficiently explore large parameter spaces, simultaneously providing parameter estimates, error analysis, and even model selection. Here we present the first application of MCMC methods to simulated LISA data and demonstrate the great potential of the MCMC approach. Our implementation uses a generalized F-statistic to evaluate the likelihoods, and simulated annealing to speed convergence of the Markov chains. As a final step we supercool the chains to extract maximum likelihood estimates, and estimates of the Bayes factors for competing models. We find that the MCMC approach is able to correctly identify the number of signals present, extract the source parameters, and return error estimates consistent with Fisher information matrix predictions.

  19. MARKOV CHAIN MONTE CARLO POSTERIOR SAMPLING WITH THE HAMILTONIAN METHOD

    SciTech Connect

    K. HANSON

    2001-02-01

    The Markov Chain Monte Carlo technique provides a means for drawing random samples from a target probability density function (pdf). MCMC allows one to assess the uncertainties in a Bayesian analysis described by a numerically calculated posterior distribution. This paper describes the Hamiltonian MCMC technique in which a momentum variable is introduced for each parameter of the target pdf. In analogy to a physical system, a Hamiltonian H is defined as a kinetic energy involving the momenta plus a potential energy φ, where φ is minus the logarithm of the target pdf. Hamiltonian dynamics allows one to move along trajectories of constant H, taking large jumps in the parameter space with relatively few evaluations of φ and its gradient. The Hamiltonian algorithm alternates between picking a new momentum vector and following such trajectories. The efficiency of the Hamiltonian method for multidimensional isotropic Gaussian pdfs is shown to remain constant at around 7% for up to several hundred dimensions. The Hamiltonian method handles correlations among the variables much better than the standard Metropolis algorithm. A new test, based on the gradient of φ, is proposed to measure the convergence of the MCMC sequence.
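
The momentum-plus-trajectory idea can be made concrete with a short sketch: a minimal one-dimensional Hamiltonian MCMC step using a leapfrog integrator, with φ(x) = −log π(x). The step size and trajectory length below are illustrative choices, not values from the paper:

```python
import math
import random

def hamiltonian_mcmc(grad_phi, phi, x0, step, n_leapfrog, n_samples, seed=1):
    """Hamiltonian MCMC for a 1-D target; phi(x) = -log pi(x)."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                # fresh momentum each iteration
        h_old = 0.5 * p * p + phi(x)
        x_new, p_new = x, p
        # Leapfrog trajectory: approximately conserves H, so the large jump
        # is accepted with high probability.
        p_new -= 0.5 * step * grad_phi(x_new)
        for i in range(n_leapfrog):
            x_new += step * p_new
            if i < n_leapfrog - 1:
                p_new -= step * grad_phi(x_new)
        p_new -= 0.5 * step * grad_phi(x_new)
        h_new = 0.5 * p_new * p_new + phi(x_new)
        # Metropolis correction for the integration error in H.
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            x = x_new
        samples.append(x)
    return samples

# Standard normal target: phi(x) = x^2/2, grad phi(x) = x.
samples = hamiltonian_mcmc(lambda x: x, lambda x: 0.5 * x * x, 0.0, 0.2, 20, 5000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Each trajectory costs n_leapfrog gradient evaluations but yields a nearly independent sample, which is the source of the dimension-independent efficiency the abstract reports.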

  20. An automated variance reduction method for global Monte Carlo neutral particle transport problems

    NASA Astrophysics Data System (ADS)

    Cooper, Marc Andrew

    A method to automatically reduce the variance in global neutral particle Monte Carlo problems by using a weight window derived from a deterministic forward solution is presented. This method reduces a global measure of the variance of desired tallies and increases its associated figure of merit. Global deep-penetration neutron transport problems present difficulties for analog Monte Carlo. When the scalar flux decreases by many orders of magnitude, so does the number of Monte Carlo particles. This can result in large statistical errors. In conjunction with survival biasing, a weight window is employed which uses splitting and Russian roulette to restrict the statistical weights of Monte Carlo particles. By establishing a connection between the scalar flux and the weight window, two important concepts are demonstrated. First, such a weight window can be constructed from a deterministic solution of a forward transport problem. Second, the weight window will distribute Monte Carlo particles in such a way as to minimize a measure of the global variance. For Implicit Monte Carlo solutions of radiative transfer problems, an inefficient distribution of Monte Carlo particles can result in large statistical errors in front of the Marshak wave and at its leading edge. Again, the global Monte Carlo method is used, which employs a time-dependent weight window derived from a forward deterministic solution. Here, the algorithm is modified to enhance the number of Monte Carlo particles in the wavefront. Simulations show that use of this time-dependent weight window significantly improves the Monte Carlo calculation.
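
The splitting/Russian-roulette mechanics of a weight window can be sketched generically as follows. The window bounds here are made-up constants for illustration; in the method above they would be derived from the deterministic forward solution:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Splitting and Russian roulette against a weight window [w_low, w_high].

    Returns a list of (possibly zero) particle weights replacing the input
    particle; the expected total weight equals the input weight (unbiased).
    """
    if weight > w_high:
        # Split: replace the particle by n copies of equal, in-window weight.
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_survive,
        # carrying weight w_survive, so the expectation is preserved.
        w_survive = 0.5 * (w_low + w_high)
        if rng.random() < weight / w_survive:
            return [w_survive]
        return []                      # particle terminated
    return [weight]                    # inside the window: unchanged

# Unbiasedness check: expected total weight is preserved on average.
rng = random.Random(42)
total_in, total_out = 0.0, 0.0
for _ in range(200000):
    w = rng.uniform(0.001, 5.0)
    total_in += w
    total_out += sum(apply_weight_window(w, 0.25, 1.0, rng))
```

The point of choosing the window from the forward flux is that particles are then concentrated where they contribute most to the global tally, rather than wasted in high-flux regions.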

  1. SCALE Monte Carlo Eigenvalue Methods and New Advancements

    SciTech Connect

    Goluoglu, Sedat; Leppanen, Jaakko; Petrie Jr, Lester M; Dunn, Michael E

    2010-01-01

    SCALE code system is developed and maintained by Oak Ridge National Laboratory to perform criticality safety, reactor analysis, radiation shielding, and spent fuel characterization for nuclear facilities and transportation/storage package designs. SCALE is a modular code system that includes several codes which use either Monte Carlo or discrete ordinates solution methodologies for solving relevant neutral particle transport equations. This paper describes some of the key capabilities of the Monte Carlo criticality safety codes within the SCALE code system.

  2. Simple geometry optimization with Variational Quantum Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Nissenbaum, Dan

    2005-03-01

    Stochastic optimization methods may be combined with Quantum Monte Carlo (QMC) integration to obtain a computational scheme for treating many-body wavefunctions suitable for addressing modern problems in nanoscale physics. In this connection, we are investigating the range of applicability of the Stochastic Gradient Approximation (SGA) technique [1]. The SGA possesses the important advantage that the updating of the electronic variational parameters and the nuclear coordinates can be carried out simultaneously and without an explicit determination of the total energy for each geometry. We present illustrative results using simple variational functions for describing the hydrogen molecule, the lithium dimer, and the neutral and charged Li4 clusters. We computed highly accurate potential energy surfaces on a fine grid in order to test the efficacy of the SGA in locating the energy minima in the parameter space. Work supported in part by the USDOE. [1] A. Harju, B. Barbiellini, S. Siljamäki, R.M. Nieminen, and G. Ortiz, Phys. Rev. Lett. 79, 1173 (1997).

  3. Spike Inference from Calcium Imaging Using Sequential Monte Carlo Methods

    PubMed Central

    Vogelstein, Joshua T.; Watson, Brendon O.; Packer, Adam M.; Yuste, Rafael; Jedynak, Bruno; Paninski, Liam

    2009-01-01

    Abstract As recent advances in calcium sensing technologies facilitate simultaneously imaging action potentials in neuronal populations, complementary analytical tools must also be developed to maximize the utility of this experimental paradigm. Although the observations here are fluorescence movies, the signals of interest—spike trains and/or time varying intracellular calcium concentrations—are hidden. Inferring these hidden signals is often problematic due to noise, nonlinearities, slow imaging rate, and unknown biophysical parameters. We overcome these difficulties by developing sequential Monte Carlo methods (particle filters) based on biophysical models of spiking, calcium dynamics, and fluorescence. We show that even in simple cases, the particle filters outperform the optimal linear (i.e., Wiener) filter, both by obtaining better estimates and by providing error bars. We then relax a number of our model assumptions to incorporate nonlinear saturation of the fluorescence signal, as well as external stimulus and spike history dependence (e.g., refractoriness) of the spike trains. Using both simulations and in vitro fluorescence observations, we demonstrate temporal superresolution by inferring when within a frame each spike occurs. Furthermore, the model parameters may be estimated using expectation maximization with only a very limited amount of data (e.g., ∼5–10 s or 5–40 spikes), without the requirement of any simultaneous electrophysiology or imaging experiments. PMID:19619479
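
As an illustration of the particle-filter machinery (not the authors' biophysical model of spiking and calcium dynamics), here is a bootstrap filter for a toy linear-Gaussian state-space model; all model parameters are invented for the example:

```python
import math
import random

def bootstrap_filter(obs, n_particles, a, q_sd, r_sd, seed=3):
    """Bootstrap particle filter for x_t = a*x_{t-1} + N(0, q_sd^2),
    y_t = x_t + N(0, r_sd^2). Returns the filtered posterior means."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in obs:
        # Propagate each particle through the dynamics model.
        particles = [a * x + rng.gauss(0.0, q_sd) for x in particles]
        # Weight by the Gaussian observation likelihood.
        weights = [math.exp(-0.5 * ((y - x) / r_sd) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        means.append(sum(w * x for w, x in zip(weights, particles)))
        # Multinomial resampling to avoid weight degeneracy.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return means

# Simulate a short trajectory and filter it.
rng = random.Random(7)
x, xs, ys = 0.0, [], []
for _ in range(50):
    x = 0.9 * x + rng.gauss(0.0, 0.5)
    xs.append(x)
    ys.append(x + rng.gauss(0.0, 0.5))
means = bootstrap_filter(ys, 500, 0.9, 0.5, 0.5)
rmse = math.sqrt(sum((m - t) ** 2 for m, t in zip(means, xs)) / len(xs))
```

In the paper's setting the hidden state would be the calcium concentration and spike indicator, and the observation model the nonlinear fluorescence readout; the propagate-weight-resample loop is the same.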

  4. Spike inference from calcium imaging using sequential Monte Carlo methods.

    PubMed

    Vogelstein, Joshua T; Watson, Brendon O; Packer, Adam M; Yuste, Rafael; Jedynak, Bruno; Paninski, Liam

    2009-07-22

    As recent advances in calcium sensing technologies facilitate simultaneously imaging action potentials in neuronal populations, complementary analytical tools must also be developed to maximize the utility of this experimental paradigm. Although the observations here are fluorescence movies, the signals of interest--spike trains and/or time varying intracellular calcium concentrations--are hidden. Inferring these hidden signals is often problematic due to noise, nonlinearities, slow imaging rate, and unknown biophysical parameters. We overcome these difficulties by developing sequential Monte Carlo methods (particle filters) based on biophysical models of spiking, calcium dynamics, and fluorescence. We show that even in simple cases, the particle filters outperform the optimal linear (i.e., Wiener) filter, both by obtaining better estimates and by providing error bars. We then relax a number of our model assumptions to incorporate nonlinear saturation of the fluorescence signal, as well as external stimulus and spike history dependence (e.g., refractoriness) of the spike trains. Using both simulations and in vitro fluorescence observations, we demonstrate temporal superresolution by inferring when within a frame each spike occurs. Furthermore, the model parameters may be estimated using expectation maximization with only a very limited amount of data (e.g., approximately 5-10 s or 5-40 spikes), without the requirement of any simultaneous electrophysiology or imaging experiments. PMID:19619479

  5. Medical Imaging Image Quality Assessment with Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Kalyvas, N. I.; Martini, Niki; Koukou, Vaia; Valais, I. G.; Kandarakis, I. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction, with cluster computing. The PET scanner simulated in this study was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated as a layer of silica gel on aluminum (Al) foil substrates, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the Modulation Transfer Function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE)-OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed using various subsets (3 to 21) and iterations (1 to 20), as well as various beta (hyper)parameter values. MTF values were found to increase up to the 12th iteration and to remain almost constant thereafter. MTF improves when lower beta values are used. The simulated PET evaluation method based on the TLC plane source can also be useful in research for the further development of PET and SPECT scanners through GATE simulations.
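
The MTF-estimation step can be illustrated with a minimal sketch: the MTF computed as the normalized magnitude of the discrete Fourier transform of a line spread function. The Gaussian LSF below is a synthetic stand-in for a measured profile across the reconstructed plane-source image:

```python
import math

def mtf_from_lsf(lsf):
    """Normalized MTF: magnitude of the DFT of a line spread function."""
    n = len(lsf)
    mags = []
    for k in range(n // 2 + 1):                 # frequencies up to Nyquist
        re = sum(lsf[j] * math.cos(-2 * math.pi * k * j / n) for j in range(n))
        im = sum(lsf[j] * math.sin(-2 * math.pi * k * j / n) for j in range(n))
        mags.append(math.hypot(re, im))
    return [m / mags[0] for m in mags]          # normalize so MTF(0) = 1

# Example: synthetic Gaussian LSF, e.g. a profile across the source image.
sigma, n = 2.0, 64
lsf = [math.exp(-0.5 * ((j - n / 2) / sigma) ** 2) for j in range(n)]
mtf = mtf_from_lsf(lsf)
```

A sharper reconstruction gives a narrower LSF and hence a slower-falling MTF, which is how the effect of iterations and beta values in the study is quantified.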

  6. Treatment planning aspects and Monte Carlo methods in proton therapy

    NASA Astrophysics Data System (ADS)

    Fix, Michael K.; Manser, Peter

    2015-05-01

    Over the last years, the interest in proton radiotherapy is rapidly increasing. Protons provide superior physical properties compared with conventional radiotherapy using photons. These properties result in depth dose curves with a large dose peak at the end of the proton track and the finite proton range allows sparing the distally located healthy tissue. These properties offer an increased flexibility in proton radiotherapy, but also increase the demand in accurate dose estimations. To carry out accurate dose calculations, first an accurate and detailed characterization of the physical proton beam exiting the treatment head is necessary for both currently available delivery techniques: scattered and scanned proton beams. Since Monte Carlo (MC) methods follow the particle track simulating the interactions from first principles, this technique is perfectly suited to accurately model the treatment head. Nevertheless, careful validation of these MC models is necessary. While for the dose estimation pencil beam algorithms provide the advantage of fast computations, they are limited in accuracy. In contrast, MC dose calculation algorithms overcome these limitations and due to recent improvements in efficiency, these algorithms are expected to improve the accuracy of the calculated dose distributions and to be introduced in clinical routine in the near future.

  7. Quantum Monte Carlo methods and lithium cluster properties

    SciTech Connect

    Owen, R.K.

    1990-12-01

    Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density-dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively; in good agreement with experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.

  8. Quantum Monte Carlo methods and lithium cluster properties. [Atomic clusters

    SciTech Connect

    Owen, R.K.

    1990-12-01

    Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density-dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) (0.1981), 0.1895(9) (0.1874(4)), 0.1530(34) (0.1599(73)), 0.1664(37) (0.1724(110)), 0.1613(43) (0.1675(110)) Hartrees for lithium clusters n = 1 through 5, respectively; in good agreement with experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) (0.0203(12)), 0.0188(10) (0.0220(21)), 0.0247(8) (0.0310(12)), 0.0253(8) (0.0351(8)) Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.

  9. Latent uncertainties of the precalculated track Monte Carlo method

    SciTech Connect

    Renaud, Marc-André; Seuntjens, Jan; Roberge, David

    2015-01-15

    Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated with the corresponding general-purpose MC codes under the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of D{sub max}. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. 
Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed a 807 × efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508 × for 16 MeV electrons in bone. Conclusions: The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.

  10. Approximation of probability density functions by the Multilevel Monte Carlo Maximum Entropy method

    NASA Astrophysics Data System (ADS)

    Bierig, Claudio; Chernov, Alexey

    2016-06-01

    We develop a complete convergence theory for the Maximum Entropy method based on moment matching for a sequence of approximate statistical moments estimated by the Multilevel Monte Carlo method. Under appropriate regularity assumptions on the target probability density function, the proposed method is superior to the Maximum Entropy method with moments estimated by the Monte Carlo method. New theoretical results are illustrated in numerical examples.

  11. Backward and Forward Monte Carlo Method in Polarized Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Yong, Huang; Guo-Dong, Shi; Ke-Yong, Zhu

    2016-03-01

    In general, the Stokes vector cannot be calculated in reverse in vector radiative transfer. This paper presents a novel backward and forward Monte Carlo simulation strategy to study vector radiative transfer in a participating medium. A backward Monte Carlo process is used to calculate the ray trajectory and the endpoint of the ray. The Stokes vector is then carried out by a forward Monte Carlo process. A one-dimensional graded-index semitransparent medium was taken as the physical model, and thermal emission with consideration of polarization was studied in the medium. The solution processes for non-scattering, isotropic-scattering, and anisotropic-scattering media, respectively, are discussed. The influence of the optical thickness and albedo on the Stokes vector is studied. The results show that the U and V components of the apparent Stokes vector are very small, but the Q component of the apparent Stokes vector is relatively large and cannot be ignored.

  12. Uncertainty analysis for fluorescence tomography with Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Reinbacher-Köstinger, Alice; Freiberger, Manuel; Scharfetter, Hermann

    2011-07-01

    Fluorescence tomography seeks to image an inaccessible fluorophore distribution inside an object like a small animal by injecting light at the boundary and measuring the light emitted by the fluorophore. Optical parameters (e.g. the conversion efficiency or the fluorescence life-time) of certain fluorophores depend on physiologically interesting quantities like the pH value or the oxygen concentration in the tissue, which allows functional rather than just anatomical imaging. To reconstruct the concentration and the life-time from the boundary measurements, a nonlinear inverse problem has to be solved. It is, however, difficult to estimate the uncertainty of the reconstructed parameters in case of iterative algorithms and a large number of degrees of freedom. Uncertainties in fluorescence tomography applications arise from model inaccuracies, discretization errors, data noise and a priori errors. Thus, a Markov chain Monte Carlo method (MCMC) was used to consider all these uncertainty factors exploiting Bayesian formulation of conditional probabilities. A 2-D simulation experiment was carried out for a circular object with two inclusions. Both inclusions had a 2-D Gaussian distribution of the concentration and constant life-time inside of a representative area of the inclusion. Forward calculations were done with the diffusion approximation of Boltzmann's transport equation. The reconstruction results show that the percent estimation error of the lifetime parameter is by a factor of approximately 10 lower than that of the concentration. This finding suggests that lifetime imaging may provide more accurate information than concentration imaging only. The results must be interpreted with caution, however, because the chosen simulation setup represents a special case and a more detailed analysis remains to be done in future to clarify if the findings can be generalized.

  13. Markov chain Monte Carlo posterior sampling with the Hamiltonian method.

    SciTech Connect

    Hanson, Kenneth M.

    2001-01-01

    A major advantage of Bayesian data analysis is that it provides a characterization of the uncertainty in the model parameters estimated from a given set of measurements, in the form of a posterior probability distribution. When the analysis involves a complicated physical phenomenon, the posterior may not be available in analytic form, but only calculable by means of a simulation code. In such cases, the uncertainty in inferred model parameters requires characterization of a calculated functional. An appealing way to explore the posterior, and hence characterize the uncertainty, is to employ the Markov Chain Monte Carlo technique. The goal of MCMC is to generate a random sequence of parameter samples x from a target pdf (probability density function), π(x). In Bayesian analysis, this sequence corresponds to a set of model realizations that follow the posterior distribution. There are two basic MCMC techniques. In Gibbs sampling, typically one parameter is drawn from the conditional pdf at a time, holding all others fixed. In the Metropolis algorithm, all the parameters can be varied at once. The parameter vector is perturbed from the current sequence point by adding a trial step drawn randomly from a symmetric pdf. The trial position is either accepted or rejected on the basis of the probability at the trial position relative to the current one. The Metropolis algorithm is often employed because of its simplicity. The aim of this work is to develop MCMC methods that are useful for large numbers of parameters, n, say hundreds or more. In this regime the Metropolis algorithm can be unsuitable, because its efficiency drops as 0.3/n. The efficiency is defined as the reciprocal of the number of steps in the sequence needed to effectively provide a statistically independent sample from π.

  14. Evaluation of path-history-based fluorescence Monte Carlo method for photon migration in heterogeneous media.

    PubMed

    Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Wang, Kan; Lian, Lichao; Yang, Xiaoquan; Meglinski, Igor; Luo, Qingming

    2014-12-29

    The path-history-based fluorescence Monte Carlo method used for fluorescence tomography imaging reconstruction has attracted increasing attention. In this paper, we first validate the standard fluorescence Monte Carlo (sfMC) method by experimenting with a cylindrical phantom. Then, we describe a path-history-based decoupled fluorescence Monte Carlo (dfMC) method, analyze different perturbation fluorescence Monte Carlo (pfMC) methods, and compare the calculation accuracy and computational efficiency of the dfMC and pfMC methods using the sfMC method as a reference. The results show that the dfMC method is more accurate and efficient than the pfMC method in a heterogeneous medium. PMID:25607163

  15. CONTINUOUS-ENERGY MONTE CARLO METHODS FOR CALCULATING GENERALIZED RESPONSE SENSITIVITIES USING TSUNAMI-3D

    SciTech Connect

    Perfetti, Christopher M; Rearden, Bradley T

    2014-01-01

    This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.

  16. Monte Carlo solution for uncertainty propagation in particle transport with a stochastic Galerkin method

    SciTech Connect

    Franke, B. C.; Prinja, A. K.

    2013-07-01

    The stochastic Galerkin method (SGM) is an intrusive technique for propagating data uncertainty in physical models. The method reduces the random model to a system of coupled deterministic equations for the moments of stochastic spectral expansions of result quantities. We investigate solving these equations using the Monte Carlo technique. We compare the efficiency with brute-force Monte Carlo evaluation of uncertainty, the non-intrusive stochastic collocation method (SCM), and an intrusive Monte Carlo implementation of the stochastic collocation method. We also describe the stability limitations of our SGM implementation. (authors)
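
The contrast between brute-force Monte Carlo evaluation of uncertainty and a non-intrusive collocation method can be sketched for a scalar toy problem. The quadratic model and Gaussian input below are illustrative choices, not from the paper; the collocation rule is a standard 3-point Gauss-Hermite quadrature:

```python
import math
import random

def brute_force_mc(model, mu, sd, n, seed=5):
    """Brute-force Monte Carlo propagation of a Gaussian input uncertainty."""
    rng = random.Random(seed)
    return sum(model(rng.gauss(mu, sd)) for _ in range(n)) / n

def collocation(model, mu, sd):
    """Non-intrusive stochastic collocation: 3-point Gauss-Hermite quadrature
    (probabilists' convention), exact for polynomial models up to degree 5."""
    nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
    weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]
    return sum(w * model(mu + sd * z) for w, z in zip(weights, nodes))

# Toy model with exact mean E[x^2 + 2x] = mu^2 + sd^2 + 2*mu = 3.25.
model = lambda x: x ** 2 + 2.0 * x
mc_mean = brute_force_mc(model, 1.0, 0.5, 100000)
sc_mean = collocation(model, 1.0, 0.5)
```

The collocation estimate needs only three model evaluations here versus 100 000 for brute-force sampling, which is the efficiency trade-off the paper examines for its intrusive Galerkin counterpart.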

  17. Analytic and Monte Carlo Study of the Perturbation Factor kp for a Standard of Dw Through a Ka Standard Ionization Chamber BEV-CC01

    NASA Astrophysics Data System (ADS)

    Vargas Verdesoto, M. X.; Álvarez Romero, J. T.

    2003-09-01

    To characterize an ionization chamber BEV-CC01 as a standard of absorbed dose to water Dw at SSDL-Mexico, the approach developed by the BIPM for 60Co gamma radiation [1] has been chosen. This requires the estimation of a factor kp, which stems from the perturbation introduced by the presence of the ionization chamber in the water phantom and from the finite size of the cavity. This factor is the product of four terms: ψw,c, (μen/ρ)w,c, (1 + μ'.ȳ)w,c and kcav. Two independent determinations are accomplished using a combination of the Monte Carlo code MCNP4C in ITS mode [2,3] and analytic methods: kp∥ = 1.1626 (uc = 0.90%) for the chamber axis parallel to the beam axis, and kp⊥ = 1.1079 (uc = 0.89%) for the chamber axis perpendicular to the beam axis. The variance reduction techniques splitting-Russian roulette, source biasing and forced photon collisions are employed in the simulations to improve the calculation efficiency. The energy fluence for the 60Co housing-source Picker C/9 is obtained by realistic Monte Carlo (MC) simulation and is verified by comparison of MC-calculated and measured beam output air kerma factors and percent depth dose (PDD) curves in water. This spectrum is considered as the input energy for a point source (74% is from primary photons and the remaining 26% from scattered radiation) in the determination of the kp factors. Details of the calculations are given together with the theoretical basis of the ionometric standard employed.

  18. A Residual Monte Carlo Method for Spatially Discrete, Angularly Continuous Radiation Transport

    SciTech Connect

    Wollaeger, Ryan T.; Densmore, Jeffery D.

    2012-06-19

    Residual Monte Carlo provides exponential convergence of statistical error with respect to the number of particle histories. In the past, residual Monte Carlo has been applied to a variety of angularly discrete radiation-transport problems. Here, we apply residual Monte Carlo to spatially discrete, angularly continuous transport. By maintaining angular continuity, our method avoids the deficiencies of angular discretizations, such as ray effects. For planar geometry and step differencing, we use the corresponding integral transport equation to calculate an angularly independent residual from the scalar flux in each stage of residual Monte Carlo. We then demonstrate that the resulting residual Monte Carlo method does indeed converge exponentially to within machine precision of the exact step-differenced solution.

  19. Efficient, Automated Monte Carlo Methods for Radiation Transport

    PubMed Central

    Kong, Rong; Ambrose, Martin; Spanier, Jerome

    2012-01-01

    Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k + 1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed. PMID:23226872
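    The staged forward-learning idea above can be illustrated with a toy two-stage scheme, a hedged sketch rather than the authors' algorithm: stage 1 samples uniformly and builds a piecewise-constant importance density from its own scores, which stage 2 then samples with compensating weights. All names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return x * x  # integrand on [0, 1]; exact integral is 1/3

# Stage 1: crude uniform sampling, also used to "learn" an importance histogram.
n1 = 10_000
x1 = rng.random(n1)
stage1_est = f(x1).mean()

# Build a piecewise-constant importance density proportional to the
# per-bin mean of f observed in stage 1.
nbins = 20
idx = np.minimum((x1 * nbins).astype(int), nbins - 1)
bin_mean = np.array([f(x1[idx == b]).mean() for b in range(nbins)])
p = bin_mean / bin_mean.sum()      # probability of sampling each bin
q = p * nbins                      # density value inside each bin

# Stage 2: sample a bin with probability p, a point uniformly within it,
# and weight each sample by f(x)/q(x) to keep the estimator unbiased.
n2 = 10_000
b2 = rng.choice(nbins, size=n2, p=p)
x2 = (b2 + rng.random(n2)) / nbins
w = f(x2) / q[b2]
stage2_est = w.mean()

print(stage1_est, stage2_est)      # both near 1/3; stage 2 has far lower variance
```

    The weighting by f(x)/q(x) is what preserves the expectation while the learned density concentrates samples where the integrand is large.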

  20. Efficient, automated Monte Carlo methods for radiation transport

    SciTech Connect

    Kong Rong; Ambrose, Martin; Spanier, Jerome

    2008-11-20

    Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k+1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed.

  1. Suggesting a new design for multileaf collimator leaves based on Monte Carlo simulation of two commercial systems.

    PubMed

    Hariri, Sanaz; Shahriari, Majid

    2010-01-01

    Due to intensive use of multileaf collimators (MLCs) in clinics, finding an optimum design for the leaves becomes essential. There are several studies which deal with comparison of MLC systems, but there is no article with a focus on offering an optimum design using accurate methods like Monte Carlo. In this study, we describe some characteristics of MLC systems including the leaf tip transmission, beam hardening, leakage radiation and penumbra width for Varian and Elekta 80-leaf MLCs using the MCNP4C code. The complex geometry of the leaves in these two common MLC systems was simulated. It was assumed that all MLC systems were mounted on a Varian accelerator, with the same thickness as Varian's and the same distance from the source. Considering the obtained results from the Varian and Elekta leaf designs, an optimum design was suggested combining the advantages of three common MLC systems, and the simulation results of this proposed design were compared with the Varian and the Elekta. The leakage from the suggested design is 29.7% and 31.5% of that from the Varian and Elekta MLCs, respectively. In addition, the other calculated parameters of the proposed leaf design were better than those of the two commercial ones. Although it shows a wider penumbra than the Varian and Elekta MLCs, taking into account the curved motion path of the leaves, a double-focusing design will solve this problem. The suggested leaf design combines advantages from three common vendors (Varian, Elekta and Siemens) and shows better results than each of them. Using the results of this theoretical study may bring about superior practical outcomes. PMID:20717079

  2. Modification of codes NUALGAM and BREMRAD. Volume 3: Statistical considerations of the Monte Carlo method

    NASA Technical Reports Server (NTRS)

    Firstenberg, H.

    1971-01-01

    The statistics of the Monte Carlo method are considered relative to the interpretation of the NUGAM2 and NUGAM3 computer code results. A numerical experiment using the NUGAM2 code is presented and the results are statistically interpreted.
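    The kind of statistical interpretation discussed here, attaching a standard error to a tallied sample mean, can be sketched in a few lines; this toy pi estimate is purely illustrative and unrelated to the NUGAM codes.

```python
import random
from math import sqrt, pi

random.seed(123)

# Each history scores 4 if a random point in the unit square lands
# inside the quarter circle, so the sample mean estimates pi.
n = 100_000
scores = [4.0 if random.random()**2 + random.random()**2 < 1.0 else 0.0
          for _ in range(n)]

mean = sum(scores) / n
var = sum((s - mean) ** 2 for s in scores) / (n - 1)   # sample variance
stderr = sqrt(var / n)        # statistical uncertainty of the mean

print(mean, stderr)           # pi estimate with a ~0.005 standard error
```

    The standard error shrinks as 1/sqrt(n), which is the slow convergence that variance-reduction techniques elsewhere in these records try to overcome.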

  3. Frequency-domain deviational Monte Carlo method for linear oscillatory gas flows

    NASA Astrophysics Data System (ADS)

    Ladiges, Daniel R.; Sader, John E.

    2015-10-01

    Oscillatory non-continuum low Mach number gas flows are often generated by nanomechanical devices in ambient conditions. These flows can be simulated using a range of particle-based Monte Carlo techniques, which in their original form operate exclusively in the time domain. Recently, a frequency-domain weight-based Monte Carlo method was proposed [D. R. Ladiges and J. E. Sader, "Frequency-domain Monte Carlo method for linear oscillatory gas flows," J. Comput. Phys. 284, 351-366 (2015)] that exhibits superior statistical convergence when simulating oscillatory flows. This previous method used the Bhatnagar-Gross-Krook (BGK) kinetic model and contained a "virtual-time" variable to maintain the inherent time-marching nature of existing Monte Carlo algorithms. Here, we propose an alternative frequency-domain deviational Monte Carlo method that facilitates the use of a wider range of molecular models and more efficient collision/relaxation operators. We demonstrate this method with oscillatory Couette flow and the flow generated by an oscillating sphere, utilizing both the BGK kinetic model and hard sphere particles. We also discuss how oscillatory motion of arbitrary time-dependence can be simulated using computationally efficient parallelization. As in the weight-based method, this deviational frequency-domain Monte Carlo method is shown to offer improved computational speed compared to the equivalent time-domain technique.

  4. Krylov-Projected Quantum Monte Carlo Method.

    PubMed

    Blunt, N S; Alavi, Ali; Booth, George H

    2015-07-31

    We present an approach to the calculation of arbitrary spectral, thermal, and excited state properties within the full configuration interaction quantum Monte Carlo framework. This is achieved via an unbiased projection of the Hamiltonian eigenvalue problem into a space of stochastically sampled Krylov vectors, thus enabling the calculation of real-frequency spectral and thermal properties and avoiding explicit analytic continuation. We use this approach to calculate temperature-dependent properties and one- and two-body spectral functions for various Hubbard models, as well as isolated excited states in ab initio systems. PMID:26274406

  5. The MCNPX Monte Carlo Radiation Transport Code

    SciTech Connect

    Waters, Laurie S.; McKinney, Gregg W.; Durkee, Joe W.; Fensin, Michael L.; Hendricks, John S.; James, Michael R.; Johns, Russell C.; Pelowitz, Denise B.

    2007-03-19

    MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4C and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

  6. APR1400 LBLOCA uncertainty quantification by Monte Carlo method and comparison with Wilks' formula

    SciTech Connect

    Hwang, M.; Bae, S.; Chung, B. D.

    2012-07-01

    An analysis of the uncertainty quantification for the PWR LBLOCA by the Monte Carlo calculation has been performed and compared with the tolerance level determined by Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LBLOCA accident were determined by the PIRT results from the BEMUSE project. The Monte Carlo method shows that the 95th percentile PCT value can be obtained reliably with a 95% confidence level using Wilks' formula. The extra margin by Wilks' formula over the true 95th percentile PCT by the Monte Carlo method was rather large. Even using the 3rd-order formula, the calculated value using Wilks' formula is nearly 100 K over the true value. It is shown that, with the ever increasing computational capability, the Monte Carlo method is accessible for nuclear power plant safety analysis within a realistic time frame. (authors)
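    The run counts behind Wilks' formula follow from the standard one-sided tolerance-limit relation; the sketch below (not the paper's code) reproduces the familiar 95%/95% values of 59, 93 and 124 runs for the 1st-, 2nd- and 3rd-order formulas.

```python
from math import comb

def wilks_confidence(n, m, gamma=0.95):
    """Confidence that the m-th largest of n independent runs bounds
    the gamma-quantile of the output distribution (one-sided)."""
    return sum(comb(n, k) * gamma**k * (1 - gamma)**(n - k)
               for k in range(n - m + 1))

def min_runs(m, gamma=0.95, beta=0.95):
    """Smallest n giving at least beta confidence at order m."""
    n = m
    while wilks_confidence(n, m, gamma) < beta:
        n += 1
    return n

print([min_runs(m) for m in (1, 2, 3)])  # -> [59, 93, 124]
```

    Raising the order m buys a tighter (less conservative) bound at the cost of more code runs, which is the trade-off the abstract quantifies against the brute-force Monte Carlo percentile.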

  7. Ultracold atoms at unitarity within quantum Monte Carlo methods

    SciTech Connect

    Morris, Andrew J.; Lopez Rios, P.; Needs, R. J.

    2010-03-15

    Variational and diffusion quantum Monte Carlo (VMC and DMC) calculations of the properties of the zero-temperature fermionic gas at unitarity are reported. Our study differs from earlier ones mainly in that we have constructed more accurate trial wave functions and used a larger system size, we have studied the dependence of the energy on the particle density and well width, and we have achieved much smaller statistical error bars. The correct value of the universal ratio of the energy of the interacting to that of the noninteracting gas, {xi}, is still a matter of debate. We find DMC values of {xi} of 0.4244(1) with 66 particles and 0.4339(1) with 128 particles. The spherically averaged pair-correlation functions, momentum densities, and one-body density matrices are very similar in VMC and DMC, which suggests that our results for these quantities are very accurate. We find, however, some differences between the VMC and DMC results for the two-body density matrices and condensate fractions, which indicates that these quantities are more sensitive to the quality of the trial wave function. Our best estimate of the condensate fraction of 0.51 is smaller than the values from earlier quantum Monte Carlo calculations.

  8. Monte Carlo Criticality Methods and Analysis Capabilities in SCALE

    SciTech Connect

    Goluoglu, Sedat; Petrie Jr, Lester M; Dunn, Michael E; Hollenbach, Daniel F; Rearden, Bradley T

    2011-01-01

    This paper describes the Monte Carlo codes KENO V.a and KENO-VI in SCALE that are primarily used to calculate multiplication factors and flux distributions of fissile systems. Both codes allow explicit geometric representation of the target systems and are used internationally for safety analyses involving fissile materials. KENO V.a has limiting geometric rules such as no intersections and no rotations. These limitations make KENO V.a execute very efficiently and run very fast. On the other hand, KENO-VI allows very complex geometric modeling. Both KENO codes can utilize either continuous-energy or multigroup cross-section data and have been thoroughly verified and validated with ENDF libraries through ENDF/B-VII.0, which has been first distributed with SCALE 6. Development of the Monte Carlo solution technique and solution methodology as applied in both KENO codes is explained in this paper. Available options and proper application of the options and techniques are also discussed. Finally, performance of the codes is demonstrated using published benchmark problems.
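    The generation-based multiplication-factor estimate that such criticality codes compute can be caricatured with a one-group, infinite-medium analog Monte Carlo, for which k_inf = nu*sigma_f/sigma_a exactly; the cross sections below are invented illustrative numbers, not library data, and the sketch bears no relation to the actual KENO solution method beyond the generation structure.

```python
import random

random.seed(1)

# Hypothetical one-group data: macroscopic fission/capture cross
# sections (cm^-1) and neutrons per fission.
sigma_f, sigma_c, nu = 0.08, 0.02, 2.4
sigma_a = sigma_f + sigma_c
k_exact = nu * sigma_f / sigma_a      # infinite-medium multiplication factor

def next_generation(n_histories):
    """Analog tracking in an infinite medium: every history is absorbed,
    and a fission (probability sigma_f/sigma_a) yields nu neutrons on
    average, sampled as an integer."""
    births = 0
    for _ in range(n_histories):
        if random.random() < sigma_f / sigma_a:
            births += int(nu) + (random.random() < nu % 1.0)
    return births

# k is estimated as the generation-to-generation population ratio,
# renormalizing each generation back to a fixed batch size.
batch, n_gen = 20_000, 20
k_est = sum(next_generation(batch) / batch for _ in range(n_gen)) / n_gen
print(k_est, k_exact)
```

    Real codes add geometry, continuous energy, and source-convergence diagnostics on top of this generation loop.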

  9. Comparison of Monte Carlo methods for fluorescence molecular tomography—computational efficiency

    PubMed Central

    Chen, Jin; Intes, Xavier

    2011-01-01

    Purpose: The Monte Carlo method is an accurate model for time-resolved quantitative fluorescence tomography. However, this method suffers from low computational efficiency due to the large number of photons required for reliable statistics. This paper presents a comparison study on the computational efficiency of three Monte Carlo-based methods for time-domain fluorescence molecular tomography. Methods: The methods investigated to generate time-gated Jacobians were the perturbation Monte Carlo (pMC) method, the adjoint Monte Carlo (aMC) method and the mid-way Monte Carlo (mMC) method. The effects of the different parameters that affect the computation time and statistics reliability were evaluated. Also, the methods were applied to a set of experimental data for tomographic application. Results: In silico results establish that the investigated parameters affect the computational time for the three methods differently (linearly, quadratically, or not significantly). Moreover, the noise level of the Jacobian varies as these parameters change. The experimental results in preclinical settings demonstrate the feasibility of using both the aMC and pMC methods for time-resolved whole-body studies in small animals within a few hours. Conclusions: Among the three Monte Carlo methods, the mMC method is a computationally prohibitive technique that is not well suited for time-domain fluorescence tomography applications. The pMC method is advantageous over the aMC method when early gates are employed and a large number of detectors is present. Alternatively, the aMC method is the method of choice when a small number of source-detector pairs is used. PMID:21992393

  10. Enhanced physics design with hexagonal repeated structure tools using Monte Carlo methods

    SciTech Connect

    Carter, L L; Lan, J S; Schwarz, R A

    1991-01-01

    This report discusses proposed new missions for the Fast Flux Test Facility (FFTF) reactor which involve the use of target assemblies containing local hydrogenous moderation within this otherwise fast reactor. Parametric physics design studies with Monte Carlo methods are routinely utilized to analyze the rapidly changing neutron spectrum. An extensive utilization of the hexagonal lattice within lattice capabilities of the Monte Carlo Neutron Photon (MCNP) continuous energy Monte Carlo computer code is applied here to solving such problems. Simpler examples that use the lattice capability to describe fuel pins within a "brute force" description of the hexagonal assemblies are also given.

  11. Mathematical simulations of photon interactions using Monte Carlo analysis to evaluate the uncertainty associated with in vivo K X-ray fluorescence measurements of stable lead in bone

    NASA Astrophysics Data System (ADS)

    Lodwick, Camille J.

    This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescence (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV 109Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, and a Modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that Modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing the overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in calcium content (by mass) of the anthropometric leg phantom's cortical bone was shown to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001).
Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate the lead content of a human leg by up to 4%. The patellar bone structure in which the fluorescent photons originate was found to vary dramatically with measurement angle. The relative contribution of the lead signal from the patella declined from 65% to 27% when rotated 30°. However, rotation of the source-detector about the patella from 0 to 45° demonstrated no significant effect on the net K XRF response at the knee.

  12. Advanced computational methods for nodal diffusion, Monte Carlo, and S(sub N) problems

    NASA Astrophysics Data System (ADS)

    Martin, W. R.

    1993-01-01

    This document describes progress on five efforts for improving the effectiveness of computational methods for particle diffusion and transport problems in nuclear engineering: (1) Multigrid methods for obtaining rapidly converging solutions of nodal diffusion problems. An alternative line relaxation scheme is being implemented into a nodal diffusion code. Simplified P2 has been implemented into this code. (2) Local Exponential Transform method for variance reduction in Monte Carlo neutron transport calculations. This work yielded predictions for both 1-D and 2-D x-y geometries better than conventional Monte Carlo with splitting and Russian roulette. (3) Asymptotic Diffusion Synthetic Acceleration methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems. New transport differencing schemes have been obtained that allow solution by the conjugate gradient method, and the convergence of this approach is rapid. (4) Quasidiffusion (QD) methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems on irregular spatial grids. A symmetrized QD method has been developed in a form that results in a system of two self-adjoint equations that are readily discretized and efficiently solved. (5) Response history method for speeding up the Monte Carlo calculation of electron transport problems. This method was implemented into the MCNP Monte Carlo code. In addition, we have developed and implemented a parallel time-dependent Monte Carlo code on two massively parallel processors.

  13. A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT

    SciTech Connect

    Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan; Burrows, Adam; Dolence, Joshua C.; Loeffler, Frank; Schnetter, Erik

    2012-08-20

    Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.

  14. Monte Carlo Methods to Model Radiation Interactions and Induced Damage

    NASA Astrophysics Data System (ADS)

    Muñoz, Antonio; Fuss, Martina C.; Cortés-Giraldo, M. A.; Incerti, Sébastien; Ivanchenko, Vladimir; Ivanchenko, Anton; Quesada, J. M.; Salvat, Francesc; Champion, Christophe; Gómez-Tejedor, Gustavo García

    This review is devoted to the analysis of some Monte Carlo (MC) simulation programmes which have been developed to describe radiation interaction with biologically relevant materials. Current versions of the MC codes Geant4 (GEometry ANd Tracking 4), PENELOPE (PENetration and Energy Loss of Positrons and Electrons), EPOTRAN (Electron and POsitron TRANsport), and LEPTS (Low-Energy Particle Track Simulation) are described. The main features of each model, such as the type of radiation considered, the energy range covered by primary and secondary particles, the types of interactions included in the simulation, and the target geometries considered, are discussed. Special emphasis lies on recent developments that, together with (still emerging) new databases that include adequate data for biologically relevant materials, bring us continuously closer to a realistic, physically meaningful description of radiation damage in biological tissues.

  15. Bold Diagrammatic Monte Carlo Method Applied to Fermionized Frustrated Spins

    NASA Astrophysics Data System (ADS)

    Kulagin, S. A.; Prokof'ev, N.; Starykh, O. A.; Svistunov, B.; Varney, C. N.

    2013-02-01

    We demonstrate, by considering the triangular lattice spin-1/2 Heisenberg model, that Monte Carlo sampling of skeleton Feynman diagrams within the fermionization framework offers a universal first-principles tool for strongly correlated lattice quantum systems. We observe the fermionic sign blessing—cancellation of higher order diagrams leading to a finite convergence radius of the series. We calculate the magnetic susceptibility of the triangular-lattice quantum antiferromagnet in the correlated paramagnet regime and reveal a surprisingly accurate microscopic correspondence with its classical counterpart at all accessible temperatures. The extrapolation of the observed relation to zero temperature suggests the absence of the magnetic order in the ground state. We critically examine the implications of this unusual scenario.

  16. Equivalence of four Monte Carlo methods for photon migration in turbid media.

    PubMed

    Sassaroli, Angelo; Martelli, Fabrizio

    2012-10-01

    In the field of photon migration in turbid media, different Monte Carlo methods are usually employed to solve the radiative transfer equation. We consider four different Monte Carlo methods, widely used in the field of tissue optics, that are based on four different ways to build photons' trajectories. We provide both theoretical arguments and numerical results showing the statistical equivalence of the four methods. In the numerical results we compare the temporal point spread functions calculated by the four methods for a wide range of the optical properties in the slab and semi-infinite medium geometry. The convergence of the methods is also briefly discussed. PMID:23201658
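    One common way to build photon trajectories in this field, absorption weighting with free paths sampled as s = -ln(xi)/mu_t, can be sketched for a 1-D slab with isotropic scattering; the optical coefficients and slab thickness below are arbitrary illustrative values, not taken from the paper, and real tissue-optics codes track full 3-D directions and anisotropic phase functions.

```python
import numpy as np

rng = np.random.default_rng(42)

mu_a, mu_s = 0.02, 2.0            # absorption/scattering coefficients (mm^-1)
mu_t = mu_a + mu_s
slab, n_photons = 2.0, 20_000     # slab thickness (mm), photons launched

transmitted = 0.0
for _ in range(n_photons):
    z, uz, w = 0.0, 1.0, 1.0      # depth, direction cosine, photon weight
    while True:
        z += uz * (-np.log(rng.random()) / mu_t)  # sample the free path
        if z < 0.0:
            break                  # escaped back through the entry face
        if z > slab:
            transmitted += w       # escaped through the far face
            break
        w *= mu_s / mu_t           # absorption weighting: deposit (1 - albedo)
        uz = 2.0 * rng.random() - 1.0  # isotropic new direction cosine
        if w < 1e-4:               # Russian roulette on low-weight photons
            if rng.random() < 0.1:
                w /= 0.1
            else:
                break

T = transmitted / n_photons        # diffuse transmittance of the slab
print(T)
```

    The analog variant would instead kill the photon outright with probability mu_a/mu_t at each collision; the statistical equivalence of such variants is exactly what the paper establishes.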

  17. Time-step limits for a Monte Carlo Compton-scattering method

    SciTech Connect

    Densmore, Jeffery D; Warsa, James S; Lowrie, Robert B

    2009-01-01

    We perform a stability analysis of a Monte Carlo method for simulating the Compton scattering of photons by free electrons in high-energy-density applications and develop time-step limits that avoid unstable and oscillatory solutions. Implementing this Monte Carlo technique in multiphysics problems typically requires evaluating the material temperature at its beginning-of-time-step value, which can lead to this undesirable behavior. With a set of numerical examples, we demonstrate the efficacy of our time-step limits.

  18. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    SciTech Connect

    Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I

    2014-06-15

    Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation speeds on the order of 10{sup 7} x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation.
Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. 
To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and “sparse sampling” will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments.
Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
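    The central variance-reduction idea above, altering the sampling model without biasing the estimate, can be demonstrated with implicit capture plus Russian roulette in a toy layered-attenuation problem (illustrative numbers, not an imaging simulation): the weighted game has the same expected transmission as the analog one.

```python
import random

random.seed(7)

p_survive, n_layers = 0.7, 10          # per-layer survival probability
exact = p_survive ** n_layers          # analytic transmission, about 0.0282

def transmit(w_low=0.05, p_rr=0.2):
    """Implicit-capture transport through n_layers slabs: the weight is
    multiplied by the survival probability in each layer, and Russian
    roulette is played when the weight drops below w_low. Killing with
    probability 1 - p_rr while boosting survivors by 1/p_rr preserves
    the expected value."""
    w = 1.0
    for _ in range(n_layers):
        w *= p_survive                 # implicit capture (no analog kill)
        if w < w_low:
            if random.random() < p_rr:
                w /= p_rr              # survivor carries the killed weight
            else:
                return 0.0
    return w

n = 200_000
est = sum(transmit() for _ in range(n)) / n
print(est, exact)                      # unbiased estimate of the same mean
```

    Splitting is the mirror-image move, dividing a high-weight particle into several lower-weight copies; together the two bound particle weights within a window while leaving every tally mean unchanged.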

  19. Spin-orbit interactions in electronic structure quantum Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Melton, Cody A.; Zhu, Minyi; Guo, Shi; Ambrosetti, Alberto; Pederiva, Francesco; Mitas, Lubos

    2016-04-01

We develop a generalization of the fixed-phase diffusion Monte Carlo method for Hamiltonians that depend explicitly on particle spins, such as those with spin-orbit interactions. The method is formulated in a zero-variance manner and is similar to the treatment of nonlocal operators in commonly used static-spin calculations. Tests on atomic and molecular systems show that it is very accurate, on par with the fixed-node method. This opens electronic structure quantum Monte Carlo methods to a vast research area of quantum phenomena in which spin-related interactions play an important role.

  20. Macro-step Monte Carlo Methods and their Applications in Proton Radiotherapy and Optical Photon Transport

    NASA Astrophysics Data System (ADS)

    Jacqmin, Dustin J.

Monte Carlo modeling of radiation transport is considered the gold standard for radiotherapy dose calculations. However, highly accurate Monte Carlo calculations are very time-consuming, and the use of Monte Carlo dose calculation methods is often not practical in clinical settings. With this in mind, a variation on the Monte Carlo method called macro Monte Carlo (MMC) was developed in the 1990s for electron beam radiotherapy dose calculations. To accelerate the simulation process, the electron MMC method used larger step sizes in regions of the simulation geometry where the size of the region was large relative to the size of a typical Monte Carlo step. These large steps were pre-computed using conventional Monte Carlo simulations and stored in a database featuring many step sizes and materials. The database was loaded into memory by a custom electron MMC code and used to transport electrons quickly through a heterogeneous absorbing geometry. The purpose of this thesis work was to apply the same techniques to proton radiotherapy dose calculation and light propagation Monte Carlo simulations. First, the MMC method was implemented for proton radiotherapy dose calculations. A database composed of pre-computed steps was created using MCNPX for many materials and beam energies. The database was used by a custom proton MMC code called PMMC to transport protons through a heterogeneous absorbing geometry. The PMMC code was tested against MCNPX for a number of different proton beam energies and geometries and proved to be accurate and much more efficient. The MMC method was also implemented for light propagation Monte Carlo simulations. The widely used Monte Carlo for multilayered media (MCML) code was modified to incorporate the MMC method. The original MCML uses basic scattering and absorption physics to transport optical photons through multilayered geometries.
The MMC version of MCML was tested against the original MCML code using a number of different geometries and proved to be just as accurate and more efficient. This work has the potential to accelerate light modeling for both photodynamic therapy and near-infrared spectroscopic imaging.

  1. A Monte Carlo method for the PDF (Probability Density Functions) equations of turbulent flow

    NASA Astrophysics Data System (ADS)

    Pope, S. B.

    1980-02-01

The transport equations of joint probability density functions (pdfs) in turbulent flows are simulated using a Monte Carlo method because finite difference solutions of the equations are impracticable, mainly due to the large dimensionality of the pdfs. Attention is focused on the equation for the joint pdf of chemical and thermodynamic properties in turbulent reactive flows. It is shown that the Monte Carlo method provides a true simulation of this equation and that the amount of computation required increases only linearly with the number of properties considered. Consequently, the method can be used to solve the pdf equation for turbulent flows involving many chemical species and complex reaction kinetics. Monte Carlo calculations of the pdf of temperature in a turbulent mixing layer are reported. These calculations are in good agreement with the measurements of Batt (1977).
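
    The particle representation behind this linear scaling can be sketched with a toy mixing calculation. The sketch below is illustrative only (it uses the IEM mixing closure with invented parameters, not Pope's actual formulation): the pdf of a scalar is carried by an ensemble of notional particles, and each additional property would simply be one more array.

```python
import numpy as np

# Toy particle Monte Carlo for a scalar pdf evolving under the IEM
# (interaction-by-exchange-with-the-mean) mixing model:
#   dphi_i/dt = -(omega/2) * (phi_i - <phi>)
# The pdf is represented by an ensemble of notional particles, so the
# cost grows linearly with the number of scalar properties carried.

rng = np.random.default_rng(0)
n_particles = 10_000
omega, dt, n_steps = 2.0, 0.01, 100

phi = rng.normal(loc=1.0, scale=0.5, size=n_particles)  # initial samples
mean0, var0 = phi.mean(), phi.var()

decay = np.exp(-0.5 * omega * dt)          # exact IEM relaxation per step
for _ in range(n_steps):
    phi = phi.mean() + decay * (phi - phi.mean())

# IEM preserves the mean and decays the variance as exp(-omega * t)
print(phi.mean(), phi.var() / var0)
```

    Because the model is linear in the deviation from the mean, the particle ensemble reproduces the analytic moment evolution exactly, which makes it a convenient sanity check for a pdf-method implementation.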

  2. Estimation of ultrasonic beam parameters uncertainty from NDT immersion probes using Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Alvarenga, A. V.; Silva, C. E. R.; Costa-Felix, R. P. B.

    2012-05-01

This paper presents the calculation of the ultrasonic beam parameters (focal distance, focal length, and focal widths along the X- and Y-axes) for non-destructive testing probes. The measurement uncertainties were estimated using the Monte Carlo method and compared to those obtained using the Guide to the Expression of Uncertainty in Measurement (GUM) approach. The results show that the mean values and the combined uncertainties are identical, but the probabilistically symmetric 95 % coverage intervals determined on the basis of the GUM uncertainty framework were more conservative than the ones achieved using the Monte Carlo method. Moreover, the calculation of the numerical tolerance between the coverage intervals obtained from the Monte Carlo method and GUM shows they are statistically different. Hence, a more conservative uncertainty estimate will be achieved using the GUM uncertainty framework.
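
    The contrast between the two frameworks can be sketched for an invented measurement model; the model y = x1/x2 and the input uncertainties below are illustrative, not the paper's probe data. The GUM route propagates first-order sensitivity coefficients, while the Monte Carlo route (GUM Supplement 1 style) samples the inputs and reads the coverage interval off the output percentiles.

```python
import numpy as np

# Compare first-order GUM propagation with Monte Carlo propagation for an
# illustrative measurement model y = x1 / x2.
rng = np.random.default_rng(1)

mu1, u1 = 5.0, 0.10      # estimate and standard uncertainty of x1
mu2, u2 = 10.0, 0.20     # estimate and standard uncertainty of x2

# --- GUM framework: first-order sensitivity coefficients ---
c1 = 1.0 / mu2                      # dy/dx1
c2 = -mu1 / mu2**2                  # dy/dx2
u_gum = np.hypot(c1 * u1, c2 * u2)  # combined standard uncertainty
gum_lo, gum_hi = mu1/mu2 - 1.96*u_gum, mu1/mu2 + 1.96*u_gum

# --- Monte Carlo method: sample inputs, take percentiles of the output ---
n = 200_000
y = rng.normal(mu1, u1, n) / rng.normal(mu2, u2, n)
mc_lo, mc_hi = np.percentile(y, [2.5, 97.5])

print(f"GUM 95%: [{gum_lo:.4f}, {gum_hi:.4f}]")
print(f"MCM 95%: [{mc_lo:.4f}, {mc_hi:.4f}]")
```

    For a mildly nonlinear model like this the two intervals nearly coincide; the differences the paper reports arise when the output distribution becomes noticeably non-Gaussian.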

  3. Sampling uncertainty evaluation for data acquisition board based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Ge, Leyi; Wang, Zhongyu

    2008-10-01

Evaluating the sampling uncertainty of a data acquisition board is a difficult problem in the field of signal sampling. This paper first analyzes the sources of data acquisition board sampling uncertainty, then introduces a simulation theory for data acquisition board sampling uncertainty evaluation based on the Monte Carlo method and puts forward a relational model of sampling uncertainty results, sample numbers and simulation times. For different sample numbers and different signal scopes, the authors establish a random sampling uncertainty evaluation program for a PCI-6024E data acquisition board to execute the simulation. The results of the proposed Monte Carlo simulation method are in good agreement with the GUM ones, and the validity of the Monte Carlo method is demonstrated.
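
    One such uncertainty source, quantization error, can be evaluated by Monte Carlo and checked against its well-known analytic value of LSB/√12. The sketch below models an idealized n-bit quantizer, not the PCI-6024E; a real evaluation would add jitter, gain and offset error sources.

```python
import numpy as np

# Monte Carlo evaluation of the quantization-error contribution to the
# sampling uncertainty of an idealized n-bit board (illustrative only).
rng = np.random.default_rng(2)

bits, vmin, vmax = 12, -1.0, 1.0
lsb = (vmax - vmin) / 2**bits

def quantize(v):
    """Round to the nearest code of an ideal mid-tread quantizer."""
    return np.clip(np.round((v - vmin) / lsb) * lsb + vmin, vmin, vmax)

v_true = rng.uniform(vmin, vmax, 500_000)      # simulated input values
err = quantize(v_true) - v_true

u_mc = err.std()                 # Monte Carlo standard uncertainty
u_analytic = lsb / np.sqrt(12)   # classic uniform-quantization result
print(u_mc, u_analytic)
```

    Repeating the experiment for different sample numbers reproduces the kind of uncertainty-versus-sample-count relation the paper builds its model around.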

  4. Comparison of the Monte Carlo adjoint-weighted and differential operator perturbation methods

    SciTech Connect

    Kiedrowski, Brian C; Brown, Forrest B

    2010-01-01

    Two perturbation theory methodologies are implemented for k-eigenvalue calculations in the continuous-energy Monte Carlo code, MCNP6. A comparison of the accuracy of these techniques, the differential operator and adjoint-weighted methods, is performed numerically and analytically. Typically, the adjoint-weighted method shows better performance over a larger range; however, there are exceptions.

  5. A Monte Carlo Study of Eight Confidence Interval Methods for Coefficient Alpha

    ERIC Educational Resources Information Center

    Romano, Jeanine L.; Kromrey, Jeffrey D.; Hibbard, Susan T.

    2010-01-01

    The purpose of this research is to examine eight of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions. In general, the differences in…

  6. Development of Continuous-Energy Eigenvalue Sensitivity Coefficient Calculation Methods in the Shift Monte Carlo Code

    SciTech Connect

    Perfetti, Christopher M; Martin, William R; Rearden, Bradley T; Williams, Mark L

    2012-01-01

    Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.

  7. High-order path-integral Monte Carlo methods for solving quantum dot problems

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.

    2015-03-01

    The conventional second-order path-integral Monte Carlo method is plagued with the sign problem in solving many-fermion systems. This is due to the large number of antisymmetric free-fermion propagators that are needed to extract the ground state wave function at large imaginary time. In this work we show that optimized fourth-order path-integral Monte Carlo methods, which use no more than five free-fermion propagators, can yield accurate quantum dot energies for up to 20 polarized electrons with the use of the Hamiltonian energy estimator.

  8. Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport

    SciTech Connect

    McKinley, M S; Brooks III, E D; Daffin, F

    2004-12-13

    Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.

  9. Improved methods of handling massive tallies in reactor Monte Carlo Code RMC

    SciTech Connect

    She, D.; Wang, K.; Sun, J.; Qiu, Y.

    2013-07-01

Monte Carlo simulations containing a large number of tallies generally suffer severe performance penalties due to a significant amount of run time spent in searching for and scoring individual tally bins. This paper describes improved methods of handling large numbers of tallies, which have been implemented in the RMC Monte Carlo code. The calculation results demonstrate that the proposed methods can considerably improve tally performance when massive tallies are treated. In the calculated case with 6 million tally regions, run time per active cycle increases by only 10% relative to an inactive cycle. (authors)
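
    The kind of tally-bin search cost targeted here can be illustrated with a toy scoring loop. The bin layout and event model below are invented for illustration and RMC's internal data structures differ; the point is only the contrast between a linear scan over all bins and a hashed index that locates the bin in constant time.

```python
import random

# Score events into many tally bins: linear search vs. hashed lookup.
random.seed(3)

bins = [(f"cell{i}", i, i + 1) for i in range(5_000)]   # (name, lo, hi)
events = [random.random() * 5_000 for _ in range(20_000)]

# Linear search: O(number of bins) per scored event
tally_linear = {name: 0 for name, _, _ in bins}
for e in events:
    for name, lo, hi in bins:
        if lo <= e < hi:
            tally_linear[name] += 1
            break

# Hashed lookup: O(1) per scored event for this uniform bin layout
tally_hash = {name: 0 for name, _, _ in bins}
index = {i: name for name, i, _ in bins}   # key: bin lower edge
for e in events:
    tally_hash[index[int(e)]] += 1

print(tally_linear == tally_hash)
```

    Both routes produce identical tallies; only the per-event cost changes, which is why lookup structure dominates once tallies number in the millions.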

  10. Quantum-trajectory Monte Carlo method for study of electron-crystal interaction in STEM.

    PubMed

    Ruan, Z; Zeng, R G; Ming, Y; Zhang, M; Da, B; Mao, S F; Ding, Z J

    2015-07-21

In this paper, a novel quantum-trajectory Monte Carlo simulation method is developed to study electron beam interaction with a crystalline solid for application to electron microscopy and spectroscopy. The method combines the Bohmian quantum trajectory method, which treats electron elastic scattering and diffraction in a crystal, with a Monte Carlo sampling of electron inelastic scattering events along quantum trajectory paths. We study the electron scattering and secondary electron generation processes in crystals for a focused incident electron beam, leading to an understanding of the imaging mechanism behind the atomic-resolution secondary electron images recently achieved with a scanning transmission electron microscope. In this method, the Bohmian quantum trajectories are first calculated from a wave function obtained via numerical solution of the time-dependent Schrödinger equation with a multislice method. The impact-parameter-dependent inner-shell excitation cross section then enables Monte Carlo sampling of ionization events produced by incident electron trajectories travelling along atom columns, which excite high-energy knock-on secondary electrons. The subsequent cascade production, transport and emission of very low energy true secondary electrons are traced by a conventional Monte Carlo simulation method to produce image signals. Comparison of the simulated image for a Si(110) crystal with the experimental image indicates that the dominant mechanism behind the atomic resolution of the secondary electron image is inner-shell ionization generated by the high-energy electron beam. PMID:26082190

  11. The Simulation-Tabulation Method for Classical Diffusion Monte Carlo

    NASA Astrophysics Data System (ADS)

    Hwang, Chi-Ok; Given, James A.; Mascagni, Michael

    2001-12-01

    Many important classes of problems in materials science and biotechnology require the solution of the Laplace or Poisson equation in disordered two-phase domains in which the phase interface is extensive and convoluted. Green's function first-passage (GFFP) methods solve such problems efficiently by generalizing the “walk on spheres” (WOS) method to allow first-passage (FP) domains to be not just spheres but a wide variety of geometrical shapes. (In particular, this solves the difficulty of slow convergence with WOS by allowing FP domains that contain patches of the phase interface.) Previous studies accomplished this by using geometries for which the Green's function was available in quasi-analytic form. Here, we extend these studies by using the simulation-tabulation (ST) method. We simulate and then tabulate surface Green's functions that cannot be obtained analytically. The ST method is applied to the Solc-Stockmayer model with zero potential, to the mean trapping rate of a diffusing particle in a domain of nonoverlapping spherical traps, and to the effective conductivity for perfectly insulating, nonoverlapping spherical inclusions in a matrix of finite conductivity. In all cases, this class of algorithms provides the most efficient methods known to solve these problems to high accuracy.
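
    A minimal WOS sketch for the simplest case, the Laplace equation in the unit disk where the first-passage distribution over each sphere is uniform, shows the structure that the ST method generalizes to tabulated Green's functions for more complicated first-passage domains.

```python
import math
import random

# Minimal "walk on spheres" (WOS) solver for the Laplace equation in the
# unit disk: each step jumps to a uniform point on the largest circle
# centered at the walker that fits inside the domain. With boundary data
# g(x, y) = x, the harmonic solution is u(x, y) = x, so the estimate at
# (0.3, 0.2) should approach 0.3.
random.seed(4)

def wos_estimate(x0, y0, n_walks=8000, eps=1e-4):
    total = 0.0
    for _ in range(n_walks):
        x, y = x0, y0
        while True:
            r = 1.0 - math.hypot(x, y)   # distance to the unit circle
            if r < eps:                  # absorbed: sample boundary data
                break
            t = random.uniform(0.0, 2.0 * math.pi)
            x += r * math.cos(t)
            y += r * math.sin(t)
        total += x                       # g(x, y) = x on the boundary
    return total / n_walks

u = wos_estimate(0.3, 0.2)
print(u)
```

    The slow convergence near the boundary mentioned in the abstract shows up here as walkers taking many shrinking steps before absorption; replacing the spheres with first-passage domains that contain boundary patches removes exactly that cost.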

  12. Coupling Deterministic and Monte Carlo Transport Methods for the Simulation of Gamma-Ray Spectroscopy Scenarios

    SciTech Connect

    Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.

    2008-10-31

    Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.

  13. Power Analysis for Complex Mediational Designs Using Monte Carlo Methods

    ERIC Educational Resources Information Center

    Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.

    2010-01-01

    Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex…

  14. Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Gilbreth, C. N.; Alhassid, Y.

    2015-03-01

    Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.

  15. Quantum Monte Carlo Methods for First Principles Simulation of Liquid Water

    ERIC Educational Resources Information Center

    Gergely, John Robert

    2009-01-01

Obtaining an accurate microscopic description of water structure and dynamics is of great interest to molecular biology researchers and in the physics and quantum chemistry simulation communities. This dissertation describes efforts to apply quantum Monte Carlo methods to this problem with the goal of making progress toward a fully "ab initio"…

  16. An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho

    2001-01-01

    Examined the accuracy of the Gibbs sampling Markov chain Monte Carlo procedure for estimating item and person (theta) parameters in the one-parameter logistic model. Analyzed four empirical datasets using the Gibbs sampling, conditional maximum likelihood, marginal maximum likelihood, and joint maximum likelihood methods. Discusses the conditions…

  17. An NCME Instructional Module on Estimating Item Response Theory Models Using Markov Chain Monte Carlo Methods

    ERIC Educational Resources Information Center

    Kim, Jee-Seon; Bolt, Daniel M.

    2007-01-01

    The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…

  19. Monte-Carlo methods make Dempster-Shafer formalism feasible

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik YA.; Bernat, Andrew; Borrett, Walter; Mariscal, Yvonne; Villa, Elsa

    1991-01-01

One of the main obstacles to the applications of the Dempster-Shafer formalism is its computational complexity. If we combine m different pieces of knowledge, then in the general case we have to perform up to 2^m computational steps, which for large m is infeasible. For several important cases algorithms with smaller running time have been proposed. We prove, however, that if we want to compute the belief bel(Q) in any given query Q, then exponential time is inevitable. It is still inevitable if we want to compute bel(Q) with a given precision epsilon. This restriction corresponds to the natural idea that since initial masses are known only approximately, there is no sense in trying to compute bel(Q) precisely. A further idea is that there is always some doubt in the whole knowledge, so there is always a probability p_0 that the expert's knowledge is wrong. In view of that, it is sufficient to have an algorithm that gives a correct answer with probability greater than 1 - p_0. If we use the original Dempster combination rule, this possibility diminishes the running time but still leaves the problem infeasible in the general case. We show that for the alternative combination rules proposed by Smets and Yager feasible methods exist. We also show how these methods can be parallelized, and what parallelization model fits this problem best.
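
    The Monte Carlo sampling structure such methods rest on can be sketched on a tiny two-element frame. The example below approximates Dempster's original rule (sample one focal element from each mass function, intersect, discard conflicting draws); the paper's feasibility results concern Smets's and Yager's rules, which use the same sampling skeleton but handle the conflicting draws differently.

```python
import random

# Monte Carlo approximation of Dempster's combination rule on the frame
# {a, b}: sample a focal element from each mass function, intersect, and
# discard conflicting (empty-intersection) draws.
random.seed(5)

m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
m2 = {frozenset("b"): 0.5, frozenset("ab"): 0.5}

def draw(m):
    return random.choices(list(m), weights=list(m.values()))[0]

counts, kept = {}, 0
for _ in range(50_000):
    inter = draw(m1) & draw(m2)
    if inter:                       # Dempster: renormalize over non-empty
        counts[inter] = counts.get(inter, 0) + 1
        kept += 1

mc = {k: v / kept for k, v in counts.items()}

# Exact combination for comparison: conflict K = 0.6 * 0.5 = 0.3
exact = {frozenset("a"): 0.3 / 0.7,
         frozenset("b"): 0.2 / 0.7,
         frozenset("ab"): 0.2 / 0.7}
print(mc, exact)
```

    Each sample costs O(m) work rather than enumerating all 2^m focal-element combinations, which is the source of the feasibility the title refers to.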

  20. A hybrid transport-diffusion method for Monte Carlo radiative-transfer simulations

    SciTech Connect

    Densmore, Jeffery D. . E-mail: jdd@lanl.gov; Urbatsch, Todd J. . E-mail: tmonster@lanl.gov; Evans, Thomas M. . E-mail: tme@lanl.gov; Buksas, Michael W. . E-mail: mwbuksas@lanl.gov

    2007-03-20

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo particle-transport simulations in diffusive media. If standard Monte Carlo is used in such media, particle histories will consist of many small steps, resulting in a computationally expensive calculation. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many small Monte Carlo steps, thus increasing the efficiency of the simulation. In addition, given that DDMC is based on a diffusion equation, it should produce accurate solutions if used judiciously. In practice, DDMC is combined with standard Monte Carlo to form a hybrid transport-diffusion method that can accurately simulate problems with both diffusive and non-diffusive regions. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for nonlinear, time-dependent, radiative-transfer calculations. The use of DDMC in these types of problems is advantageous since, due to the underlying linearizations, optically thick regions appear to be diffusive. First, we employ a diffusion equation that is discretized in space but is continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. Also, we treat the interface between optically thick and optically thin regions with an improved method, based on the asymptotic diffusion-limit boundary condition, that can produce accurate results regardless of the angular distribution of the incident Monte Carlo particles. 
Finally, we develop a technique for estimating radiation momentum deposition during the DDMC simulation, a quantity that is required to calculate correct fluid motion in coupled radiation-hydrodynamics problems. With a set of numerical examples, we demonstrate that our improved DDMC method is accurate and can provide efficiency gains of several orders of magnitude over standard Monte Carlo.

  1. Prediction of Quartz Crystal Microbalance Gas Sensor Responses Using Grand Canonical Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Nakamoto, Takamichi

Our group has studied an odor sensing system using an array of Quartz Crystal Microbalance (QCM) gas sensors and neural-network pattern recognition. In this odor sensing system, it is important to know the properties of the sensing films coated on the Quartz Crystal Microbalance electrodes. These sensing films have not been experimentally characterized well enough to predict the sensor response. We have investigated the prediction of sensor responses using a computational chemistry method, Grand Canonical Monte Carlo (GCMC) simulation. We have successfully predicted the amount of sorption using this method. The GCMC method requires no empirical parameters, unlike many other prediction methods used for QCM-based sensor response modeling. In this chapter, the Grand Canonical Monte Carlo method for predicting the response of QCM gas sensors is reviewed, and the modeling results are compared with experiments.
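
    The insertion/deletion move structure of GCMC can be sketched on a non-interacting lattice gas, for which the equilibrium occupancy has a closed form to check against. This toy has no interaction energies; predicting real film sorption, as in the chapter, requires force fields and molecular models.

```python
import numpy as np

# Minimal grand canonical Monte Carlo (GCMC): insertion/deletion moves on
# a non-interacting lattice gas at reduced chemical potential beta*mu.
# Stationary site occupancy: e^{beta*mu} / (1 + e^{beta*mu}).
rng = np.random.default_rng(6)

n_sites, beta_mu = 400, -1.0
occ = np.zeros(n_sites, dtype=bool)

acc_ins = min(1.0, np.exp(beta_mu))    # accept insertion at empty site
acc_del = min(1.0, np.exp(-beta_mu))   # accept deletion at filled site

samples = []
for sweep in range(3000):
    sites = rng.integers(0, n_sites, n_sites)
    u = rng.random(n_sites)
    for s, r in zip(sites, u):
        if occ[s]:
            if r < acc_del:
                occ[s] = False
        elif r < acc_ins:
            occ[s] = True
    if sweep >= 1000:                  # discard equilibration sweeps
        samples.append(occ.mean())

theory = 1.0 / (1.0 + np.exp(-beta_mu))
print(np.mean(samples), theory)
```

    The amount sorbed as a function of chemical potential (here a Langmuir-type curve) is the lattice-gas analogue of the sorption isotherms the GCMC film simulations predict.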

  2. Evaluation of the Monte Carlo method (KTMAN-2) in fluoroscopic dosimetry and comparison with experiment

    NASA Astrophysics Data System (ADS)

    Kim, Minho; Lee, Hyounggun; Kim, Hyosim; Park, Hongmin; Lee, Wonho; Park, Sungho

    2014-03-01

This study evaluated the Monte Carlo method for dose calculation in fluoroscopy by using a realistic human phantom. The dose was calculated by using Monte Carlo N-particle extended (MCNPX) in simulations and was measured by using the Korean Typical Man-2 (KTMAN-2) phantom in the experiments. MCNPX is a widely used simulation tool based on the Monte Carlo method and uses random sampling. KTMAN-2 is a virtual phantom written in the MCNPX language and is based on the typical Korean man. This study was divided into two parts: simulations and experiments. In the former, the spectrum generation program (SRS-78) was used to obtain the output energy spectrum for fluoroscopy; then, the dose to each target organ was calculated using KTMAN-2 with MCNPX. In the latter part, the output of the fluoroscope was calibrated first and TLDs (thermoluminescent dosimeters) were inserted in the ART (Alderson Radiation Therapy) phantom at the same places as in the simulation. The phantom was then exposed to radiation, and the simulated and the experimental doses were compared. To convert the simulation output to dose units, we set a normalization factor (NF). Comparing the simulated with the experimental results, we found most of the values to be similar, which proved the effectiveness of the Monte Carlo method in fluoroscopic dose evaluation. The equipment used in this study included a TLD, a TLD reader, an ART phantom, an ionization chamber and a fluoroscope.

  3. A Hamiltonian Monte-Carlo method for Bayesian inference of supermassive black hole binaries

    NASA Astrophysics Data System (ADS)

    Porter, Edward K.; Carré, Jérôme

    2014-07-01

We investigate the use of a Hamiltonian Monte-Carlo to map out the posterior density function for supermassive black hole binaries. While previous Markov Chain Monte-Carlo (MCMC) methods, such as Metropolis-Hastings MCMC, have been successfully employed for a number of different gravitational wave sources, these methods are essentially random walk algorithms. The Hamiltonian Monte-Carlo treats the inverse likelihood surface as a ‘gravitational potential’ and, by introducing canonical positions and momenta, dynamically evolves the Markov chain by solving Hamilton's equations of motion. This method is not as widely used as other MCMC algorithms because of the necessity of calculating gradients of the log-likelihood, which for most applications results in a bottleneck that makes the algorithm computationally prohibitive. We circumvent this problem by using accepted initial phase-space trajectory points to analytically fit for each of the individual gradients. Eliminating the waveform generation needed for the numerical derivatives reduces the total number of required templates for a 10^6 iteration chain from ~10^9 to ~10^6. The result is an implementation of the Hamiltonian Monte-Carlo that is faster, and more efficient by a factor of approximately the dimension of the parameter space, than a Hessian MCMC.
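
    The mechanics of a Hamiltonian Monte-Carlo update can be sketched for a one-dimensional Gaussian target. The gradient here is analytic, whereas the paper's contribution is precisely to fit the gradients from accepted trajectory points; that fitting step is not reproduced in this sketch.

```python
import numpy as np

# Minimal Hamiltonian Monte Carlo for a 1-D standard normal target.
# Potential U(q) = q^2/2 (the "inverse likelihood" surface), kinetic
# K(p) = p^2/2; each proposal integrates Hamilton's equations with a
# leapfrog scheme, then applies a Metropolis accept/reject on H = U + K.
rng = np.random.default_rng(7)

def grad_U(q):
    return q                       # dU/dq for U(q) = q^2 / 2

def leapfrog(q, p, eps, n_steps):
    p = p - 0.5 * eps * grad_U(q)  # initial half kick
    for _ in range(n_steps - 1):
        q = q + eps * p
        p = p - eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)  # final half kick
    return q, p

q, chain = 0.0, []
for _ in range(5000):
    p = rng.normal()                                 # refresh momentum
    q_new, p_new = leapfrog(q, p, eps=0.2, n_steps=10)
    dH = (q_new**2 + p_new**2 - q**2 - p**2) / 2.0   # energy change
    if rng.random() < np.exp(-dH):
        q = q_new                  # accept; otherwise keep current q
    chain.append(q)

print(np.mean(chain), np.var(chain))
```

    Because the leapfrog integrator nearly conserves H, acceptance stays high even for long trajectories, which is what lets HMC avoid random-walk behavior at the price of those gradient evaluations.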

  4. A new Monte Carlo power method for the eigenvalue problem of transfer matrices

    SciTech Connect

Koma, Tohru

    1993-04-01

The author proposes a new Monte Carlo method for calculating eigenvalues of transfer matrices leading to free energies and to correlation lengths of classical and quantum many-body systems. Generally, this method can be applied to the calculation of the maximum eigenvalue of a nonnegative matrix A such that all the matrix elements of A^k are strictly positive for an integer k. This method is based on a new representation of the maximum eigenvalue of the matrix A as the thermal average of a certain observable of a many-body system. Therefore one can easily calculate the maximum eigenvalue of a transfer matrix leading to the free energy in the standard Monte Carlo simulations, such as the Metropolis algorithm. As test cases, the author calculates the free energies of the square-lattice Ising model and of the spin-1/2 XY Heisenberg chain. He also proves two useful theorems on the ergodicity in quantum Monte Carlo algorithms, or more generally, on the ergodicity of Monte Carlo algorithms using the new representation of the maximum eigenvalue of the matrix A. 39 refs., 5 figs., 2 tabs.
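
    The underlying idea, estimating the dominant eigenvalue of a nonnegative matrix from sampled averages rather than full matrix arithmetic, can be sketched with weighted random walks. This toy is not Koma's thermal-average representation; it simply uses the fact that for walkers carrying weight n·A[i, j] per uniform step, E[W_k] = (A^k 1)_i0, so successive ratios converge to the maximum eigenvalue.

```python
import numpy as np

# Toy Monte Carlo power method: estimate the maximum eigenvalue of a
# nonnegative matrix from weighted random walks.
rng = np.random.default_rng(8)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # dominant eigenvalue 3 (eigenvector [1, 1])
n = A.shape[0]
n_walkers, k = 200_000, 8

idx = np.zeros(n_walkers, dtype=int)     # all walkers start at index 0
w = np.ones(n_walkers)                   # path weights
w_prev = w.copy()
for _ in range(k):
    w_prev = w.copy()
    nxt = rng.integers(0, n, n_walkers)  # uniform next index, prob 1/n
    w = w * n * A[idx, nxt]              # importance weight for the step
    idx = nxt

lam_est = w.mean() / w_prev.mean()       # ratio estimator for lambda_max
print(lam_est)
```

    For larger transfer matrices the weight variance grows with path length, which is exactly why reformulations such as Koma's thermal-average representation, sampled with Metropolis updates, are preferable in practice.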

  5. A computationally efficient moment-preserving Monte Carlo electron transport method with implementation in Geant4

    NASA Astrophysics Data System (ADS)

    Dixon, D. A.; Prinja, A. K.; Franke, B. C.

    2015-09-01

    This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method including: systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering DCS model by extending previous work on the MP method beyond analytical DCSs to the partial-wave (PW) elastic tabulated DCS data.

  6. A synergistic combination of the deterministic and Monte Carlo methods for double-heterogeneous problems

    SciTech Connect

    Kim, Y.; Shim, H. J.; Noh, T.

    2006-07-01

To resolve the double-heterogeneity (DH) problem resulting from the TRISO fuel of high-temperature gas-cooled reactors (HTGRs), a synergistic combination of a deterministic method and the Monte Carlo method has been proposed. As the deterministic approach, the RPT (Reactivity-equivalent Physical Transformation) method is adopted. In the combined methodology, a reference k-infinity value is obtained by the Monte Carlo method for an initial state of a problem and is used by the RPT method to transform the original DH problem into a conventional single-heterogeneous one; the transformed problem is then analyzed by conventional deterministic methods. The combined methodology has been applied to the depletion analysis of typical HTGR fuels, including both prismatic block and pebble. The reference solution is obtained using the Monte Carlo code MCCARD, and the accuracy of the deterministic-only and the combined methods is evaluated. For the deterministic solutions, the DRAGON and HELIOS codes were used. It has been shown that the combined method provides an accurate solution although the deterministic-only solution shows noticeable errors. For the pebble, the two deterministic codes cannot handle the DH problem. Nevertheless, we have shown that the solution of the DRAGON-MCCARD combined approach agrees well with the reference. (authors)

  7. A high-order photon Monte Carlo method for radiative transfer in direct numerical simulation

    SciTech Connect

Wu, Y.; Modest, M.F.; Haworth, D.C. (E-mail: dch12@psu.edu)

    2007-05-01

    A high-order photon Monte Carlo method is developed to solve the radiative transfer equation. The statistical and discretization errors of the computed radiative heat flux and radiation source term are isolated and quantified. Up to sixth-order spatial accuracy is demonstrated for the radiative heat flux, and up to fourth-order accuracy for the radiation source term. This demonstrates the compatibility of the method with high-fidelity direct numerical simulation (DNS) for chemically reacting flows. The method is applied to address radiative heat transfer in a one-dimensional laminar premixed flame and a statistically one-dimensional turbulent premixed flame. Modifications of the flame structure with radiation are noted in both cases, and the effects of turbulence/radiation interactions on the local reaction zone structure are revealed for the turbulent flame. Computational issues in using a photon Monte Carlo method for DNS of turbulent reacting flows are discussed.

  8. A Monte Carlo implementation of the predictor-corrector Quasi-Static method

    SciTech Connect

    Hackemack, M. W.; Ragusa, J. C.; Griesheimer, D. P.; Pounders, J. M.

    2013-07-01

The Quasi-Static method (QS) is a useful tool for solving reactor transients since it allows for larger time steps when updating neutron distributions. Because of the beneficial attributes of Monte Carlo (MC) methods (exact geometries and continuous energy treatment), it is desirable to develop a MC implementation for the QS method. In this work, the latest version of the QS method, known as the Predictor-Corrector Quasi-Static method, is implemented. Experiments utilizing two energy groups provide results that show good agreement with analytical and reference solutions. The method as presented can easily be implemented in any continuous energy, arbitrary geometry, MC code. (authors)

  9. Searching therapeutic agents for treatment of Alzheimer disease using the Monte Carlo method.

    PubMed

    Toropova, Mariya A; Toropov, Andrey A; Raška, Ivan; Rašková, Mária

    2015-09-01

Quantitative structure-activity relationships (QSARs) for the pIC50 (binding affinity) of gamma-secretase inhibitors can be constructed with the Monte Carlo method using CORAL software (http://www.insilico.eu/coral). The considerable influence of the presence of rings of various types with respect to the above endpoint has been detected. The mechanistic interpretation and the domain of applicability of the QSARs are discussed. Methods to select new potential gamma-secretase inhibitors are suggested. PMID:26164035

  10. Frequency-domain Monte Carlo method for linear oscillatory gas flows

    NASA Astrophysics Data System (ADS)

    Ladiges, Daniel R.; Sader, John E.

    2015-03-01

    Gas flows generated by resonating nanoscale devices inherently occur in the non-continuum, low Mach number regime. Numerical simulations of such flows using the standard direct simulation Monte Carlo (DSMC) method are hindered by high statistical noise, which has motivated the development of several alternate Monte Carlo methods for low Mach number flows. Here, we present a frequency-domain low Mach number Monte Carlo method based on the Boltzmann-BGK equation, for the simulation of oscillatory gas flows. This circumvents the need for temporal simulations, as is currently required, and provides direct access to both amplitude and phase information using a pseudo-steady algorithm. The proposed method is validated for oscillatory Couette flow and the flow generated by an oscillating sphere. Good agreement is found with an existing time-domain method and accurate numerical solutions of the Boltzmann-BGK equation. Analysis of these simulations using a rigorous statistical approach shows that the frequency-domain method provides a significant improvement in computational speed.

  11. Development of perturbation Monte Carlo methods for polarized light transport in a discrete particle scattering model

    PubMed Central

    Nguyen, Jennifer; Hayakawa, Carole K.; Mourant, Judith R.; Venugopalan, Vasan; Spanier, Jerome

    2016-01-01

    We present a polarization-sensitive, transport-rigorous perturbation Monte Carlo (pMC) method to model the impact of optical property changes on reflectance measurements within a discrete particle scattering model. The model consists of three log-normally distributed populations of Mie scatterers that approximate biologically relevant cervical tissue properties. Our method provides reflectance estimates for perturbations across wavelength and/or scattering model parameters. We test our pMC model performance by perturbing across number densities and mean particle radii, and compare pMC reflectance estimates with those obtained from conventional Monte Carlo simulations. These tests allow us to explore different factors that control pMC performance and to evaluate the gains in computational efficiency that our pMC method provides. PMID:27231642

  12. A Monte Carlo method for solving the one-dimensional telegraph equations with boundary conditions

    NASA Astrophysics Data System (ADS)

    Acebrón, Juan A.; Ribeiro, Marco A.

    2016-01-01

    A Monte Carlo algorithm is derived to solve the one-dimensional telegraph equations in a bounded domain subject to resistive and non-resistive boundary conditions. The proposed numerical scheme is more efficient than the classical Kac's theory because it does not require the discretization of time. The algorithm has been validated by comparing the results obtained with theory and the Finite-difference time domain (FDTD) method for a typical two-wire transmission line terminated at both ends with general boundary conditions. We have also tested transmission line heterogeneities to account for wave propagation in multiple media. The algorithm is inherently parallel, since it is based on Monte Carlo simulations, and does not suffer from the numerical dispersion and dissipation issues that arise in finite difference-based numerical schemes on a lossy medium. This allowed us to develop an efficient numerical method, capable of outperforming the classical FDTD method for large scale problems and high frequency signals.
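The Kac stochastic representation that such algorithms build on is easy to sketch: the solution is an average of the initial data over random motions whose velocity flips sign at a Poisson rate. The following free-space version (no boundaries or transmission-line terminations, which are the paper's actual subject; all names and parameters are illustrative) is a minimal sketch, not the authors' scheme:

```python
import numpy as np

def telegraph_kac(f, x, t, a=1.0, c=1.0, n_samples=20000, rng=None):
    """Estimate u(x,t) for u_tt + 2a u_t = c^2 u_xx with u(x,0) = f(x),
    u_t(x,0) = 0, via Kac's representation: average f over random motions
    whose velocity (+/- c) flips sign at Poisson rate a."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(n_samples):
        pos = x
        v = c * rng.choice([-1.0, 1.0])     # random initial direction
        s = 0.0
        while True:
            dt = rng.exponential(1.0 / a)   # time to next velocity flip
            if s + dt >= t:
                pos += v * (t - s)          # final partial flight
                break
            pos += v * dt
            v = -v
            s += dt
        total += f(pos)
    return total / n_samples
```

For f(x) = cos(kx) the exact solution is cos(kx) e^{-at}[cosh(wt) + (a/w) sinh(wt)] with w = sqrt(a^2 - c^2 k^2), which gives a direct check of the estimator.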

  13. Estimation of magnetocaloric properties by using Monte Carlo method for AMRR cycle

    NASA Astrophysics Data System (ADS)

    Arai, R.; Tamura, R.; Fukuda, H.; Li, J.; Saito, A. T.; Kaji, S.; Nakagome, H.; Numazawa, T.

    2015-12-01

In order to achieve a wide refrigerating temperature range in magnetic refrigeration, it is effective to layer multiple materials with different Curie temperatures. It is crucial to have a detailed understanding of the physical properties of the materials to optimize the material selection and the layered structure. In the present study, we discuss methods for estimating changes in physical properties, particularly the Curie temperature, when some of the Gd atoms are substituted by non-magnetic elements, taking Gd, a typical ferromagnetic magnetocaloric material, as the base for material design. For this purpose, whilst making calculations using the S=7/2 Ising model and the Monte Carlo method, we made specific heat and magnetization measurements of Gd-R alloys (R = Y, Zr) to compare experimental and calculated values. The results showed that the magnetic entropy change, specific heat, and Curie temperature can be estimated with good accuracy using the Monte Carlo method.
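The estimation strategy — Metropolis Monte Carlo on an Ising model with a fraction of sites made non-magnetic to mimic substitution — can be illustrated with a minimal spin-1/2 sketch (the study itself uses an S = 7/2 model; the lattice size, sweep counts, and dilution mechanism here are illustrative assumptions):

```python
import numpy as np

def ising_magnetization(L=12, T=1.5, sweeps=300, dilution=0.0, rng=None):
    """Metropolis Monte Carlo for a 2D spin-1/2 Ising model (J = 1, k_B = 1).
    A fraction `dilution` of sites is non-magnetic (spin fixed to 0),
    mimicking substitution of magnetic atoms, which lowers the ordering
    temperature.  Returns |M| per magnetic site averaged over the last
    half of the sweeps."""
    rng = np.random.default_rng(rng)
    spins = np.ones((L, L))                   # ordered start speeds equilibration
    mask = rng.random((L, L)) >= dilution     # True = magnetic site
    spins[~mask] = 0.0
    n_mag = int(mask.sum())
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i = rng.integers(L)
            j = rng.integers(L)
            if not mask[i, j]:
                continue
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nb       # energy change of a spin flip
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1.0
        if sweep >= sweeps // 2:
            mags.append(abs(spins.sum()) / n_mag)
    return float(np.mean(mags))
```

Scanning the temperature (and the dilution level) and watching where the magnetization collapses gives the Monte Carlo estimate of the Curie temperature that the abstract describes.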

  14. Development of perturbation Monte Carlo methods for polarized light transport in a discrete particle scattering model.

    PubMed

    Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Venugopalan, Vasan; Spanier, Jerome

    2016-05-01

    We present a polarization-sensitive, transport-rigorous perturbation Monte Carlo (pMC) method to model the impact of optical property changes on reflectance measurements within a discrete particle scattering model. The model consists of three log-normally distributed populations of Mie scatterers that approximate biologically relevant cervical tissue properties. Our method provides reflectance estimates for perturbations across wavelength and/or scattering model parameters. We test our pMC model performance by perturbing across number densities and mean particle radii, and compare pMC reflectance estimates with those obtained from conventional Monte Carlo simulations. These tests allow us to explore different factors that control pMC performance and to evaluate the gains in computational efficiency that our pMC method provides. PMID:27231642

  15. A comparison of generalized hybrid Monte Carlo methods with and without momentum flip

    SciTech Connect

    Akhmatskaya, Elena; Bou-Rabee, Nawaf; Reich, Sebastian

    2009-04-01

The generalized hybrid Monte Carlo (GHMC) method combines Metropolis corrected constant energy simulations with a partial random refreshment step in the particle momenta. The standard detailed balance condition requires that momenta are negated upon rejection of a molecular dynamics proposal step. The implication is a trajectory reversal upon rejection, which is undesirable when interpreting GHMC as thermostated molecular dynamics. We show that a modified detailed balance condition can be used to implement GHMC without momentum flips. The same modification can be applied to the generalized shadow hybrid Monte Carlo (GSHMC) method. Numerical results indicate that GHMC/GSHMC implementations with momentum flip display favorable sampling efficiency, i.e., the traditional implementations with momentum flip retain the advantage of a higher acceptance rate and faster decorrelation of Monte Carlo samples. The difference is more pronounced for GHMC. We also numerically investigate the behavior of the GHMC method as a Langevin-type thermostat. We find that the GHMC method without momentum flip interferes less with the underlying stochastic molecular dynamics in terms of autocorrelation functions and is therefore to be preferred over the GHMC method with momentum flip. The same finding applies to GSHMC.
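A minimal sketch of the GHMC step structure for a 1-D Gaussian target shows where the momentum flip enters. The no-flip branch below simply keeps the momentum unchanged (the paper's modified detailed balance condition is not reproduced here), so only the flip variant should be regarded as exact; all parameter values are illustrative:

```python
import numpy as np

def ghmc(n_steps=20000, phi=1.0, dt=0.9, flip_on_reject=True, rng=None):
    """Generalized hybrid Monte Carlo for a 1-D standard Gaussian target,
    U(q) = q^2/2.  Each step: partial momentum refresh, one leapfrog
    proposal, Metropolis test.  With flip_on_reject=True the momentum is
    negated on rejection (standard detailed balance); the no-flip variant
    is sketched here only schematically."""
    rng = np.random.default_rng(rng)
    q, p = 0.0, rng.normal()
    samples = np.empty(n_steps)
    for k in range(n_steps):
        # partial refresh: preserves the Gaussian momentum distribution
        p = np.cos(phi) * p + np.sin(phi) * rng.normal()
        # one leapfrog step for H = p^2/2 + q^2/2
        p_half = p - 0.5 * dt * q
        q_new = q + dt * p_half
        p_new = p_half - 0.5 * dt * q_new
        dH = 0.5 * (p_new**2 + q_new**2) - 0.5 * (p**2 + q**2)
        if rng.random() < np.exp(-dH):
            q, p = q_new, p_new               # accept proposal
        elif flip_on_reject:
            p = -p                            # trajectory reversal on rejection
        samples[k] = q
    return samples
```

With the flip enabled, the chain targets the standard normal exactly, so its sample mean and variance can be checked directly.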

  16. On a Monte Carlo method for measurement uncertainty evaluation and its implementation

    NASA Astrophysics Data System (ADS)

    Harris, P. M.; Cox, M. G.

    2014-08-01

    The ‘Guide to the Expression of Uncertainty in Measurement’ (GUM) provides a framework and procedure for evaluating and expressing measurement uncertainty. The procedure has two main limitations. Firstly, the way a coverage interval is constructed to contain values of the measurand with a stipulated coverage probability is approximate. Secondly, insufficient guidance is given for the multivariate case in which there is more than one measurand. In order to address these limitations, two specific guidance documents (or ‘Supplements to the GUM’) on, respectively, a Monte Carlo method for uncertainty evaluation (Supplement 1) and extensions to any number of measurands (Supplement 2) have been published. A further document on developing and using measurement models in the context of uncertainty evaluation (Supplement 3) is also planned, but not considered in this paper. An overview is given of these guidance documents. In particular, a Monte Carlo method, which is the focus of Supplements 1 and 2, is described as a numerical approach to implement the ‘propagation of distributions’ formulated using the ‘change of variables formula’. Although applying a Monte Carlo method is conceptually straightforward, some of the practical aspects of using the method are considered, such as the choice of the number of trials and ensuring an implementation is memory-efficient. General comments about the implications of using the method in measurement and calibration services, such as the need to achieve transferability of measurement results, are made.
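In code, the propagation-of-distributions procedure of Supplement 1 reduces to drawing from the PDFs assigned to the inputs, evaluating the measurement model, and reading coverage intervals off the empirical output distribution. A minimal sketch (the model V = I·R and the input distributions are invented for illustration):

```python
import numpy as np

def propagate(model, inputs, n_trials=200_000, coverage=0.95, rng=None):
    """GUM Supplement 1 style Monte Carlo uncertainty propagation.
    `inputs` is a list of callables, each drawing n samples from the PDF
    assigned to one input quantity; `model` maps the input arrays to the
    output quantity.  Returns the estimate (mean), the standard
    uncertainty (std), and a probabilistically symmetric coverage
    interval from the empirical output distribution."""
    rng = np.random.default_rng(rng)
    draws = [draw(rng, n_trials) for draw in inputs]
    y = model(*draws)
    alpha = (1.0 - coverage) / 2.0
    lo, hi = np.quantile(y, [alpha, 1.0 - alpha])
    return y.mean(), y.std(ddof=1), (lo, hi)

# Example: V = I * R, I ~ N(2.0 A, 0.01 A), R ~ rectangular(9.9, 10.1) ohm
mean, u, (lo, hi) = propagate(
    lambda i, r: i * r,
    [lambda rng, n: rng.normal(2.0, 0.01, n),
     lambda rng, n: rng.uniform(9.9, 10.1, n)],
    rng=42)
```

The choice of the number of trials and memory-efficient (streaming) implementations, discussed in the paper, matter once models are expensive; for this toy model one batch of 2×10^5 trials suffices.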

  17. Monte Carlo method for photon heating using temperature-dependent optical properties.

    PubMed

    Slade, Adam Broadbent; Aguilar, Guillermo

    2015-02-01

The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to error in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems where the temperature will greatly vary, such as in the case of laser-thawing of frozen tissues. A numerical simulation was developed that utilizes the Monte Carlo method for photon transport to simulate the thermal response of a system that allows temperature-dependent optical and thermal properties. This was done by combining traditional Monte Carlo photon transport with a heat transfer simulation to provide a feedback loop that selects local properties based on current temperatures, for each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogenous (but not isothermal) material. Validation of the simulation was done using comparisons to established Monte Carlo simulations using constant properties, and a comparison to the Beer-Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties can vary with temperature. The difference in results between variable-property and constant-property methods for the representative system of laser-heated silicon can become larger than 100 K. This simulation will return more accurate results of optical irradiation absorption in a material which undergoes a large change in temperature. This increased accuracy in simulated results leads to better thermal predictions in living tissues and can provide enhanced planning and improved experimental and procedural outcomes. PMID:25488656
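The feedback loop described — transport photons with the current local properties, deposit the absorbed energy, update temperatures, re-evaluate the properties — can be sketched in a toy 1-D absorption-only setting. This is not the paper's simulation: there is no scattering or heat conduction, and the linear mu_a(T) law and all constants are invented for illustration:

```python
import numpy as np

def heat_with_feedback(n_cells=50, thickness=1.0, n_photons=2000, n_steps=10,
                       mu_a0=1.0, beta=0.02, dT_per_absorb=50.0, rng=None):
    """Toy 1-D absorption-only Monte Carlo with a temperature feedback
    loop: after each batch of photons, the absorbed energy raises the
    local temperature, and the local absorption coefficient
    mu_a(T) = mu_a0 * (1 + beta*T) is re-evaluated before the next batch.
    All units are illustrative.  Returns the cell temperatures."""
    rng = np.random.default_rng(rng)
    dx = thickness / n_cells
    T = np.zeros(n_cells)
    for _ in range(n_steps):
        mu = mu_a0 * (1.0 + beta * T)      # local properties at current T
        absorbed = np.zeros(n_cells)
        for _ in range(n_photons):
            # sample an optical depth, then walk cell by cell until spent
            tau = rng.exponential()
            cell = 0
            while cell < n_cells and tau > mu[cell] * dx:
                tau -= mu[cell] * dx
                cell += 1
            if cell < n_cells:
                absorbed[cell] += 1.0      # photon absorbed in this cell
        T += dT_per_absorb * absorbed / n_photons
    return T
```

Because each batch sees the temperatures produced by the previous one, the heating profile steepens near the illuminated face as mu_a rises there — the qualitative effect the constant-property method misses.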

  18. Computing the principal eigenelements of some linear operators using a branching Monte Carlo method

    SciTech Connect

Lejay, Antoine; Maire, Sylvain

    2008-12-01

In earlier work, we developed a Monte Carlo method to compute the principal eigenvalue of linear operators, which was based on the simulation of exit times. In this paper, we generalize this approach by showing how to use a branching method to improve the efficacy of simulating large exit times for the purpose of computing eigenvalues. Furthermore, we show that this new method provides a natural estimation of the first eigenfunction of the adjoint operator. Numerical examples of this method are given for the Laplace operator and a homogeneous neutron transport operator.
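The exit-time principle behind the method: the principal eigenvalue appears as the decay rate of the survival probability of a diffusion in the domain. A plain (non-branching) sketch for -(1/2) d^2/dx^2 on (0,1), whose exact principal Dirichlet eigenvalue is pi^2/2; the paper's branching step, which improves precisely the scarce large-exit-time statistics, is omitted, and the time step and sample counts are illustrative:

```python
import numpy as np

def principal_eigenvalue(n_paths=20000, dt=1e-4, t1=0.2, t2=0.5, rng=None):
    """Exit-time Monte Carlo estimate of the principal Dirichlet
    eigenvalue of -(1/2) d^2/dx^2 on (0,1).  Since P(tau > t) ~
    C exp(-lam*t) for large t, lam is read off from the survival
    probability at two times.  The finite time step biases lam slightly
    low (discrete checking misses some boundary crossings)."""
    rng = np.random.default_rng(rng)
    x = np.full(n_paths, 0.5)                 # all walkers start at the centre
    alive = np.ones(n_paths, dtype=bool)
    s1 = s2 = 0
    n_steps = int(t2 / dt)
    sqdt = np.sqrt(dt)
    for k in range(1, n_steps + 1):
        x[alive] += sqdt * rng.normal(size=alive.sum())
        alive &= (x > 0.0) & (x < 1.0)        # kill walkers that exited
        if k == int(t1 / dt):
            s1 = alive.sum()                  # survivors at t1
        s2 = alive.sum()                      # survivors at current time
    return np.log(s1 / s2) / (t2 - t1)
```

As the abstract notes, few paths survive to large t (here roughly 10% reach t2), which is exactly the statistic the branching method is designed to enrich.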

  19. High-order Path Integral Monte Carlo methods for solving strongly correlated fermion problems

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.

    2015-03-01

In solving for the ground state of a strongly correlated many-fermion system, the conventional second-order Path Integral Monte Carlo method is plagued with the sign problem. This is due to the large number of anti-symmetric free fermion propagators that are needed to extract the square of the ground state wave function at large imaginary time. In this work, I show that optimized fourth-order Path Integral Monte Carlo methods, which use no more than 5 free-fermion propagators, in conjunction with the Hamiltonian energy estimator, can yield accurate ground state energies for quantum dots with up to 20 polarized electrons. The correlations are directly built-in and no explicit wave functions are needed. This work is supported by the Qatar National Research Fund NPRP GRANT #5-674-1-114.

  20. Path-integral Monte Carlo method for the local Z2 Berry phase.

    PubMed

    Motoyama, Yuichi; Todo, Synge

    2013-02-01

    We present a loop cluster algorithm Monte Carlo method for calculating the local Z(2) Berry phase of the quantum spin models. The Berry connection, which is given as the inner product of two ground states with different local twist angles, is expressed as a Monte Carlo average on the worldlines with fixed spin configurations at the imaginary-time boundaries. The "complex weight problem" caused by the local twist is solved by adopting the meron cluster algorithm. We present the results of simulation on the antiferromagnetic Heisenberg model on an out-of-phase bond-alternating ladder to demonstrate that our method successfully detects the change in the valence bond pattern at the quantum phase transition point. We also propose that the gauge-fixed local Berry connection can be an effective tool to estimate precisely the quantum critical point. PMID:23496453

  1. A step beyond the Monte Carlo method in economics: Application of multivariate normal distribution

    NASA Astrophysics Data System (ADS)

    Kabaivanov, S.; Malechkova, A.; Marchev, A.; Milev, M.; Markovska, V.; Nikolova, K.

    2015-11-01

In this paper we discuss the numerical algorithm of Milev-Tagliani [25] used for pricing of discrete double barrier options. The problem can be reduced to accurate valuation of an n-dimensional path integral with the probability density function of a multivariate normal distribution. The efficient solution of this problem with the Milev-Tagliani algorithm is a step beyond the classical application of Monte Carlo for option pricing. We explore continuous and discrete monitoring of asset path pricing, compare the error of frequently applied quantitative methods such as the Monte Carlo method and finally analyze the accuracy of the Milev-Tagliani algorithm using the research and results of Hong, Lee and Li [16].
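The benchmark such quadrature algorithms are measured against is the plain Monte Carlo valuation of the discretely monitored double knock-out option. A minimal sketch under geometric Brownian motion (all market parameters are invented for illustration, and this is not the Milev-Tagliani algorithm itself):

```python
import numpy as np

def double_barrier_call(S0=100.0, K=100.0, L=80.0, U=120.0, r=0.05,
                        sigma=0.2, T=0.5, n_monitor=125, n_paths=100_000,
                        rng=None):
    """Plain Monte Carlo price of a discretely monitored double knock-out
    call under geometric Brownian motion.  The option pays (S_T - K)^+
    unless the asset leaves (L, U) at any of the monitoring dates."""
    rng = np.random.default_rng(rng)
    dt = T / n_monitor
    drift = (r - 0.5 * sigma**2) * dt
    vol = sigma * np.sqrt(dt)
    S = np.full(n_paths, S0)
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n_monitor):
        S *= np.exp(drift + vol * rng.normal(size=n_paths))
        alive &= (S > L) & (S < U)        # knock-out check at each date
    payoff = np.where(alive, np.maximum(S - K, 0.0), 0.0)
    return np.exp(-r * T) * payoff.mean()
```

The barrier price must lie strictly below the vanilla Black-Scholes call value (about 6.89 for these parameters), which provides a quick sanity bound on the estimate.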

  2. Quantum World-line Monte Carlo Method with Non-binary Loops and Its Application

    NASA Astrophysics Data System (ADS)

    Harada, K.

A quantum world-line Monte Carlo method for highly symmetric quantum models is proposed. Firstly, based on a representation of the partition function using the Matsubara formula, the principle of quantum world-line Monte Carlo methods is briefly outlined and a new algorithm using non-binary loops is given for quantum models with high symmetry such as SU(N). The algorithm is called the non-binary loop algorithm because of its non-binary loop updates. Secondly, one example of our numerical studies using the non-binary loop update is shown: the problem of the ground state of two-dimensional SU(N) anti-ferromagnets. Our numerical study confirms that the ground state in the small-N (N ≤ 4) case is a magnetically ordered Néel state, but the one in the large-N (N ≥ 5) case has no magnetic order and becomes a dimer state.

  3. Modeling of HF chemical laser flowfields using the Direct Simulation Monte Carlo method

    SciTech Connect

McGregor, R.D.; Haflinger, D.E.; Lohn, P.D.; Sollee, J.L.; Behrens, H.W.; Duncan, W.A. (U.S. Army Missile Command, Redstone Arsenal, AL)

    1992-07-01

    A methodology, based on the Direct Simulation Monte Carlo (DSMC) approach, has been developed to screen injector concepts for high-energy chemical lasers. This methodology involves modeling the associated complex three-dimensional, reacting, multispecies flowfields and has been validated by comparison with experimental measurements. The method enables screening of new high-performance injector concepts and has the potential of greatly minimizing idea-to-implementation time and cost. 5 refs.

  4. Effects of treatment distance and field size on build-up characteristics of Monte Carlo calculated absorbed dose for electron irradiation.

    PubMed

    Lin, H; Wu, D S; Wu, A D

    2004-12-01

Surface, build-up and depth dose characteristics of a monoenergetic electron point source simulated by the Monte Carlo code MCNP4c for varying field size and SSD are extensively studied in this paper. MCNP4c (Monte Carlo N-Particle Transport Code System) has been extensively used in clinical dose simulation for its versatility and powerful geometrical coding tool. A sharp increase in PDD is seen with the Monte Carlo modelling immediately at the surface within the first 0.2 mm. This effect cannot be easily measured by experimental instruments for electron contamination, and may lead to a clinical underdosing of the basal cell layer, which is one of the most radiation sensitive layers and the main target for skin carcinogenesis. A high percentage build-up dose for electron irradiation was shown. No significant effects in surface PDDs were modelled with different SSD values from 95 cm to 125 cm. Three depths were studied in detail, these being 0.05 mm, the lower depth of the basal cell layer; 0.95 mm, the lower depth of the dermal layer; and 0.95 cm, a position within the subcutaneous tissue. Results showed that only small surface PDD differences were modelled for SSD variations from 95 cm to 125 cm and for square field sizes from between 5 cm and 10 cm on a side up to 25 cm. For smaller fields the surface dose shows an increasing trend, by about 7% at 5 x 5 cm2. PMID:15712590

  5. Monte Carlo Methods in Materials Science Based on FLUKA and ROOT

    NASA Technical Reports Server (NTRS)

    Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor

    2003-01-01

A comprehensive understanding of mitigation measures for space radiation protection necessarily involves the relevant fields of nuclear physics and particle transport modeling. One method of modeling the interaction of radiation traversing matter is Monte Carlo analysis, a subject that has been evolving since the very advent of nuclear reactors and particle accelerators in experimental physics. Countermeasures for radiation protection from neutrons near nuclear reactors, for example, were an early application and Monte Carlo methods were quickly adapted to this general field of investigation. The project discussed here is concerned with taking the latest tools and technology in Monte Carlo analysis and adapting them to space applications such as radiation shielding design for spacecraft, as well as investigating how next-generation Monte Carlos can complement the existing analytical methods currently used by NASA. We have chosen to employ the Monte Carlo program known as FLUKA (A legacy acronym based on the German for FLUctuating KAscade) used to simulate all of the particle transport, and the CERN developed graphical-interface object-oriented analysis software called ROOT. One aspect of space radiation analysis for which the Monte Carlos are particularly suited is the study of secondary radiation produced as albedoes in the vicinity of the structural geometry involved. This broad goal of simulating space radiation transport through the relevant materials employing the FLUKA code necessarily requires the addition of the capability to simulate all heavy-ion interactions from 10 MeV/A up to the highest conceivable energies. For all energies above 3 GeV/A the Dual Parton Model (DPM) is currently used, although the possible improvement of the DPMJET event generator for energies 3-30 GeV/A is being considered. One of the major tasks still facing us is the provision for heavy ion interactions below 3 GeV/A.
The ROOT interface is being developed in conjunction with the CERN ALICE (A Large Ion Collisions Experiment) software team through an adaptation of their existing AliROOT (ALICE Using ROOT) architecture. In order to check our progress against actual data, we have chosen to simulate the ATIC14 (Advanced Thin Ionization Calorimeter) cosmic-ray astrophysics balloon payload as well as neutron fluences in the Mir spacecraft. This paper contains a summary of status of this project, and a roadmap to its successful completion.

  6. Inverse Monte Carlo method in a multilayered tissue model for diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas

    2012-04-01

    Model based data analysis of diffuse reflectance spectroscopy data enables the estimation of optical and structural tissue parameters. The aim of this study was to present an inverse Monte Carlo method based on spectra from two source-detector distances (0.4 and 1.2 mm), using a multilayered tissue model. The tissue model variables include geometrical properties, light scattering properties, tissue chromophores such as melanin and hemoglobin, oxygen saturation and average vessel diameter. The method utilizes a small set of presimulated Monte Carlo data for combinations of different levels of epidermal thickness and tissue scattering. The path length distributions in the different layers are stored and the effect of the other parameters is added in the post-processing. The accuracy of the method was evaluated using Monte Carlo simulations of tissue-like models containing discrete blood vessels, evaluating blood tissue fraction and oxygenation. It was also compared to a homogeneous model. The multilayer model performed better than the homogeneous model and all tissue parameters significantly improved spectral fitting. Recorded in vivo spectra were fitted well at both distances, which we previously found was not possible with a homogeneous model. No absolute intensity calibration is needed and the algorithm is fast enough for real-time processing.

  7. Methods for Monte Carlo simulation of the exospheres of the moon and Mercury

    NASA Astrophysics Data System (ADS)

    Hodges, R. R.

    1980-01-01

A general form of the integral equation of exospheric transport on moon-like bodies is derived in a form that permits arbitrary specification of time varying physical processes affecting atom creation and annihilation, atom-regolith collisions, adsorption and desorption, and nonplanetocentric acceleration. Because these processes usually defy analytic representation, the Monte Carlo method of solution of the transport equation, the only viable alternative, is described in detail, with separate discussions of the methods of specification of physical processes as probabilistic functions. Proof of the validity of the Monte Carlo exosphere simulation method is provided in the form of a comparison of analytic and Monte Carlo solutions to three classical, and analytically tractable, exosphere problems. One of the key phenomena in moonlike exosphere simulations, the distribution of velocities of the atoms leaving a regolith, depends mainly on the nature of collisions of free atoms with rocks. It is shown that on the moon and Mercury, elastic collisions of helium atoms with a Maxwellian distribution of vibrating, bound atoms produce a nearly Maxwellian distribution of helium velocities, despite the absence of speeds in excess of escape in the impinging helium velocity distribution.

  8. A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems

    SciTech Connect

    Keady, K P; Brantley, P

    2010-03-04

    Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. 
Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model) for deep penetration problems such as examined in this paper. In this research, we investigate the application of a variant of the hybrid Monte Carlo-deterministic method proposed by Cooper and Larsen to global deep penetration problems involving binary stochastic media. To our knowledge, hybrid Monte Carlo-deterministic methods have not previously been applied to problems involving a stochastic medium. We investigate two approaches for computing the approximate deterministic estimate of the forward scalar flux distribution used to automatically generate the weight windows. The first approach uses the atomic mix approximation to the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. The second approach uses the Levermore-Pomraning model for the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. In both cases, we use Monte Carlo Algorithm B with weight windows automatically generated from the approximate forward scalar flux distribution to obtain the solution of the transport problem.
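The Cooper-Larsen idea referenced here — use a cheap deterministic forward-flux estimate to set weight-window centres so that Monte Carlo particles are distributed evenly across a deep-penetration problem — can be sketched in a toy 1-D rod-model slab with implicit capture. The exponential flux shape below stands in for the low-order discrete-ordinates solve, no stochastic-medium treatment is attempted, and all windows and thresholds are illustrative:

```python
import numpy as np

def slab_collision_density(n_cells=10, sigma_t=1.0, c=0.5, n_hist=20000,
                           rng=None):
    """Toy 1-D (rod-model) deep-penetration slab, cells 1 mean-free-path
    thick, scattering ratio c.  A deterministic forward-flux estimate sets
    cell-wise weight-window centres; histories use implicit capture plus
    splitting/roulette against those windows.  Returns the per-history
    collision-density tally."""
    rng = np.random.default_rng(rng)
    length = float(n_cells)
    # deterministic flux-shape estimate -> weight-window centres
    centres = np.exp(-(1.0 - c) * sigma_t * (np.arange(n_cells) + 0.5))
    tally = np.zeros(n_cells)
    for _ in range(n_hist):
        stack = [(0.0, 1.0, 1.0)]             # (position, direction, weight)
        while stack:
            x, mu, w = stack.pop()
            while True:
                x += mu * rng.exponential(1.0 / sigma_t)
                if x <= 0.0 or x >= length:
                    break                      # leaked out of the slab
                cell = int(x)
                tally[cell] += w
                w *= c                         # implicit capture (no kill)
                mu = 1.0 if rng.random() < 0.5 else -1.0   # rod-model scatter
                ratio = w / centres[cell]
                if ratio > 2.0:                # split heavy particles
                    n_split = min(int(ratio), 5)
                    w /= n_split
                    for _ in range(n_split - 1):
                        stack.append((x, mu, w))
                elif ratio < 0.5:              # roulette light particles
                    if rng.random() < 0.5:
                        w *= 2.0
                    else:
                        break
    return tally / n_hist
```

Because the window centres track the (approximate) flux, particle weights stay near the local flux level and the relative variance of the deep-cell tallies is far flatter than with analog transport.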

  9. Mass attenuation coefficient calculations of different detector crystals by means of FLUKA Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Ebru Ermis, Elif; Celiktas, Cuneyt

    2015-07-01

Calculations of gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of the FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen in the calculations. Calculated coefficients were also compared with the National Institute of Standards and Technology (NIST) values. Results obtained with this method were in close agreement with the NIST values. It was concluded from the study that the FLUKA MC method can be an alternative way to calculate gamma-ray mass attenuation coefficients of detector materials.

  10. Lattice-switching Monte Carlo method for crystals of flexible molecules

    NASA Astrophysics Data System (ADS)

    Bridgwater, Sally; Quigley, David

    2014-12-01

We discuss implementation of the lattice-switching Monte Carlo method (LSMC) as a binary sampling between two synchronized Markov chains exploring separated minima in the potential energy landscape. When expressed in this fashion, the generalization to more complex crystals is straightforward. We extend the LSMC method to a flexible model of linear alkanes, incorporating bond length and angle constraints. Within this model, we accurately locate a transition between two polymorphs of n-butane with increasing density, and suggest this as a benchmark problem for other free-energy methods.

  11. A combined approach to the estimation of statistical error of the direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Plotnikov, M. Yu.; Shkarupa, E. V.

    2015-11-01

    Presently, the direct simulation Monte Carlo (DSMC) method is widely used for solving rarefied gas dynamics problems. As applied to steady-state problems, a feature of this method is the use of dependent sample values of random variables for the calculation of macroparameters of gas flows. A new combined approach to estimating the statistical error of the method is proposed that requires practically no additional computations and is applicable for any degree of probabilistic dependence of the sample values. Features of the proposed approach are analyzed theoretically and numerically. The approach is tested using the classical Fourier problem and the problem of supersonic flow of rarefied gas through a permeable obstacle.
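To illustrate why dependent samples matter, a standard integrated-autocorrelation-time error estimate (a textbook approach, not the combined estimator proposed in the paper) can be compared against the naive independent-sample formula:

```python
import random

def autocorr_error(samples, max_lag=100):
    """Standard error of the mean for correlated samples, via the
    integrated autocorrelation time truncated at the first
    non-positive autocovariance."""
    n = len(samples)
    mean = sum(samples) / n
    c0 = sum((x - mean) ** 2 for x in samples) / n
    tau = 1.0
    for k in range(1, min(max_lag, n // 2)):
        ck = sum((samples[i] - mean) * (samples[i + k] - mean)
                 for i in range(n - k)) / (n - k)
        if ck <= 0.0:
            break
        tau += 2.0 * ck / c0
    return (c0 * tau / n) ** 0.5

# AR(1) chain with strong positive correlation: the naive formula,
# which assumes independent samples, underestimates the uncertainty.
rng = random.Random(0)
x, chain = 0.0, []
for _ in range(5000):
    x = 0.9 * x + rng.gauss(0.0, 1.0)
    chain.append(x)

m = sum(chain) / len(chain)
c0 = sum((v - m) ** 2 for v in chain) / len(chain)
naive = (c0 / len(chain)) ** 0.5
corrected = autocorr_error(chain)
```

For this chain the corrected error is several times the naive one, which is exactly the discrepancy a DSMC error estimator has to account for.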

  12. A general method for spatially coarse-graining Metropolis Monte Carlo simulations onto a lattice

    NASA Astrophysics Data System (ADS)

    Liu, Xiao; Seider, Warren D.; Sinno, Talid

    2013-03-01

    A recently introduced method for coarse-graining standard continuous Metropolis Monte Carlo simulations of atomic or molecular fluids onto a rigid lattice of variable scale [X. Liu, W. D. Seider, and T. Sinno, Phys. Rev. E 86, 026708 (2012)], 10.1103/PhysRevE.86.026708 is further analyzed and extended. The coarse-grained Metropolis Monte Carlo technique is demonstrated to be highly consistent with the underlying full-resolution problem using a series of detailed comparisons, including vapor-liquid equilibrium phase envelopes and spatial density distributions for the Lennard-Jones argon and simple point charge water models. In addition, the principal computational bottleneck associated with computing a coarse-grained interaction function for evolving particle positions on the discretized domain is addressed by the introduction of new closure approximations. In particular, it is shown that the coarse-grained potential, which is generally a function of temperature and coarse-graining level, can be computed at multiple temperatures and scales using a single set of free energy calculations. The computational performance of the method relative to standard Monte Carlo simulation is also discussed.

  13. Visual improvement for bad handwriting based on Monte-Carlo method

    NASA Astrophysics Data System (ADS)

    Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua

    2014-03-01

    A visual improvement algorithm based on Monte Carlo simulation is proposed in this paper to enhance the visual quality of bad handwriting. The improvement process uses a well designed typeface to optimize the bad handwriting image. In this process, a series of linear operators for image transformation is defined to transform the typeface image so that it approaches the handwriting image, and the specific parameters of the linear operators are estimated by the Monte Carlo method. Visual improvement experiments illustrate that the proposed algorithm can effectively enhance the visual quality of a handwriting image while maintaining the original handwriting features, such as tilt, stroke order and drawing direction. The proposed visual improvement algorithm has great potential for application on tablet computers and the mobile Internet to improve the user experience of handwriting.

  14. Analysis of single Monte Carlo methods for prediction of reflectance from turbid media.

    PubMed

    Martinelli, Michele; Gardner, Adam; Cuccia, David; Hayakawa, Carole; Spanier, Jerome; Venugopalan, Vasan

    2011-09-26

    Starting from the radiative transport equation we derive the scaling relationships that enable a single Monte Carlo (MC) simulation to predict the spatially- and temporally-resolved reflectance from homogeneous semi-infinite media with arbitrary scattering and absorption coefficients. This derivation shows that a rigorous application of this single Monte Carlo (sMC) approach requires the rescaling to be done individually for each photon biography. We examine the accuracy of the sMC method when processing simulations on an individual photon basis and also demonstrate the use of adaptive binning and interpolation using non-uniform rational B-splines (NURBS) to achieve order of magnitude reductions in the relative error as compared to the use of uniform binning and linear interpolation. This improved implementation for sMC simulation serves as a fast and accurate solver to address both forward and inverse problems and is available for use at http://www.virtualphotonics.org/. PMID:21996904
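The per-biography rescaling idea can be sketched as follows: run the baseline simulation once in dimensionless units, record each photon's number of collisions and dimensionless path length, then reweight for arbitrary optical properties. This is a schematic of the general sMC scaling relations, not the authors' implementation:

```python
def rescale_photon(n_collisions, dimensionless_path, mu_s, mu_a):
    """Reuse one baseline photon biography for arbitrary optical properties.

    Lengths recorded in units of mean free paths are scaled by 1/mu_t,
    and the exit weight is the single-scattering albedo raised to the
    number of collisions along the biography.
    """
    mu_t = mu_s + mu_a
    weight = (mu_s / mu_t) ** n_collisions   # survival weight after N scatters
    physical_path = dimensionless_path / mu_t
    return weight, physical_path

# One recorded biography reused for two different absorption levels.
w_low_abs, _ = rescale_photon(5, 12.0, mu_s=10.0, mu_a=0.1)
w_high_abs, _ = rescale_photon(5, 12.0, mu_s=10.0, mu_a=1.0)
```

Because the rescaling acts on each biography separately, the same stored simulation serves any (μs, μa) pair, which is what makes sMC attractive as a fast forward solver.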

  15. A Monte Carlo method for radar backscatter from a half-space random medium

    NASA Astrophysics Data System (ADS)

    Chuah, H. T.; Tan, H. S.

    1989-01-01

    The problem of electromagnetic multiple scattering in a random medium is treated by a Monte Carlo method, in which an incident beam of photons is progressively scattered by scattering centers in the medium. The theory characterizes each scattering by functions describing the probability of the photon being scattered or absorbed, and the probability of its being scattered into certain directions. This process is tracked until the photon is finally absorbed or backscattered into the receiver. Variance reduction techniques are introduced to reduce the computation time required for acceptable ensemble averages of the backscattering cross sections. Ellipsoidal dielectric scatterers are used to model circular disk-shaped leaves, elliptical disk-shaped leaves, and needle-shaped leaves, which are randomly distributed in a half-space medium. The Monte Carlo simulations give good comparison with experimental data of backscattering cross sections from fields of wheat, corn, and milo.
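A toy version of such photon tracking, with implicit capture and Russian roulette as the variance reduction, might look like this (the 1-D slab model and all parameters are illustrative, not the half-space vegetation model of the paper):

```python
import math, random

def backscatter_fraction(n_photons, albedo=0.8, depth=5.0, seed=42):
    """Toy 1-D photon transport in a slab of given optical depth.

    Uses implicit capture (weight *= albedo at each collision) with
    Russian roulette below a weight cutoff. Returns the weight fraction
    escaping back through the entry surface.
    """
    rng = random.Random(seed)
    escaped = 0.0
    for _ in range(n_photons):
        tau, mu, w = 0.0, 1.0, 1.0     # optical depth, direction cosine, weight
        while True:
            tau += mu * -math.log(rng.random())   # sample a free path
            if tau <= 0.0:             # backscattered out of the entry surface
                escaped += w
                break
            if tau >= depth:           # transmitted out of the far side
                break
            w *= albedo                # implicit capture at the collision
            mu = rng.uniform(-1.0, 1.0)   # isotropic rescattering
            if w < 0.01:               # Russian roulette on low weights
                if rng.random() < 0.5:
                    w *= 2.0
                else:
                    break
    return escaped / n_photons

frac = backscatter_fraction(20000)
```

Implicit capture keeps every photon contributing to the backscatter tally, and roulette removes the low-weight histories that would otherwise dominate the computation time.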

  16. Generalized Moment Method for Gap Estimation and Quantum Monte Carlo Level Spectroscopy.

    PubMed

    Suwa, Hidemaro; Todo, Synge

    2015-08-21

    We formulate a convergent sequence for the energy gap estimation in the worldline quantum Monte Carlo method. The ambiguity left in the conventional gap calculation for quantum systems is eliminated. Our estimation will be unbiased in the low-temperature limit, and also the error bar is reliably estimated. The level spectroscopy from quantum Monte Carlo data is developed as an application of the unbiased gap estimation. From the spectral analysis, we precisely determine the Kosterlitz-Thouless quantum phase-transition point of the spin-Peierls model. It is established that the quantum phonon with a finite frequency is essential to the critical theory governed by the antiadiabatic limit, i.e., the k=1 SU(2) Wess-Zumino-Witten model. PMID:26340171

  17. Generalized Moment Method for Gap Estimation and Quantum Monte Carlo Level Spectroscopy

    NASA Astrophysics Data System (ADS)

    Suwa, Hidemaro; Todo, Synge

    2015-08-01

    We formulate a convergent sequence for the energy gap estimation in the worldline quantum Monte Carlo method. The ambiguity left in the conventional gap calculation for quantum systems is eliminated. Our estimation will be unbiased in the low-temperature limit, and also the error bar is reliably estimated. The level spectroscopy from quantum Monte Carlo data is developed as an application of the unbiased gap estimation. From the spectral analysis, we precisely determine the Kosterlitz-Thouless quantum phase-transition point of the spin-Peierls model. It is established that the quantum phonon with a finite frequency is essential to the critical theory governed by the antiadiabatic limit, i.e., the k=1 SU(2) Wess-Zumino-Witten model.

  18. Extrapolation method in the Monte Carlo Shell Model and its applications

    SciTech Connect

    Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio

    2011-05-06

    We demonstrate how the energy-variance extrapolation method works using the sequence of approximated wave functions obtained by the Monte Carlo Shell Model (MCSM), taking ⁵⁶Ni in the pf-shell as an example. The extrapolation method is shown to work well even in cases where the MCSM shows slow convergence, such as ⁷²Ge in the f5pg9-shell. The structure of ⁷²Se is also studied, including a discussion of the shape-coexistence phenomenon.

  19. Temperature-extrapolation method for Implicit Monte Carlo - Radiation hydrodynamics calculations

    SciTech Connect

    McClarren, R. G.; Urbatsch, T. J.

    2013-07-01

    We present a method for implementing temperature extrapolation in Implicit Monte Carlo solutions to radiation hydrodynamics problems. The method is based on a BDF-2 type integration to estimate the change in material temperature over a time step. We present results for radiation-only problems in an infinite medium and for a 2-D Cartesian hohlraum problem. Additionally, radiation hydrodynamics simulations are presented for an RZ hohlraum problem and a related 3-D problem. Our results indicate that improvements in noise and general behavior are possible. We present considerations for future investigations and implementations.

  20. Markov chain Monte Carlo methods for statistical analysis of RF photonic devices.

    PubMed

    Piels, Molly; Zibar, Darko

    2016-02-01

    The microwave reflection coefficient is commonly used to characterize the impedance of high-speed optoelectronic devices. Error and uncertainty in equivalent circuit parameters estimated from these data are systematically evaluated. The commonly used nonlinear least-squares method for estimating uncertainty is shown to give unsatisfactory and incorrect results due to the nonlinear relationship between the circuit parameters and the measured data. Markov chain Monte Carlo methods are shown to provide superior results, both for individual devices and for assessing within-die variation. PMID:26906783
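A minimal random-walk Metropolis sampler shows why MCMC copes with the nonlinear parameter-data relationship: it needs only pointwise evaluations of the log-posterior, with no linearization. The exponential toy model below stands in for the equivalent-circuit model; all names and numbers are invented:

```python
import math, random

def metropolis(logpost, theta0, step, n_iter, seed=0):
    """Random-walk Metropolis: returns the chain of visited states."""
    rng = random.Random(seed)
    theta, lp = list(theta0), logpost(theta0)
    chain = []
    for _ in range(n_iter):
        prop = [t + rng.gauss(0.0, s) for t, s in zip(theta, step)]
        lp_prop = logpost(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain.append(list(theta))
    return chain

# Toy nonlinear model y = a * exp(-b * x) with Gaussian noise; the
# posterior over (a, b) is non-Gaussian, as in the circuit-fitting case.
rng = random.Random(1)
xs = [0.1 * i for i in range(50)]
ys = [2.0 * math.exp(-0.5 * x) + rng.gauss(0.0, 0.05) for x in xs]

def logpost(theta):
    a, b = theta
    sse = sum((y - a * math.exp(-b * x)) ** 2 for x, y in zip(xs, ys))
    return -sse / (2.0 * 0.05 ** 2)   # flat priors, Gaussian likelihood

chain = metropolis(logpost, [1.0, 1.0], [0.02, 0.02], 5000)
post = chain[2000:]                   # discard burn-in
a_hat = sum(t[0] for t in post) / len(post)
b_hat = sum(t[1] for t in post) / len(post)
```

The post-burn-in samples characterize the full joint posterior, so parameter uncertainty and within-die variation come from the same chain rather than from a linearized covariance.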

  1. A high order method for orbital conjunctions analysis: Monte Carlo collision probability computation

    NASA Astrophysics Data System (ADS)

    Morselli, Alessandro; Armellin, Roberto; Di Lizia, Pierluigi; Bernelli Zazzera, Franco

    2015-01-01

    Three methods for the computation of the probability of collision between two space objects are presented. These methods are based on the high order Taylor expansion of the time of closest approach (TCA) and distance of closest approach (DCA) of the two orbiting objects with respect to their initial conditions. The identification of close approaches is first addressed using the nominal objects states. When a close approach is identified, the dependence of the TCA and DCA on the uncertainties in the initial states is efficiently computed with differential algebra (DA) techniques. In the first method the collision probability is estimated via fast DA-based Monte Carlo simulation, in which, for each pair of virtual objects, the DCA is obtained via the fast evaluation of its Taylor expansion. The second and the third methods are the DA versions of the Line Sampling and Subset Simulation algorithms, respectively. These are introduced to further improve the efficiency and accuracy of Monte Carlo collision probability computation, in particular for cases of very low collision probabilities. The performances of the methods are assessed on orbital conjunctions occurring in different orbital regimes and dynamical models. The probabilities obtained and the associated computational times are compared against the standard (i.e., not DA-based) versions of the algorithms and against analytical methods. The dependence of the collision probability on the initial orbital state covariance is investigated as well.
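Conceptually, the DA-based Monte Carlo step reduces to drawing initial-state deviations from the uncertainty distribution and evaluating a cheap surrogate of the DCA instead of propagating each virtual object; the quadratic surrogate and covariance below are made-up stand-ins:

```python
import random

def collision_probability(dca_fn, sigma, n_samples, threshold, seed=0):
    """Plain Monte Carlo estimate of P(DCA < threshold), sampling
    zero-mean Gaussian deviations of the initial state. `dca_fn`
    stands in for the Taylor-expanded map from initial-state
    deviation to distance of closest approach."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        dx = [rng.gauss(0.0, s) for s in sigma]
        if dca_fn(dx) < threshold:
            hits += 1
    return hits / n_samples

# Hypothetical quadratic surrogate about a nominal miss distance
# (units and coefficients are invented for illustration).
dca = lambda d: abs(1.0 + d[0] + 0.5 * d[1] ** 2)
p = collision_probability(dca, [0.5, 0.5], 20000, 0.2)
```

Replacing the surrogate evaluation with a full orbit propagation is what makes the non-DA version expensive; Line Sampling and Subset Simulation then address the many-samples problem for very low probabilities.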

  2. Hierarchical Acceleration of Multilevel Monte Carlo Methods for Computationally Expensive Simulations in Reservoir Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Lu, D.; Webster, C.

    2014-12-01

    The rational management of oil and gas reservoirs requires an understanding of their response to existing and planned schemes of exploitation and operation. Such understanding requires analyzing and quantifying the influence of the subsurface uncertainties on predictions of oil and gas production. Because the subsurface properties are typically heterogeneous, giving rise to a large number of model parameters, the dimension-independent Monte Carlo (MC) method is usually used for uncertainty quantification (UQ). Recently, multilevel Monte Carlo (MLMC) methods were proposed as a variance reduction technique to improve the computational efficiency of MC methods in UQ. In this effort, we propose a new acceleration approach for the MLMC method that further reduces the total computational cost by exploiting model hierarchies. Specifically, for each model simulation on a newly added level of MLMC, we take advantage of an approximation of the model outputs constructed from simulations on previous levels to provide better initial states for the new simulations, which improves efficiency by, e.g., reducing the number of iterations in linear system solving or the number of needed time steps. This is achieved by using mesh-free interpolation methods, such as Shepard interpolation and radial basis approximation. Our approach is applied to a highly heterogeneous reservoir model from the tenth SPE project. The results indicate that the accelerated MLMC can achieve the same accuracy as standard MLMC with a significantly reduced cost.
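The telescoping MLMC estimator underlying such schemes can be demonstrated on a toy problem where each level is a finer quantization of the input (the reservoir simulator is replaced by a trivial payoff; this sketches the estimator only, not the proposed acceleration):

```python
import random

def mlmc_estimate(levels, n_per_level, seed=0):
    """Toy multilevel Monte Carlo estimator.

    P_l quantizes the input to a grid of spacing 2**-l before applying
    the payoff, so P_l -> P as l grows. The estimator telescopes,
    E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with each correction
    computed from coupled samples (the same x on both levels).
    """
    rng = random.Random(seed)

    def payoff(x, l):
        h = 2.0 ** (-l)
        xq = round(x / h) * h      # level-l "discretization" of the input
        return xq * xq

    total = 0.0
    for l in range(levels + 1):
        n = n_per_level[l]
        s = 0.0
        for _ in range(n):
            x = rng.gauss(0.0, 1.0)
            s += payoff(x, l) - (payoff(x, l - 1) if l > 0 else 0.0)
        total += s / n
    return total

# E[X^2] = 1 for X ~ N(0, 1); many cheap samples on coarse levels,
# few expensive ones on fine levels, as in standard MLMC practice.
est = mlmc_estimate(4, [20000, 10000, 5000, 2000, 1000])
```

The corrections shrink with level, so most samples can be taken where the model is cheap; the proposed acceleration additionally reuses coarse-level results to warm-start fine-level solves.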

  3. Suppressing nonphysical overheating with a modified implicit Monte Carlo method for time-dependent radiative transfer

    SciTech Connect

    Mcclarren, Ryan G; Urbatsch, Todd J

    2008-01-01

    In this note we develop a robust implicit Monte Carlo (IMC) algorithm based on more accurately updating the linearized equilibrium radiation energy density. The method does not introduce oscillations in the solution and has the same limit as Δt → ∞ as the standard Fleck and Cummings IMC method. Moreover, the approach we introduce can be trivially added to current implementations of IMC by changing the definition of the Fleck factor. Using this new method we develop an adaptive scheme that uses either standard IMC or the modified method, basing the adaptation on a zero-dimensional problem solved in each cell. Numerical results demonstrate that the new method alleviates the nonphysical overheating that occurs in standard IMC when the time step is large and significantly diminishes the statistical noise in the solution.

  4. Dosimetric characterizations of GZP6 60Co high dose rate brachytherapy sources: application of superimposition method

    PubMed Central

    Bahreyni Toossi, Mohammad Taghi; Ghorbani, Mahdi; Mowlavi, Ali Asghar; Meigooni, Ali Soleimani

    2012-01-01

    Background: Dosimetric characteristics of a high dose rate (HDR) GZP6 ⁶⁰Co brachytherapy source have been evaluated following the American Association of Physicists in Medicine Task Group 43U1 (AAPM TG-43U1) recommendations for clinical applications. Materials and methods: The MCNP-4C and MCNPX Monte Carlo codes were utilized to calculate the dose rate constant, two-dimensional (2D) dose distribution, radial dose function and 2D anisotropy function of the source. These parameters are compared with the available data for the Ralstron ⁶⁰Co and microSelectron ¹⁹²Ir sources. In addition, a superimposition method was developed to extend the results obtained for GZP6 source No. 3 to the other GZP6 sources. Results: The simulated dose rate constant for the GZP6 source was 1.104 ± 0.03 cGy·h⁻¹·U⁻¹. The radial dose function and 2D anisotropy function of this source are presented here in graphical and tabulated form. The results of these investigations show that the dosimetric parameters of the GZP6 source are comparable to those of the Ralstron source. While the dose rate constants of the two ⁶⁰Co sources are similar to that of the microSelectron ¹⁹²Ir source, there are differences between the radial dose functions and anisotropy functions. The radial dose function of the ¹⁹²Ir source is less steep than that of both ⁶⁰Co source models. In addition, the ⁶⁰Co sources show a more isotropic dose distribution than the ¹⁹²Ir source. Conclusions: The superimposition method is applicable for producing dose distributions for other source arrangements from the dose distribution of a single source. The calculated dosimetric quantities of this new source can be introduced as input data to the GZP6 treatment planning system (TPS) and used to validate the performance of the TPS. PMID:23077455

  5. Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian Inverse Problems

    NASA Astrophysics Data System (ADS)

    Lan, Shiwei; Bui-Thanh, Tan; Christie, Mike; Girolami, Mark

    2016-03-01

    The Bayesian approach to Inverse Problems relies predominantly on Markov Chain Monte Carlo methods for posterior inference. The typical nonlinear concentration of posterior measure observed in many such Inverse Problems presents severe challenges to existing simulation based inference methods. Motivated by these challenges, the exploitation of local geometric information in the form of covariant gradients, metric tensors, Levi-Civita connections, and local geodesic flows has been introduced to more effectively locally explore the configuration space of the posterior measure. However, obtaining such geometric quantities usually requires extensive computational effort, which, despite their effectiveness, limits the applicability of these geometrically-based Monte Carlo methods. In this paper we explore one way to address this issue through the construction of an emulator of the model from which all geometric objects can be obtained in a much more computationally feasible manner. The main concept is to approximate the geometric quantities using a Gaussian Process emulator which is conditioned on a carefully chosen design set of configuration points, which also determines the quality of the emulator. To this end we propose the use of statistical experiment design methods to refine a potentially arbitrarily initialized design online without destroying the convergence of the resulting Markov chain to the desired invariant measure. The practical examples considered in this paper demonstrate the significant improvement possible in terms of computational load, suggesting this is a promising avenue of further development.

  6. Calculating Relativistic Transition Matrix Elements for Hydrogenic Atoms Using Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Alexander, Steven; Coldwell, R. L.

    2015-03-01

    The nonrelativistic transition matrix elements for hydrogen atoms can be computed exactly and these expressions are given in a number of classic textbooks. The relativistic counterparts of these equations can also be computed exactly but these expressions have been described in only a few places in the literature. In part, this is because the relativistic equations lack the elegant simplicity of the nonrelativistic equations. In this poster I will describe how variational Monte Carlo methods can be used to calculate the energy and properties of relativistic hydrogen atoms and how the wavefunctions for these systems can be used to calculate transition matrix elements.

  7. A numerical study of rays in random media. [Monte Carlo method simulation

    NASA Technical Reports Server (NTRS)

    Youakim, M. Y.; Liu, C. H.; Yeh, K. C.

    1973-01-01

    Statistics of electromagnetic rays in a random medium are studied numerically by the Monte Carlo method. Two dimensional random surfaces with prescribed correlation functions are used to simulate the random media. Rays are then traced in these sample media. Statistics of the ray properties such as the ray positions and directions are computed. Histograms showing the distributions of the ray positions and directions at different points along the ray path as well as at given points in space are given. The numerical experiment is repeated for different cases corresponding to weakly and strongly random media with isotropic and anisotropic irregularities. Results are compared with those derived from theoretical investigations whenever possible.

  8. Stability of Staggered Flux State in d-p Model Studied Using Variational Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Tamura, Shun; Yokoyama, Hisatoshi

    The stability of a staggered flux (SF) or d-density wave state is studied in a d-p model using a variational Monte Carlo (VMC) method. This state possesses a local circular current and possibly causes the pseudogap phase in high-Tc cuprate superconductors. We introduce into the trial function a configuration dependent phase factor, which was recently shown to be indispensable to stabilize current-carrying states. We pay attention to the energy gain as a function of adjacent O-O hopping

  9. Bayesian and Markov chain Monte Carlo methods for identifying nonlinear systems in the presence of uncertainty

    PubMed Central

    Green, P. L.; Worden, K.

    2015-01-01

    In this paper, the authors outline the general principles behind an approach to Bayesian system identification and highlight the benefits of adopting a Bayesian framework when attempting to identify models of nonlinear dynamical systems in the presence of uncertainty. It is then described how, through a summary of some key algorithms, many of the potential difficulties associated with a Bayesian approach can be overcome through the use of Markov chain Monte Carlo (MCMC) methods. The paper concludes with a case study, where an MCMC algorithm is used to facilitate the Bayesian system identification of a nonlinear dynamical system from experimentally observed acceleration time histories. PMID:26303916

  10. Refinement of overlapping local/global iteration method based on Monte Carlo/p-CMFD calculations

    SciTech Connect

    Jo, Y.; Yun, S.; Cho, N. Z.

    2013-07-01

    In this paper, the overlapping local/global (OLG) iteration method based on Monte Carlo/p-CMFD calculations is refined in two aspects. One is the consistent use of estimators to generate homogenized scattering cross sections. Another is that the incident or exiting angular interval is divided into multi-angular bins to modulate albedo boundary conditions for local problems. Numerical tests show that, compared to the one angle bin case in a previous study, the four angle bin case shows significantly improved results.

  11. Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy

    NASA Astrophysics Data System (ADS)

    Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui

    2014-06-01

    The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation time. In MC dose calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to increase the calculation speed of the MC method for electron-photon transport with high precision, and ultimately to reduce the accurate radiotherapy dose calculation time on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC) developed by the FDS Team, a fast MC method for electron-photon coupled transport is presented with focus on two aspects: first, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed is increased with only a slight reduction in calculation accuracy; second, a variety of MC acceleration methods are applied, for example, reusing information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the convergence rate of the MC method. The fast MC method was tested on a number of simple physical models and clinical cases, including nasopharyngeal carcinoma, peripheral lung tumor and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System (ARTS) as an MC dose verification module.

  12. Using self-consistent fields to bias Monte Carlo methods with applications to designing and sampling protein sequences

    NASA Astrophysics Data System (ADS)

    Zou, Jinming; Saven, Jeffery G.

    2003-02-01

    For complex multidimensional systems, Monte Carlo methods are useful for sampling probable regions of a configuration space and, in the context of annealing, for determining "low energy" or "high scoring" configurations. Such methods have been used in protein design as a means of identifying amino acid sequences that are energetically compatible with a particular backbone structure. As with many other applications of Monte Carlo methods, such searches can be inefficient if trial configurations (protein sequences) in the Markov chain are chosen randomly. Here a mean-field biased Monte Carlo method (MFBMC) is presented and applied to designing and sampling protein sequences. The MFBMC method uses predetermined sequence identity probabilities wi(α) to bias the sequence selection. The wi(α) are calculated using a self-consistent, mean-field theory that can estimate the number and composition of sequences having predetermined values of energetically related foldability criteria. The MFBMC method is applied to both a simple protein model, the 27-mer lattice model, and an all-atom protein model. Compared to conventional Monte Carlo (MC) and configurational bias Monte Carlo (BMC), the MFBMC method converges faster to low energy sequences and samples such sequences more efficiently. The MFBMC method also tolerates faster cooling rates than the MC and BMC methods. The MFBMC method can be applied not only to protein sequence search, but also to a wide variety of polymeric and condensed phase systems.

  13. A method for reducing the largest relative errors in Monte Carlo iterated-fission-source calculations

    SciTech Connect

    Hunter, J. L.; Sutton, T. M.

    2013-07-01

    In Monte Carlo iterated-fission-source calculations relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially-uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective.

  14. Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin

    2015-12-01

    The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used, and a linear approximation of the fission source distribution during a macro-time step is introduced to provide the delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, results on a continuous-energy problem are presented.

  15. Application of the Monte Carlo Method for the Estimation of Uncertainty in Radiofrequency Field Spot Measurements

    NASA Astrophysics Data System (ADS)

    Iakovidis, S.; Apostolidis, C.; Samaras, T.

    2015-04-01

    The objective of the present work is the application of the Monte Carlo method of GUM Supplement 1 (GUM S1) to evaluating uncertainty in electromagnetic field measurements, and the comparison of the results with the ones obtained using the 'standard' GUM method. In particular, the two methods are applied to evaluate the field measurement uncertainty using a frequency selective radiation meter and the Total Exposure Quotient (TEQ) uncertainty. Comparative results are presented to highlight cases where GUM S1 results deviate significantly from the ones obtained using the GUM, such as the presence of a nonlinear mathematical model connecting the inputs with the output quantity (the case of the TEQ model) or the presence of a dominant non-normal distribution of an input quantity (the case of U-shaped mismatch uncertainty). The deviation between the results obtained from the two methods can even lead to different decisions regarding conformance with the exposure reference levels.
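The GUM S1 procedure itself is easy to sketch: draw the inputs from their assigned distributions, evaluate the model for each draw, and take the mean, standard deviation and quantile-based coverage interval of the outputs. The model and distributions below are invented illustrations, not the paper's measurement setup:

```python
import math, random, statistics

def gum_s1_propagate(model, draws, n_trials=100000, seed=0, coverage=0.95):
    """GUM Supplement 1 style propagation of distributions: sample the
    input quantities, push each draw through the (possibly nonlinear)
    model, and summarize the output by its mean, standard uncertainty
    and a probabilistically symmetric coverage interval."""
    rng = random.Random(seed)
    ys = sorted(model(*[d(rng) for d in draws]) for _ in range(n_trials))
    mean = sum(ys) / n_trials
    u = statistics.pstdev(ys)
    lo = ys[int((1.0 - coverage) / 2.0 * n_trials)]
    hi = ys[int((1.0 + coverage) / 2.0 * n_trials) - 1]
    return mean, u, (lo, hi)

# Invented toy: E = P / d**2, with a normal input and a U-shaped input,
# the combination highlighted as problematic for the linearized GUM.
draws = [
    lambda r: r.gauss(10.0, 0.5),                                # P: normal
    lambda r: 2.0 + 0.1 * math.sin(2.0 * math.pi * r.random()),  # d: U-shaped
]
mean, u, (lo, hi) = gum_s1_propagate(lambda p, d: p / d ** 2, draws)
```

Because the coverage interval comes from the output quantiles rather than a Gaussian assumption, it remains meaningful for nonlinear models and non-normal inputs where the linearized GUM can mislead.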

  16. Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method

    NASA Astrophysics Data System (ADS)

    Sudheer, C. D.; Krishnan, S.; Srinivasan, A.; Kent, P. R. C.

    2013-02-01

    Diffusion Monte Carlo is a highly accurate Quantum Monte Carlo method for electronic structure calculations of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency on parallel machines. This step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm: a simple renumbering of the MPI ranks based on proximity and a space filling curve significantly improves the MPI Allgather performance. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near identical computational tasks that require load balancing.
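The alias method at the heart of the algorithm is Walker's O(1)-per-draw sampling scheme; a generic construction (not the authors' MPI implementation) looks like this:

```python
import random

def build_alias_table(probs):
    """Walker/Vose alias table: O(n) construction, O(1) per sample."""
    n = len(probs)
    scaled = [q * n for q in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, q in enumerate(scaled) if q < 1.0]
    large = [i for i, q in enumerate(scaled) if q >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]          # donate excess mass to the small slot
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                   # leftovers are exactly full
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias, rng):
    """Draw one index: pick a slot uniformly, then keep it or take its alias."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]

# Each draw touches a single slot -- a sequential analogue of the
# "at most one message per process" property described above.
p = [0.1, 0.2, 0.3, 0.4]
prob, alias = build_alias_table(p)
rng = random.Random(0)
counts = [0] * len(p)
for _ in range(100000):
    counts[alias_sample(prob, alias, rng)] += 1
```

In the load-balancing setting the "probabilities" are the per-process surpluses and deficits, and the pairing of one small entry with one large entry per table slot is what bounds the message count.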

  17. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    NASA Astrophysics Data System (ADS)

    Noh, S. J.; Tachikawa, Y.; Shiiba, M.; Kim, S.

    2011-10-01

    Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on the Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of its confidence intervals depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varying high flows, due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.
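A minimal SIR (bootstrap) particle filter, the baseline against which the lagged regularized variant is compared, can be sketched for a 1-D random-walk state observed in Gaussian noise (the hydrologic model is replaced by this toy state-space model):

```python
import bisect, math, random

def sir_filter(observations, n_particles=500, proc_sd=0.5, obs_sd=1.0, seed=0):
    """Minimal SIR (bootstrap) particle filter for a 1-D random-walk
    state observed in Gaussian noise: predict, weight, resample."""
    rng = random.Random(seed)
    parts = [0.0] * n_particles
    estimates = []
    for y in observations:
        # predict: propagate each particle through the process model
        parts = [x + rng.gauss(0.0, proc_sd) for x in parts]
        # update: importance weights from the observation likelihood
        ws = [math.exp(-0.5 * ((y - x) / obs_sd) ** 2) for x in parts]
        total = sum(ws)
        ws = [w / total for w in ws]
        estimates.append(sum(w * x for w, x in zip(ws, parts)))
        # resample: multinomial draw from the weighted particles
        cum, acc = [], 0.0
        for w in ws:
            acc += w
            cum.append(acc)
        parts = [parts[min(bisect.bisect_left(cum, rng.random()),
                           n_particles - 1)] for _ in range(n_particles)]
    return estimates

# Synthetic truth and noisy observations for a hindcast-style check.
rng = random.Random(7)
truth, x = [], 0.0
for _ in range(50):
    x += rng.gauss(0.0, 0.5)
    truth.append(x)
obs = [t + rng.gauss(0.0, 1.0) for t in truth]
est = sir_filter(obs)
```

Repeated resampling of this kind is what drives the particle impoverishment that the paper's regularization step counteracts.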

  18. Efficient implementation of a Monte Carlo method for uncertainty evaluation in dynamic measurements

    NASA Astrophysics Data System (ADS)

    Eichstädt, S.; Link, A.; Harris, P.; Elster, C.

    2012-06-01

    Measurement of quantities having time-dependent values such as force, acceleration or pressure is a topic of growing importance in metrology. The application of the Guide to the Expression of Uncertainty in Measurement (GUM) and its Supplements to the evaluation of uncertainty for such quantities is challenging. We address the efficient implementation of the Monte Carlo method described in GUM Supplements 1 and 2 for this task. The starting point is a time-domain observation equation. The steps of deriving a corresponding measurement model, the assignment of probability distributions to the input quantities in the model, and the propagation of the distributions through the model are all considered. A direct implementation of a Monte Carlo method can be intractable on many computers since the storage requirement of the method can be large compared with the available computer memory. Two memory-efficient alternatives to the direct implementation are proposed. One approach is based on applying updating formulae for calculating means, variances and point-wise histograms. The second approach is based on evaluating the measurement model sequentially in time. A simulated example is used to compare the performance of the direct and alternative procedures.
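The first memory-efficient alternative, updating formulae for means and variances, is commonly realized with Welford's one-pass recurrence. A minimal sketch (illustrative, not the authors' GUM Supplement 1/2 implementation) is:

```python
class RunningStats:
    """Welford's updating formulae: mean and variance without storing draws."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def push(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n              # updated running mean
        self.m2 += delta * (x - self.mean)       # sum of squared deviations

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0
```

One such accumulator per time sample lets a Monte Carlo uncertainty evaluation discard each trial after processing it, so memory no longer grows with the number of Monte Carlo draws.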

  19. Efficient combination of Wang-Landau and transition matrix Monte Carlo methods for protein simulations.

    PubMed

    Ghulghazaryan, Ruben G; Hayryan, Shura; Hu, Chin-Kun

    2007-02-01

An efficient combination of the Wang-Landau and transition matrix Monte Carlo methods for protein and peptide simulations is described. At the initial stage of simulation the algorithm behaves like the Wang-Landau algorithm, allowing the entire interval of energies to be sampled, while at later stages it behaves like the transition matrix Monte Carlo method and has significantly lower statistical errors. This combination achieves fast convergence to the correct density of states. We propose that the violation of TTT identities may serve as a qualitative criterion for checking the convergence of the density of states. The simulation process can be parallelized by cutting the entire simulation interval into subintervals; the violation of ergodicity in this case is discussed. We test the algorithm on a set of peptides of different lengths and observe good statistical convergence properties for the density of states. We believe that the method is of general nature and can be used for simulations of other systems with either discrete or continuous energy spectra. PMID:17195159
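The Wang-Landau stage of the combined algorithm can be illustrated on a toy model whose density of states is known exactly. The bit-string system below (E = number of set bits, so g(E) = C(n, E)) is an assumption chosen for testability, not the peptide model of the paper:

```python
import math
import random

def wang_landau(n=8, sweep=10000, flat=0.8, ln_f_final=1e-4, seed=0):
    """Wang-Landau sketch: estimate ln g(E) for E = popcount of an n-bit string."""
    rng = random.Random(seed)
    ln_g = [0.0] * (n + 1)
    state = [0] * n
    e = 0
    ln_f = 1.0
    while ln_f > ln_f_final:
        hist = [0] * (n + 1)
        for _ in range(sweep):
            i = rng.randrange(n)
            e_new = e + (1 if state[i] == 0 else -1)
            # flat-histogram acceptance: min(1, g(E_old) / g(E_new))
            if rng.random() < math.exp(min(0.0, ln_g[e] - ln_g[e_new])):
                state[i] ^= 1
                e = e_new
            ln_g[e] += ln_f
            hist[e] += 1
        if min(hist) > flat * sum(hist) / len(hist):
            ln_f /= 2.0   # histogram flat: refine the modification factor
    return ln_g
```

In the hybrid scheme described by the record, the accept/reject statistics gathered during this stage would also feed a transition matrix, which takes over once the modification factor is small.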

  20. HRMC: Hybrid Reverse Monte Carlo method with silicon and carbon potentials

    NASA Astrophysics Data System (ADS)

    Opletal, G.; Petersen, T. C.; O'Malley, B.; Snook, I. K.; McCulloch, D. G.; Yarovsky, I.

    2008-05-01

Fortran 77 code is presented for a hybrid of the Metropolis Monte Carlo (MMC) and Reverse Monte Carlo (RMC) methods for the simulation of amorphous silicon and carbon structures. In addition to the usual constraints on the pair correlation functions and average coordination, the code also incorporates an optional energy constraint. This energy constraint takes the form of either the Environment Dependent Interatomic Potential (applicable to silicon and carbon) or the original and modified Stillinger-Weber potentials (applicable to silicon). The code also allows porous systems to be modeled via a constraint on porosity and internal surface area, using a novel restriction on the available simulation volume. Program summary: Program title: HRMC version 1.0. Catalogue identifier: AEAO_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAO_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 200 894. No. of bytes in distributed program, including test data, etc.: 907 557. Distribution format: tar.gz. Programming language: FORTRAN 77. Computer: Any computer capable of running executables produced by the g77 Fortran compiler. Operating system: Unix, Windows. RAM: Depends on the type of empirical potential used, the number of atoms, and which constraints are employed. Classification: 7.7. Nature of problem: Atomic modeling using empirical potentials and experimental data. Solution method: Monte Carlo. Additional comments: The code is not standard FORTRAN 77 but includes some additional features, and therefore generates errors when compiled using the Nag95 compiler. It does compile successfully with the GNU g77 compiler (http://www.gnu.org/software/fortran/fortran.html). Running time: Depends on the type of empirical potential used, the number of atoms, and which constraints are employed. The test included in the distribution took 37 minutes on a DEC Alpha PC.

  1. The applicability of certain Monte Carlo methods to the analysis of interacting polymers

    SciTech Connect

    Krapp, D.M. Jr.

    1998-05-01

The authors consider polymers, modeled as self-avoiding walks with interactions on a hexagonal lattice, and examine the applicability of certain Monte Carlo methods for estimating their mean properties at equilibrium. Specifically, the authors use the pivot algorithm of Madras and Sokal with Metropolis rejection to locate the phase transition, which is known to occur at {beta}{sub crit} {approx} 0.99, and to recalculate the known value of the critical exponent {nu} {approx} 0.58 of the system at {beta} = {beta}{sub crit}. Although the pivot-Metropolis algorithm works well for short walks (N < 300), for larger N the Metropolis criterion combined with the self-avoidance constraint leads to an unacceptably small acceptance fraction. In addition, the algorithm becomes effectively non-ergodic, getting trapped in valleys whose centers are local energy minima in phase space, leading to convergence towards different values of {nu}. The authors use a variety of tools, e.g. entropy estimation and histograms, to improve the results for large N, but they are only of limited effectiveness. Their estimate of {beta}{sub crit} using smaller values of N is 1.01 {+-} 0.01, and the estimate for {nu} at this value of {beta} is 0.59 {+-} 0.005. They conclude that even a seemingly simple system, studied with a Monte Carlo algorithm that in principle satisfies ergodicity and detailed balance, can in practice fail to sample phase space accurately and thus prevent accurate estimation of thermal averages. This should serve as a warning to those who use Monte Carlo methods in complicated polymer folding calculations: the structure of the phase space combined with the algorithm itself can lead to surprising behavior, and simply increasing the number of samples does not necessarily lead to more accurate results.
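The pivot move itself is compact. The sketch below uses a square lattice instead of the paper's hexagonal lattice and omits the Metropolis weighting on the interaction energy, so it illustrates only the athermal (self-avoiding) part of the move:

```python
import random

# Lattice symmetries applied about the pivot: three rotations, two reflections.
OPS = [lambda x, y: (y, -x), lambda x, y: (-x, -y), lambda x, y: (-y, x),
       lambda x, y: (x, -y), lambda x, y: (-x, y)]

def pivot_step(walk, rng):
    """One pivot move on a square-lattice self-avoiding walk."""
    k = rng.randrange(1, len(walk) - 1)          # interior pivot site
    op = rng.choice(OPS)
    px, py = walk[k]
    new_tail = []
    for x, y in walk[k + 1:]:
        rx, ry = op(x - px, y - py)              # transform tail about pivot
        new_tail.append((px + rx, py + ry))
    # The symmetry is a bijection, so only head/tail overlap can occur.
    if set(walk[: k + 1]).intersection(new_tail):
        return walk                              # reject: self-avoidance broken
    return walk[: k + 1] + new_tail
```

In the interacting model of the record, an accepted self-avoiding proposal would additionally face a Metropolis test on the change in nearest-neighbour contact energy, which is exactly the combination whose acceptance fraction collapses at large N.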

  2. Self-optimizing Monte Carlo method for nuclear well logging simulation

    NASA Astrophysics Data System (ADS)

    Liu, Lianyan

    1997-09-01

In order to increase the efficiency of Monte Carlo simulation for nuclear well logging problems, a new method has been developed for variance reduction. With this method, an importance map is generated as a by-product of the regular Monte Carlo calculation and is later used to conduct splitting and Russian roulette for particle population control. By adopting a spatial mesh system that is independent of the physical geometrical configuration, the method offers superior user-friendliness. The new method is incorporated into the general-purpose Monte Carlo code MCNP4A through a patch file. Two nuclear well logging problems, a neutron porosity tool and a gamma-ray lithology density tool, are used to test its performance. The calculations are sped up over analog simulation by factors of 120 and 2600 for the neutron porosity tool and the gamma-ray lithology density tool, respectively. As measured by converged figures of merit, the new method outperforms MCNP's cell-based weight window by a factor of 4-6. An indirect comparison indicates that the new method also outperforms the AVATAR process for gamma-ray density tool problems. Even though it takes quite some time to generate a reasonable importance map from an analog run, a good initial map can create significant CPU time savings. This makes the method especially suitable for nuclear well logging problems, since one or several reference importance maps are usually available for a given tool. The study shows that the spatial mesh sizes should be chosen according to the mean free path. The overhead of the importance map generator is 6% and 14% for the neutron and gamma-ray cases, respectively. The ability to learn a correct importance map is also demonstrated; although false learning may happen, physical judgment aided by contributon maps can help diagnose it. Calibration and analysis are performed for the neutron tool and the gamma-ray tool. Because a very good initial importance map is always available after the first point has been calculated, high computing efficiency is maintained. The availability of contributon maps also provides an easy way to understand the logging measurement and to analyze the depth of investigation.
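The splitting/Russian-roulette population control that the importance map drives can be sketched generically. The weight-window bounds and the (weight, state) particle representation below are illustrative assumptions, not the MCNP4A patch described in the record:

```python
import random

def apply_weight_window(particles, w_low, w_high, rng=random):
    """Splitting and Russian roulette against a weight window [w_low, w_high].
    Each particle is a (weight, state) pair; state is opaque here."""
    survivors = []
    for w, s in particles:
        if w > w_high:
            # split a heavy particle into n lighter copies (weight conserved)
            n = int(w / w_high) + 1
            survivors.extend((w / n, s) for _ in range(n))
        elif w < w_low:
            # roulette a light particle; survivors are restored to w_low,
            # so the expected total weight is preserved
            if rng.random() < w / w_low:
                survivors.append((w_low, s))
        else:
            survivors.append((w, s))
    return survivors
```

An importance map turns into per-region windows: regions deemed important get low windows (more splitting, more samples), unimportant regions get high windows (more roulette).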

  3. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected by F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM in combination with Monte Carlo simulation (MCS) reduces the computational burden and makes rapid random sampling possible. The inverse uncertainty propagation is formulated as the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and of the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.
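The RSM-plus-MCS idea, fitting a cheap polynomial surrogate once and then sampling it heavily, can be sketched in one dimension. The quadratic surrogate and Gaussian input below are simplifying assumptions; the paper uses an incomplete fourth-order RSM over several parameters:

```python
import random

def fit_quadratic(xs, ys):
    """Least-squares response surface y ≈ c0 + c1·x + c2·x², solved via the
    3x3 normal equations with Gaussian elimination and partial pivoting."""
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            A[r] = [arc - f * acc for arc, acc in zip(A[r], A[col])]
            b[r] -= f * b[col]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):       # back-substitution
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c

def surrogate_mcs(c, mu, sigma, n=100000, seed=0):
    """Monte Carlo through the cheap surrogate instead of the full model."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu, sigma)
        total += c[0] + c[1] * x + c[2] * x * x
    return total / n
```

The expensive finite-element model is evaluated only at the design-of-experiments points used for the fit; the hundred thousand Monte Carlo draws then cost almost nothing.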

  4. Monte Carlo based statistical power analysis for mediation models: methods and software.

    PubMed

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method. PMID:24338601
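The bootstrap-within-Monte-Carlo power estimate can be sketched for the simplest mediation model x → m → y. This is pure Python rather than the bmem R package, the model has no direct effect, and the effect sizes, sample size and replication counts are illustrative assumptions:

```python
import random

def slope(xs, ys):
    """OLS slope of ys on xs (both centered)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def indirect_effect(x, m, y):
    """a*b: a from m ~ x, b from y ~ m after residualising m on x."""
    a = slope(x, m)
    m_res = [mi - a * xi for mi, xi in zip(m, x)]
    return a * slope(m_res, y)

def mediation_power(a=0.5, b=0.5, n=60, reps=60, boot=99, alpha=0.05, seed=0):
    """Monte Carlo power of the percentile-bootstrap test of a*b != 0."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        # simulate one dataset from the mediation model
        x = [rng.gauss(0, 1) for _ in range(n)]
        m = [a * xi + rng.gauss(0, 1) for xi in x]
        y = [b * mi + rng.gauss(0, 1) for mi in m]
        # percentile bootstrap CI for the indirect effect
        ests = []
        for _ in range(boot):
            s = [rng.randrange(n) for _ in range(n)]
            ests.append(indirect_effect([x[i] for i in s],
                                        [m[i] for i in s],
                                        [y[i] for i in s]))
        ests.sort()
        lo = ests[int(alpha / 2 * boot)]
        hi = ests[int((1 - alpha / 2) * boot)]
        if lo > 0.0 or hi < 0.0:      # CI excludes zero: mediation detected
            hits += 1
    return hits / reps
```

Nonnormal error distributions, the case the record emphasizes, would be introduced simply by replacing the `rng.gauss` draws in the data-generation step.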

  5. Reconstruction of magnetic domain structure using the reverse Monte Carlo method with an extended Fourier image

    NASA Astrophysics Data System (ADS)

    Tokii, Maki; Kita, Eiji; Mitsumata, Chiharu; Ono, Kanta; Yanagihara, Hideto; Matsumoto, Makoto

    2015-05-01

Visualization of the magnetic domain structure is indispensable to the investigation of magnetization processes and the coercivity mechanism, and requires a method for reconstructing the real-space image from the reciprocal-space image. This in turn requires solving the problem of missing phase information in the reciprocal-space image. We propose a method that extends the Fourier image with mean-value padding to compensate for the missing phase information. We visualized the magnetic domain structure using the reverse Monte Carlo method with simulated annealing to accelerate the calculation. With this technique, we demonstrated the restoration of the magnetic domain structure, obtained the magnetization and magnetic domain width, and reproduced the characteristic form that constitutes a magnetic domain.

  6. An asymptotic preserving Monte Carlo method for the multispecies Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Liu, Hong; Jin, Shi

    2016-01-01

An asymptotic preserving (AP) scheme is efficient in solving multiscale kinetic equations over a wide range of Knudsen numbers. In this paper, we generalize the asymptotic preserving Monte Carlo method (AP-DSMC) developed in [25] to the multispecies Boltzmann equation. This method is based on the successive penalty method [26], which originated from the BGK-penalization-based AP scheme developed in [7]. For the multispecies Boltzmann equation, the penalizing Maxwellian should use the unified Maxwellian suggested in [12]. We give the details of AP-DSMC for the multispecies Boltzmann equation, show its AP property, and verify through several numerical examples that the scheme allows time steps much larger than the mean free time, making it much more efficient than classical DSMC for flows with small Knudsen numbers.

  7. Method to account for arbitrary strains in kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Subramanian, Gopinath; Perez, Danny; Uberuaga, Blas P.; Tomé, Carlos N.; Voter, Arthur F.

    2013-04-01

    We present a method for efficiently recomputing rates in a kinetic Monte Carlo simulation when the existing rate catalog is modified by the presence of a strain field. We use the concept of the dipole tensor to estimate the changes in the kinetic barriers that comprise the catalog, thereby obviating the need for recomputing them from scratch. The underlying assumptions in the method are that linear elasticity is valid, and that the topology of the underlying potential energy surface (and consequently, the fundamental structure of the rate catalog) is not changed by the strain field. As a simple test case, we apply the method to a single vacancy in zirconium diffusing in the strain field of a dislocation, and discuss the consequences of the assumptions on simulating more complex materials.
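The core rate update follows directly from the dipole-tensor approximation described above: under linear elasticity the activation energy shifts by the contraction of the dipole-tensor difference with the strain, so the catalog rate can be rescaled instead of recomputed. A minimal sketch (the eV-based units and the illustrative tensors are assumptions):

```python
import math

K_B = 8.617333e-5  # Boltzmann constant, eV/K

def strained_rate(rate0, dipole_diff, strain, temperature):
    """Rescale a kMC rate under a homogeneous strain field.
    The barrier shifts by dE = -sum_ij (P_saddle - P_min)_ij * eps_ij,
    where dipole_diff is the 3x3 difference of elastic dipole tensors
    (eV per unit strain) between saddle point and minimum."""
    d_e = -sum(dipole_diff[i][j] * strain[i][j]
               for i in range(3) for j in range(3))
    # Arrhenius rescaling: rate0 already contains exp(-E0 / kT)
    return rate0 * math.exp(-d_e / (K_B * temperature))
```

This captures the paper's efficiency argument: only a 3x3 contraction and one exponential per event, with the validity caveat that the strain must not change the topology of the potential energy surface.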

  8. Application of scalar Monte Carlo probability density function method for turbulent spray flames

    SciTech Connect

    Raju, M.S.

    1996-12-01

The objective of the present work is twofold: (1) extend the coupled Monte Carlo probability density function (PDF)/computational fluid dynamics (CFD) computations to the modeling of turbulent spray flames, and (2) extend the PDF/SPRAY/CFD module to parallel computing in order to facilitate large-scale combustor computations. In this approach, the mean gas-phase velocity and turbulence fields are determined from a standard turbulence model, the joint composition of species and enthalpy from the solution of a modeled PDF transport equation, and a Lagrangian-based dilute spray model is used for the liquid-phase representation. The PDF transport equation is solved by a Monte Carlo method, and the mean gas-phase velocity and turbulence fields together with the liquid-phase equations are solved by existing state-of-the-art numerical representations. The application of the method to both open and confined axisymmetric swirl-stabilized spray flames shows good agreement with the measured data. Preliminary estimates indicate that a realistic gas turbine combustor simulation within a reasonable turnaround time is well within the reach of today's modern parallel computers. The article provides complete details of the overall algorithm, parallelization, and other numerical issues related to the coupling between the three solvers.

  9. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    NASA Astrophysics Data System (ADS)

    Noh, S. J.; Tachikawa, Y.; Shiiba, M.; Kim, S.

    2011-04-01

Data assimilation techniques have been widely applied to improve hydrologic prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach that accounts for the different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, WEP, is implemented for sequential data assimilation through the updating of state variables. Particle filtering is parallelized and implemented in a multi-core computing environment via the message passing interface (MPI). We compare the performance of the particle filters in terms of model efficiency, predictive QQ plots and particle diversity. Improved model efficiency and preserved particle diversity are found for the lagged regularized particle filter.

  10. Efficient continuous-time quantum Monte Carlo method for the ground state of correlated fermions

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Iazzi, Mauro; Corboz, Philippe; Troyer, Matthias

    2015-06-01

    We present the ground state extension of the efficient continuous-time quantum Monte Carlo algorithm for lattice fermions of M. Iazzi and M. Troyer, Phys. Rev. B 91, 241118 (2015), 10.1103/PhysRevB.91.241118. Based on continuous-time expansion of an imaginary-time projection operator, the algorithm is free of systematic error and scales linearly with projection time and interaction strength. Compared to the conventional quantum Monte Carlo methods for lattice fermions, this approach has greater flexibility and is easier to combine with powerful machinery such as histogram reweighting and extended ensemble simulation techniques. We discuss the implementation of the continuous-time projection in detail using the spinless t -V model as an example and compare the numerical results with exact diagonalization, density matrix renormalization group, and infinite projected entangled-pair states calculations. Finally we use the method to study the fermionic quantum critical point of spinless fermions on a honeycomb lattice and confirm previous results concerning its critical exponents.

  11. Adapting phase-switch Monte Carlo method for flexible organic molecules

    NASA Astrophysics Data System (ADS)

    Bridgwater, Sally; Quigley, David

    2014-03-01

The role of cholesterol in lipid bilayers has been widely studied via molecular simulation; however, there has been relatively little work on crystalline cholesterol in biological environments. Recent work has linked the crystallisation of cholesterol in the body with heart attacks and strokes. Any attempt to model this process will require new models and advanced sampling methods to capture and quantify the subtle polymorphism of solid cholesterol, in which two crystalline phases are separated by a phase transition close to body temperature. To this end, we have adapted phase-switch Monte Carlo for use with flexible molecules, to calculate the free energy difference between crystal polymorphs to a high degree of accuracy. The method samples an order parameter that divides the displacement space of the N molecules into regions energetically favourable for each polymorph, and this space is traversed using biased Monte Carlo. Results for a simple model of butane will be presented, demonstrating that conformational flexibility can be correctly incorporated within a phase-switching scheme. Extension to a coarse-grained model of cholesterol and the resulting free energies will be discussed.

  12. Investigation of a New Monte Carlo Method for the Transitional Gas Flow

    SciTech Connect

    Luo, X.; Day, Chr.

    2011-05-20

The Direct Simulation Monte Carlo method (DSMC) is well developed for rarefied gas flow in the transition flow regime, i.e. for Knudsen numbers roughly between 0.01 and 10. For Kn > 10 the gas flow is free molecular and can be simulated by the Test Particle Monte Carlo method (TPMC) without any problem, even for a complex 3D vacuum system. In this paper we investigate an approach to extend the TPMC to the transition flow regime by treating the collision between gas molecules as an interaction between a probe molecule and the gas background. Recently this collision mechanism has been implemented into ProVac3D, a new TPMC simulation program developed by Karlsruhe Institute of Technology (KIT). The preliminary simulation result shows a correct nonlinear increase of the gas flow. However, there is still a quantitative discrepancy with the experimental data, which means further improvement is needed.

  13. Efficient Markov Chain Monte Carlo Methods for Decoding Neural Spike Trains

    PubMed Central

    Ahmadian, Yashar; Pillow, Jonathan W.; Paninski, Liam

    2016-01-01

    Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed into the spike trains of a group of neurons. The form of the GLM likelihood ensures that the posterior distribution over the stimuli that caused an observed set of spike trains is log-concave so long as the prior is. This allows the maximum a posteriori (MAP) stimulus estimate to be obtained using efficient optimization algorithms. Unfortunately, the MAP estimate can have a relatively large average error when the posterior is highly non-Gaussian. Here we compare several Markov chain Monte Carlo (MCMC) algorithms that allow for the calculation of general Bayesian estimators involving posterior expectations (conditional on model parameters). An efficient version of the hybrid Monte Carlo (HMC) algorithm was significantly superior to other MCMC methods for Gaussian priors. When the prior distribution has sharp edges and corners, on the other hand, the “hit-and-run” algorithm performed better than other MCMC methods. Using these algorithms we show that for this latter class of priors the posterior mean estimate can have a considerably lower average error than MAP, whereas for Gaussian priors the two estimators have roughly equal efficiency. We also address the application of MCMC methods for extracting non-marginal properties of the posterior distribution. 
For example, by using MCMC to calculate the mutual information between the stimulus and response, we verify the validity of a computationally efficient Laplace approximation to this quantity for Gaussian priors in a wide range of model parameters; this makes direct model-based computation of the mutual information tractable even in the case of large observed neural populations, where methods based on binning the spike train fail. Finally, we consider the effect of uncertainty in the GLM parameters on the posterior estimators. PMID:20964539

  14. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    DOE PAGESBeta

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system, developed over the last decade, have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization and reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.

  15. Use of Monte Carlo Methods for Evaluating Probability of False Positives in Archaeoastronomy Alignments

    NASA Astrophysics Data System (ADS)

    Hull, Anthony B.; Ambruster, C.; Jewell, E.

    2012-01-01

Simple Monte Carlo simulations can assist the cultural astronomy researcher while the research design is developed, as well as the eventual evaluators of research products. Following the method we describe allows assessment of the probability of false positives associated with a site. Even seemingly evocative alignments may be meaningless, depending on the site characteristics and the number of degrees of freedom the researcher allows. In many cases, an observer may have to limit comments to "it is nice and it might be culturally meaningful" rather than saying "it is impressive so it must mean something". We describe a basic language with an associated set of attributes to be cataloged. These can be used to set up simple Monte Carlo simulations for a site. Without corroborating cultural evidence, or trends across similar attributes (for example, a number of sites showing the same anticipatory date), the Monte Carlo simulation can be used as a filter to establish the likelihood that the observed alignment phenomena are the result of random factors. Such analysis may temper any eagerness to prematurely attribute cultural meaning to an observation. For the most complete description of an archaeological site, we urge researchers to capture the site attributes in a manner that permits statistical analysis. We also encourage cultural astronomers to record that which does not work, and that which may seem to align but has no discernible meaning. Properly reporting situational information as a tenet of the research design will reduce the subjective nature of archaeoastronomical interpretation. Examples from field work are discussed.
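A Monte Carlo filter of the kind described can be sketched in a few lines: randomize the azimuths of a site's features and count how often at least one falls within tolerance of a culturally significant direction by chance alone. All parameters below (feature count, target azimuths, tolerance) are illustrative assumptions:

```python
import random

def false_positive_prob(n_features, targets, tol_deg, trials=20000, seed=0):
    """Chance that >= 1 of n_features randomly oriented features lands within
    tol_deg of any target azimuth (degrees) purely at random."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        for _ in range(n_features):
            az = rng.uniform(0.0, 360.0)
            # circular distance to the nearest target azimuth
            if any(min(abs(az - t), 360.0 - abs(az - t)) <= tol_deg
                   for t in targets):
                hits += 1
                break
    return hits / trials
```

The probability grows quickly with the number of features and targets (the researcher's degrees of freedom), which is exactly the effect the record warns about: a site with eight features and a generous tolerance will "align" with something surprisingly often.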

  16. Monte Carlo methods and their analysis for Coulomb collisions in multicomponent plasmas

    SciTech Connect

    Bobylev, A.V.; Potapenko, I.F.

    2013-08-01

Highlights: •A general approach to Monte Carlo methods for multicomponent plasmas is proposed. •We show numerical tests for the two-component (electrons and ions) case. •An optimal choice of parameters for speeding up the computations is discussed. •A rigorous estimate of the error of approximation is proved. -- Abstract: A general approach to Monte Carlo methods for Coulomb collisions is proposed. Its key idea is an approximation of Landau–Fokker–Planck equations by Boltzmann equations of quasi-Maxwellian kind, meaning that the total collision frequency for the corresponding Boltzmann equation does not depend on the velocities. This makes the simulation process very simple, since the collision pairs can be chosen arbitrarily, without restriction. It is shown that this approach includes the well-known methods of Takizuka and Abe (1977) [12] and Nanbu (1997) as particular cases, and generalizes the approach of Bobylev and Nanbu (2000). The numerical scheme of this paper is simpler than the schemes of Takizuka and Abe and of Nanbu. We derive it for the general case of multicomponent plasmas and show some numerical tests for the two-component (electrons and ions) case. An optimal choice of parameters for speeding up the computations is also discussed. It is proved that the order of approximation is not worse than O(√ε), where ε is a parameter of approximation equivalent to the time step Δt in earlier methods. A similar estimate is obtained for the methods of Takizuka and Abe and of Nanbu.

  17. Improved infrared thermography based image construction for biomedical applications using Markov Chain Monte Carlo method.

    PubMed

    Umadevi, V; Suresh, S; Raghavan, S V

    2009-01-01

Breast thermography is one of the scanning techniques used for breast cancer detection. From a breast thermal image alone it is difficult to infer tumor parameters such as depth, size and location, which are useful for the diagnosis and treatment of breast cancer. In our previous work (ITBIC) we proposed a framework for estimating tumor size using clever algorithms and a radiative heat transfer model. In this paper, we expand it to incorporate the more realistic Pennes bio-heat transfer model and the Markov Chain Monte Carlo (MCMC) method, and analyze its performance in terms of computational speed, accuracy, robustness against noisy inputs, ability to make use of prior information, and ability to estimate multiple parameters simultaneously. We discuss the influence of the various parameters used in its implementation. We apply this method to clinical data and extract reliable results for the first time using breast thermography. PMID:20198744

  18. The Linked Neighbour List (LNL) method for fast off-lattice Monte Carlo simulations of fluids

    NASA Astrophysics Data System (ADS)

    Mazzeo, M. D.; Ricci, M.; Zannoni, C.

    2010-03-01

    We present a new algorithm, called linked neighbour list (LNL), useful to substantially speed up off-lattice Monte Carlo simulations of fluids by avoiding the computation of the molecular energy before every attempted move. We introduce a few variants of the LNL method targeted to minimise memory footprint or augment memory coherence and cache utilisation. Additionally, we present a few algorithms which drastically accelerate neighbour finding. We test our methods on the simulation of a dense off-lattice Gay-Berne fluid subjected to periodic boundary conditions observing a speedup factor of about 2.5 with respect to a well-coded implementation based on a conventional link-cell. We provide several implementation details of the different key data structures and algorithms used in this work.

  19. Recent developments in auxiliary-field quantum Monte Carlo methods for cold atoms

    NASA Astrophysics Data System (ADS)

    Shi, Hao; Rosenberg, Peter; Vitali, Ettore; Chiesa, Simone; Zhang, Shiwei

    Exact calculations are performed on the two-dimensional strongly interacting, unpolarized, uniform Fermi gas with a zero-range attractive interaction. We describe recent advances in auxiliary-field quantum Monte Carlo techniques, which eliminate an infinite variance problem in the standard algorithm, and improve both acceptance ratio and efficiency. The new methods enable calculations on large enough lattices to reliably compute ground-state properties in the thermodynamic limit. An equation of state is obtained, with a parametrization provided, which can serve as a benchmark and allow accurate comparisons with experiments. The pressure, contact parameter, condensate fraction, and pairing gap will be presented. The same methods are also applied to obtain exact results on the two-dimensional strongly interacting Fermi gas in the presence of Rashba spin-orbit (SOC), providing insights on the interplay between pairing and SOC. Supported by NSF, DOE, and the Simons Foundation.

  20. Monte Carlo methods for localization of cones given multielectrode retinal ganglion cell recordings.

    PubMed

    Sadeghi, K; Gauthier, J L; Field, G D; Greschner, M; Agne, M; Chichilnisky, E J; Paninski, L

    2013-01-01

    It has recently become possible to identify cone photoreceptors in primate retina from multi-electrode recordings of ganglion cell spiking driven by visual stimuli of sufficiently high spatial resolution. In this paper we present a statistical approach to the problem of identifying the number, locations, and color types of the cones observed in this type of experiment. We develop an adaptive Markov Chain Monte Carlo (MCMC) method that explores the space of cone configurations, using a Linear-Nonlinear-Poisson (LNP) encoding model of ganglion cell spiking output, while analytically integrating out the functional weights between cones and ganglion cells. This method provides information about our posterior certainty about the inferred cone properties, and additionally leads to improvements in both the speed and quality of the inferred cone maps, compared to earlier "greedy" computational approaches. PMID:23194406
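    The configuration sampler described above is built on the standard Metropolis acceptance rule. A minimal sketch of that core ingredient on a scalar target follows; the paper's sampler moves over discrete cone configurations with the cone-to-cell weights integrated out analytically, which this toy does not attempt:

```python
import math
import random

def metropolis(log_post, x0, steps, scale, seed=0):
    """Random-walk Metropolis: propose a symmetric Gaussian move and
    accept it with probability min(1, posterior ratio)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(steps):
        xp = x + rng.gauss(0.0, scale)            # symmetric proposal
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:     # accept with min(1, ratio)
            x, lp = xp, lpp
        samples.append(x)
    return samples

# target: a standard-normal log-posterior (up to an additive constant)
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, steps=20_000, scale=1.0)
mean = sum(samples) / len(samples)
```

    Adaptive variants such as the one in the paper additionally tune the proposal during burn-in; the acceptance rule itself is unchanged.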

  1. Monte Carlo method for predicting of cardiac toxicity: hERG blocker compounds.

    PubMed

    Gobbi, Marco; Beeg, Marten; Toropova, Mariya A; Toropov, Andrey A; Salmona, Mario

    2016-05-27

    The estimation of the cardiotoxicity of compounds is an important task for drug discovery as well as for ecological risk assessment. The experimental estimation of this endpoint is complex and expensive, so theoretical computational methods are an attractive alternative to direct experiment. A model for the cardiac toxicity (pIC50) of 400 hERG blocker compounds is built up using the Monte Carlo method. Three different splits into a visible training set (in fact, the training set plus the calibration set) and an invisible validation set were examined. The predictive potential is very good for all examined splits. The statistical characteristics for the external validation set are (i) coefficient of determination r(2) = 0.90-0.93 and (ii) root-mean-squared error s = 0.30-0.40. PMID:27067105

  2. DSMC calculations for the double ellipse. [direct simulation Monte Carlo method

    NASA Technical Reports Server (NTRS)

    Moss, James N.; Price, Joseph M.; Celenligil, M. Cevdet

    1990-01-01

    The direct simulation Monte Carlo (DSMC) method involves the simultaneous computation of the trajectories of thousands of simulated molecules in simulated physical space. Rarefied flow about the double ellipse for test case 6.4.1 has been calculated with the DSMC method of Bird. The gas is assumed to be nonreacting nitrogen flowing at a 30 degree incidence with respect to the body axis, and for the surface boundary conditions, the wall is assumed to be diffuse with full thermal accommodation and at a constant wall temperature of 620 K. A parametric study is presented that considers the effect of variations of computational domain, gas model, cell size, and freestream density on surface quantities.

  3. Estimation of pressure-particle velocity impedance measurement uncertainty using the Monte Carlo method.

    PubMed

    Brandão, Eric; Flesch, Rodolfo C C; Lenzi, Arcanjo; Flesch, Carlos A

    2011-07-01

    The pressure-particle velocity (PU) impedance measurement technique is an experimental method used to measure the surface impedance and the absorption coefficient of acoustic samples in situ or under free-field conditions. In this paper, the measurement uncertainty of the absorption coefficient determined using the PU technique is explored by applying the Monte Carlo method. It is shown that, because of the uncertainty, it is particularly difficult to measure samples with low absorption, and that difficulties associated with the localization of the acoustic centers of the sound source and the PU sensor affect the quality of the measurement roughly to the same extent as errors in the transfer function between pressure and particle velocity do. PMID:21786864
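    The general recipe behind such Monte Carlo uncertainty evaluations (in the spirit of GUM Supplement 1) can be sketched as follows; the measurement model and the input uncertainties here are hypothetical, not taken from the paper:

```python
import math
import random

def mc_uncertainty(model, inputs, n=100_000, seed=1):
    """Monte Carlo uncertainty evaluation: draw the input quantities from
    their assigned (here Gaussian) distributions, push every draw through
    the measurement model, and summarise the spread of the output."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n):
        draw = {name: rng.gauss(mu, sigma) for name, (mu, sigma) in inputs.items()}
        vals.append(model(**draw))
    m = sum(vals) / n
    s = math.sqrt(sum((v - m) ** 2 for v in vals) / (n - 1))
    return m, s

# hypothetical measurement model: an amplitude attenuated by a noisy phase
mean, std = mc_uncertainty(
    lambda amp, phase: amp * math.cos(phase),
    {"amp": (1.0, 0.01), "phase": (0.0, 0.05)},
)
```

    Unlike first-order (GUM) propagation, the sampled output distribution remains valid when the model is nonlinear in its inputs, as an absorption-coefficient model generally is.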

  4. Applications of the Monte Carlo method in nuclear physics using the GEANT4 toolkit

    SciTech Connect

    Moralles, Mauricio; Guimaraes, Carla C.; Menezes, Mario O.; Bonifacio, Daniel A. B.; Okuno, Emico; Guimaraes, Valdir; Murata, Helio M.; Bottaro, Marcio

    2009-06-03

    The capabilities of personal computers allow the application of Monte Carlo methods to simulate very complex problems that involve the transport of particles through matter. Among the several codes commonly employed in nuclear physics problems, GEANT4 has received great attention in recent years, mainly due to its flexibility and the possibility for users to extend it. Unlike other Monte Carlo codes, GEANT4 is a toolkit written in an object-oriented language (C++) that includes the mathematical engines of several physical processes, which are suitable for the transport of practically all types of particles and heavy ions. GEANT4 also has several tools to define materials, geometry, sources of radiation, beams of particles, electromagnetic fields, and graphical visualization of the experimental setup. After a brief description of the GEANT4 toolkit, this presentation reports investigations carried out by our group that involve simulations in the areas of dosimetry, nuclear instrumentation and medical physics. The physical processes available for photons, electrons, positrons and heavy ions were used in these simulations.

  5. Applications of the Monte Carlo method in nuclear physics using the GEANT4 toolkit

    NASA Astrophysics Data System (ADS)

    Moralles, Maurício; Guimarães, Carla C.; Bonifácio, Daniel A. B.; Okuno, Emico; Murata, Hélio M.; Bottaro, Márcio; Menezes, Mário O.; Guimarães, Valdir

    2009-06-01

    The capabilities of personal computers allow the application of Monte Carlo methods to simulate very complex problems that involve the transport of particles through matter. Among the several codes commonly employed in nuclear physics problems, GEANT4 has received great attention in recent years, mainly due to its flexibility and the possibility for users to extend it. Unlike other Monte Carlo codes, GEANT4 is a toolkit written in an object-oriented language (C++) that includes the mathematical engines of several physical processes, which are suitable for the transport of practically all types of particles and heavy ions. GEANT4 also has several tools to define materials, geometry, sources of radiation, beams of particles, electromagnetic fields, and graphical visualization of the experimental setup. After a brief description of the GEANT4 toolkit, this presentation reports investigations carried out by our group that involve simulations in the areas of dosimetry, nuclear instrumentation and medical physics. The physical processes available for photons, electrons, positrons and heavy ions were used in these simulations.

  6. GUINEVERE experiment: Kinetic analysis of some reactivity measurement methods by deterministic and Monte Carlo codes

    SciTech Connect

    Bianchini, G.; Burgio, N.; Carta, M.; Peluso, V.; Fabrizio, V.; Ricci, L.

    2012-07-01

    The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of the ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report focuses on the simulation by deterministic (ERANOS French code) and Monte Carlo (MCNPX US code) calculations of three reactivity measurement techniques, Slope ({alpha}-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)

  7. Calculations of alloy phases with a direct Monte-Carlo method

    SciTech Connect

    Faulkner, J.S.; Wang, Yang; Horvath, E.A.; Stocks, G.M.

    1994-09-01

    A method for calculating the boundaries that describe solid-solid phase transformations in the phase diagrams of alloys is described. The method is first-principles in the sense that the only input is the atomic numbers of the constituents. It proceeds from the observation that the crux of the Monte-Carlo method for obtaining the equilibrium distribution of atoms in an alloy is a calculation of the energy required to replace an A atom on site i with a B atom when the configuration κ of the atoms on the neighboring sites is specified, ΔH_κ(A→B) = E_B^κ − E_A^κ. Normally, this energy difference is obtained by introducing interatomic potentials, v_ij, into an Ising Hamiltonian, but the authors calculate it using the embedded cluster method (ECM). In the ECM an A or B atom is placed at the center of a cluster of atoms with the specified configuration κ, and the atoms on all the other sites in the alloy are simulated by the effective scattering matrix obtained from the coherent potential approximation. The interchange energy is calculated directly from the electronic structure of the cluster. The table of ΔH_κ(A→B) values for all configurations κ and several alloy concentrations is used in a Monte Carlo calculation that predicts the phase of the alloy at any temperature and concentration. The detailed shapes of the miscibility gaps in the palladium-rhodium and copper-nickel alloy systems are shown.
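    A toy Metropolis loop over a tabulated interchange energy illustrates the scheme; the energy values below are made up, standing in for the embedded-cluster results the authors compute from electronic structure:

```python
import math
import random

# Hypothetical interchange-energy table DH[k], k = number of B neighbours;
# toy values standing in for embedded-cluster results (units arbitrary)
DH = {k: 0.1 * (2 - k) for k in range(5)}

def sweep(lattice, beta, rng):
    """One Metropolis sweep over a periodic 2D square lattice of 0 (A) /
    1 (B) sites: propose replacing the atom on a random site and accept
    with probability min(1, exp(-beta * dE))."""
    n = len(lattice)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        k = (lattice[(i + 1) % n][j] + lattice[(i - 1) % n][j]
             + lattice[i][(j + 1) % n] + lattice[i][(j - 1) % n])
        dE = DH[k] if lattice[i][j] == 0 else -DH[k]   # A->B costs +DH, B->A costs -DH
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            lattice[i][j] ^= 1   # flip A <-> B

rng = random.Random(7)
lattice = [[0] * 8 for _ in range(8)]   # start from pure A
for _ in range(5):
    sweep(lattice, beta=2.0, rng=rng)
nB = sum(sum(row) for row in lattice)   # number of B atoms after 5 sweeps
```

    Repeating such sweeps at many temperatures and concentrations, and recording where the equilibrium structure changes, is what traces out the phase boundaries.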

  8. Hybrid Monte Carlo/Deterministic Methods for Accelerating Active Interrogation Modeling

    SciTech Connect

    Peplow, Douglas E.; Miller, Thomas Martin; Patton, Bruce W; Wagner, John C

    2013-01-01

    The potential for smuggling special nuclear material (SNM) into the United States is a major concern to homeland security, so federal agencies are investigating a variety of preventive measures, including detection and interdiction of SNM during transport. One approach for SNM detection, called active interrogation, uses a radiation source, such as a beam of neutrons or photons, to scan cargo containers and detect the products of induced fissions. In realistic cargo transport scenarios, the process of inducing and detecting fissions in SNM is difficult due to the presence of various and potentially thick materials between the radiation source and the SNM, and the practical limitations on radiation source strength and detection capabilities. Therefore, computer simulations are being used, along with experimental measurements, in efforts to design effective active interrogation detection systems. The computer simulations mostly consist of simulating radiation transport from the source to the detector region(s). Although the Monte Carlo method is predominantly used for these simulations, difficulties persist related to calculating statistically meaningful detector responses in practical computing times, thereby limiting their usefulness for design and evaluation of practical active interrogation systems. In previous work, the benefits of hybrid methods that use the results of approximate deterministic transport calculations to accelerate high-fidelity Monte Carlo simulations have been demonstrated for source-detector type problems. In this work, the hybrid methods are applied and evaluated for three example active interrogation problems. Additionally, a new approach is presented that uses multiple goal-based importance functions depending on a particle's relevance to the ultimate goal of the simulation. Results from the examples demonstrate that the application of hybrid methods to active interrogation problems dramatically increases their calculational efficiency.

  9. Quantum Monte Carlo method for the ground state of many-boson systems

    SciTech Connect

    Purwanto, Wirawan; Zhang, Shiwei

    2004-11-01

    We formulate a quantum Monte Carlo (QMC) method for calculating the ground state of many-boson systems. The method is based on a field-theoretical approach, and is closely related to existing fermion auxiliary-field QMC methods which are applied in several fields of physics. The ground-state projection is implemented as a branching random walk in the space of permanents consisting of identical single-particle orbitals. Any single-particle basis can be used, and the method is in principle exact. We illustrate this method with a trapped atomic boson gas, where the atoms interact via an attractive or repulsive contact two-body potential. We choose as the single-particle basis a real-space grid. We compare with exact results in small systems and arbitrarily sized systems of untrapped bosons with attractive interactions in one dimension, where analytical solutions exist. We also compare with the corresponding Gross-Pitaevskii (GP) mean-field calculations for trapped atoms, and discuss the close formal relation between our method and the GP approach. Our method provides a way to systematically improve upon GP while using the same framework, capturing interaction and correlation effects with a stochastic, coherent ensemble of noninteracting solutions. We discuss various algorithmic issues, including importance sampling and the back-propagation technique for computing observables, and illustrate them with numerical studies. We show results for systems with up to N ≈ 400 bosons.

  10. Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.

    2013-12-01

    We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.

  11. A modular method to handle multiple time-dependent quantities in Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Shin, J.; Perl, J.; Schümann, J.; Paganetti, H.; Faddegon, B. A.

    2012-06-01

    A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call ‘Time Features’. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc, takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), and then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel accompanied by beam current modulation produces a spread-out Bragg peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method.
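    A minimal sketch of the idea, assuming each quantity's value is supplied by a function of time and a Sequence samples time either sequentially or randomly; the class and parameter names are illustrative, not the TOPAS API:

```python
import random

class TimeFeature:
    """A simulation quantity whose value is an arbitrary function of time
    (a minimal stand-in for the 'Time Feature' grammar described above)."""
    def __init__(self, fn):
        self.fn = fn
    def value(self, t):
        return self.fn(t)

def run(features, t_end, steps, mode="sequential", seed=0):
    """The Sequence: sample time sequentially at equal increments or
    uniformly at random, then evaluate every time-dependent quantity."""
    rng = random.Random(seed)
    history = []
    for i in range(steps):
        t = (i + 0.5) * t_end / steps if mode == "sequential" else rng.uniform(0.0, t_end)
        history.append({name: f.value(t) for name, f in features.items()})
    return history

# illustrative quantities: a spinning modulator-wheel angle and a ramped beam current
features = {
    "wheel_angle_deg": TimeFeature(lambda t: (360.0 * 10.0 * t) % 360.0),  # 10 Hz spin
    "beam_current":    TimeFeature(lambda t: 1.0 + 0.5 * t),               # linear ramp
}
hist = run(features, t_end=1.0, steps=4)
```

    Because each quantity reads its own Time Feature from a shared sampled time, any number of coupled time-dependent quantities can run in one simulation at any time resolution, which is the modularity the paper emphasises.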

  12. Finite-Temperature Variational Monte Carlo Method for Strongly Correlated Electron Systems

    NASA Astrophysics Data System (ADS)

    Takai, Kensaku; Ido, Kota; Misawa, Takahiro; Yamaji, Youhei; Imada, Masatoshi

    2016-03-01

    A new computational method for finite-temperature properties of strongly correlated electrons is proposed by extending the variational Monte Carlo method originally developed for the ground state. The method is based on the path integral in the imaginary-time formulation, starting from the infinite-temperature state that is well approximated by a small number of certain random initial states. Lower temperatures are progressively reached by the imaginary-time evolution. The algorithm follows the framework of the quantum transfer matrix and finite-temperature Lanczos methods, but we extend them to treat much larger system sizes without the negative sign problem by optimizing the truncated Hilbert space on the basis of the time-dependent variational principle (TDVP). This optimization algorithm is equivalent to the stochastic reconfiguration (SR) method that has been frequently used for the ground state to optimally truncate the Hilbert space. The obtained finite-temperature states allow an interpretation based on the thermal pure quantum (TPQ) state instead of the conventional canonical-ensemble average. Our method is tested for the one- and two-dimensional Hubbard models and its accuracy and efficiency are demonstrated.

  13. Markov Chain Monte Carlo Sampling Methods for 1D Seismic and EM Data Inversion

    Energy Science and Technology Software Center (ESTSC)

    2008-09-22

    This software provides several Markov chain Monte Carlo sampling methods for the Bayesian model developed for inverting 1D marine seismic and controlled source electromagnetic (CSEM) data. The current software can be used for individual inversion of seismic AVO and CSEM data and for joint inversion of both seismic and EM data sets. The structure of the software is very general and flexible, and it allows users to incorporate their own forward simulation codes and rock physics model codes easily into this software. Although the software was developed using the C and C++ computer languages, the user-supplied codes can be written in C, C++, or various versions of Fortran. The software provides clear interfaces for users to plug in their own codes. The output of this software is in a format that the free R software CODA can directly read to build MCMC objects.

  14. An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks

    NASA Technical Reports Server (NTRS)

    Kim, Stacy

    2011-01-01

    Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observation and for understanding the conditions for planet formation and migration. However, certain areas of the disk such as the optically thick disk interior are under-sampled, or are of particular interest such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet. To improve the sampling, photon packets can be preferentially scattered and reemitted toward the preferred locations at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracies and calculation speeds.
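    The weighting trick described above is ordinary importance sampling: emit packets preferentially toward the region of interest and scale their energies by the ratio of physical to biased probabilities. A toy sketch with a two-way "toward/away" choice (the 50/50 physical split is an assumption for illustration, not from the paper):

```python
import random

def biased_emission(n_packets, p_toward, seed=2):
    """Emit packets 'toward' the region of interest with biased probability
    p_toward instead of the physical 0.5, weighting each packet by
    physical/biased probability so the average energy flux is conserved."""
    rng = random.Random(seed)
    toward_w = away_w = 0.0
    for _ in range(n_packets):
        if rng.random() < p_toward:
            toward_w += 0.5 / p_toward          # weight < 1: direction over-sampled
        else:
            away_w += 0.5 / (1.0 - p_toward)    # weight > 1: direction under-sampled
    return toward_w / n_packets, away_w / n_packets

toward, away = biased_emission(200_000, p_toward=0.8)
# both weighted estimates recover the physical 50/50 energy split on average
```

    The estimate toward the preferred region now rests on four times as many packets, at the price of increased variance in the under-sampled direction, which mirrors the paper's finding that better sampling in one place does not automatically improve overall accuracy.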

  15. Monte Carlo Modeling of Photon Interrogation Methods for Characterization of Special Nuclear Material

    SciTech Connect

    Pozzi, Sara A; Downar, Thomas J; Padovani, Enrico; Clarke, Shaun D

    2006-01-01

    This work illustrates a methodology based on photon interrogation and coincidence counting for determining the characteristics of fissile material. The feasibility of the proposed methods was demonstrated using a Monte Carlo code system to simulate the full statistics of the neutron and photon field generated by the photon interrogation of fissile and non-fissile materials. Time correlation functions between detectors were simulated for photon beam-on and photon beam-off operation. In the latter case, the correlation signal is obtained via delayed neutrons from photofission, which induce further fission chains in the nuclear material. An analysis methodology was demonstrated based on features selected from the simulated correlation functions and on the use of artificial neural networks. We show that the methodology can reliably differentiate between highly enriched uranium and plutonium. Furthermore, the mass of the material can be determined with a relative error of about 12%. Keywords: MCNP, MCNP-PoliMi, Artificial neural network, Correlation measurement, Photofission

  16. Studies of materials from simple metal atoms by quantum Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Rasch, Kevin; Mitas, Lubos

    2015-03-01

    We carry out quantum Monte Carlo (QMC) calculations of systems from simple metal elements such as Li and Na with the goal of studying the cohesive/binding energies, structural characteristics as well as the accuracy of QMC methods for these elements. For Na we test small-core pseudopotentials against large-core pseudopotentials with core polarization and relaxation correction potentials. We test orbital sets from several DFT functionals in order to assess the accuracy of the corresponding wave functions and fixed-node biases. It turns out that the valence correlation energies are very accurate, typically 97% or higher in most of the tested systems. This provides a validation framework for further QMC studies of these systems in non-equilibrium conformations and at high pressures.

  17. A Monte Carlo Method for Projecting Uncertainty in 2D Lagrangian Trajectories

    NASA Astrophysics Data System (ADS)

    Robel, A.; Lozier, S.; Gary, S. F.

    2009-12-01

    In this study, a novel method is proposed for modeling the propagation of uncertainty due to subgrid-scale processes through a Lagrangian trajectory advected by ocean surface velocities. The primary motivation and application is differentiating between active and passive trajectories for sea turtles as observed through satellite telemetry. A spatiotemporal launch box is centered on the time and place of actual launch and populated with launch points. Synthetic drifters are launched at each of these locations, adding, at each time step along the trajectory, Monte Carlo perturbations in velocity scaled to the natural variability of the velocity field. The resulting trajectory cloud provides a dynamically evolving density field of synthetic drifter locations that represent the projection of subgrid-scale uncertainty out in time. Subsequently, by relaunching synthetic drifters at points along the trajectory, plots are generated in a daisy chain configuration of the “most likely passive pathways” for the drifter.
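    The perturbed-advection scheme can be sketched as follows; the uniform current and the noise scale are illustrative stand-ins for the ocean surface velocity field and its natural variability:

```python
import random

def trajectory_cloud(x0, y0, velocity, n_drifters, n_steps, dt, sigma, seed=0):
    """Advect synthetic drifters with a given surface velocity field,
    adding a Monte Carlo velocity perturbation of scale sigma at every
    time step; returns the cloud of final positions (in metres)."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_drifters):
        x, y = x0, y0
        for _ in range(n_steps):
            u, v = velocity(x, y)
            x += (u + rng.gauss(0.0, sigma)) * dt
            y += (v + rng.gauss(0.0, sigma)) * dt
        finals.append((x, y))
    return finals

# illustrative field: uniform 0.2 m/s eastward current; sigma mimics subgrid variability
cloud = trajectory_cloud(0.0, 0.0, lambda x, y: (0.2, 0.0),
                         n_drifters=500, n_steps=100, dt=3600.0, sigma=0.05)
mean_x = sum(p[0] for p in cloud) / len(cloud)
mean_y = sum(p[1] for p in cloud) / len(cloud)
```

    Binning the cloud at each time step gives the evolving density field of likely passive positions; an observed turtle track that leaves the dense region suggests active swimming rather than passive drift.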

  18. Topics in structural dynamics: Nonlinear unsteady transonic flows and Monte Carlo methods in acoustics

    NASA Technical Reports Server (NTRS)

    Haviland, J. K.

    1974-01-01

    The results of two unrelated studies are reported. The first was an investigation of the formulation of the equations for non-uniform unsteady flows, by perturbation of an irrotational flow to obtain the linear Green's equation. The resulting integral equation was found to contain a kernel which could be expressed as the solution of the adjoint flow equation, a linear equation for small perturbations, but with non-constant coefficients determined by the steady flow conditions. It is believed that the non-uniform flow effects may prove important in transonic flutter, and that in such cases, the use of doublet type solutions of the wave equation would then prove to be erroneous. The second task covered an initial investigation into the use of the Monte Carlo method for the solution of acoustical field problems. Computed results are given for a rectangular room problem, and for a problem involving a circular duct with a source located at the closed end.

  19. Markov Chain Monte Carlo Sampling Methods for 1D Seismic and EM Data Inversion

    SciTech Connect

    2008-09-22

    This software provides several Markov chain Monte Carlo sampling methods for the Bayesian model developed for inverting 1D marine seismic and controlled source electromagnetic (CSEM) data. The current software can be used for individual inversion of seismic AVO and CSEM data and for joint inversion of both seismic and EM data sets. The structure of the software is very general and flexible, and it allows users to incorporate their own forward simulation codes and rock physics model codes easily into this software. Although the software was developed using the C and C++ computer languages, the user-supplied codes can be written in C, C++, or various versions of Fortran. The software provides clear interfaces for users to plug in their own codes. The output of this software is in a format that the free R software CODA can directly read to build MCMC objects.

  20. Calculation of electronic properties of multilayer graphene with Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Atasever, Ö.; Özdemir, M. D.; Özdemir, B.; Yarar, Z.; Özdemir, M.

    2016-03-01

    In this study, the electronic transport properties of bilayer graphene are investigated by an ensemble Monte Carlo method. Bilayer graphene has a quadratic energy dependence on wave vector near the points known as Dirac points in the reciprocal lattice. For bilayer graphene the scatterings due to acoustic and optic phonons and ionized impurities are taken into account. Velocity-time and steady-state velocity-applied field curves are obtained, and from the slope of the velocity-field curves at low fields, the low-field mobility of bilayer graphene is obtained. The dependence of the mobility of bilayer graphene on temperature, electron concentration, impurity concentration, and acoustic and optic deformation constants is investigated, and it is observed that the most important mechanism limiting the mobility is phonon scattering.

  1. Electronic correlation effects in a fullerene molecule studied by the variational Monte Carlo method

    SciTech Connect

    Krivnov, V. Y.; Shamovsky, I. L. (Chemistry Department, University of the West Indies, Mona Campus, St. Andrew, Kingston 7); Tornau, E. E.; Rosengren, A.

    1994-10-15

    Electron-correlation effects in the fullerene molecule and its ions are investigated in the framework of the Hubbard model. The variational Monte Carlo method and the Gutzwiller wave function are used. Most attention is paid to the case of intermediate interactions, but the strong coupling limit, where the Hubbard model reduces to the antiferromagnetic Heisenberg model, is also considered for the fullerene molecule. In this case we obtain a very low variational ground state energy. Further, we have calculated the main spin correlation functions in the ground state. Only short-range order is found. The pairing energy of two electrons added to a fullerene molecule or to a fullerene ion is also calculated. Contrary to the results obtained by second-order perturbation theory, pair binding is not found.

  2. Investigation of a V{sub 15} magnetic molecular nanocluster by the Monte Carlo method

    SciTech Connect

    Khizriev, K. Sh.; Dzhamalutdinova, I. S.; Taaev, T. A.

    2013-06-15

    Exchange interactions in a V{sub 15} magnetic molecular nanocluster are considered, and the process of magnetization reversal for various values of the set of exchange constants is analyzed by the Monte Carlo method. It is shown that the best agreement between the field dependence of susceptibility and experimental results is observed for the following set of exchange interaction constants in a V{sub 15} magnetic molecular nanocluster: J = 500 K, J′ = 150 K, J″ = 225 K, J{sub 1} = 50 K, and J{sub 2} = 50 K. It is observed for the first time that, in a strong magnetic field, for each of the three transitions from low-spin to high-spin states, the heat capacity exhibits two closely spaced maxima.

  3. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    SciTech Connect

    Slattery, S. R.; Wilson, P. P. H.; Evans, T. M.

    2013-07-01

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of stochastic histories from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem to test the models for symmetric operators. In general, the derived approximations show good agreement with measured computational results. (authors)

  4. The Calculation of Thermal Conductivities by Three Dimensional Direct Simulation Monte Carlo Method.

    PubMed

    Zhao, Xin-Peng; Li, Zeng-Yao; Liu, He; Tao, Wen-Quan

    2015-04-01

Three dimensional direct simulation Monte Carlo (DSMC) method with the variable soft sphere (VSS) collision model is implemented to solve the Boltzmann equation and to acquire the heat flux between two parallel plates (Fourier flow). The gaseous thermal conductivity of nitrogen is derived based on Fourier's law under local equilibrium conditions at temperatures from 270 to 1800 K and pressures from 0.5 to 100,000 Pa, and compared with experimental data and with the Eucken relation from Chapman and Enskog (CE) theory. It is concluded that the present results are consistent with the experimental data but are much higher than those given by the Eucken relation, especially at high temperature. The contribution of molecular internal energy to the gaseous thermal conductivity becomes significant with increasing temperature. PMID:26353582
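
    The conductivity-extraction step described above reduces to two one-line formulas: Fourier's law inverted for the DSMC-sampled flux, and the Eucken benchmark. A minimal sketch with illustrative nitrogen-like numbers (the flux, viscosity, and specific heat below are assumed stand-ins, not the paper's data):

```python
def fourier_conductivity(heat_flux, gap, t_hot, t_cold):
    """Fourier's law inverted for conductivity, k = q * L / (T_hot - T_cold),
    applied to the sampled heat flux once the flow between the plates is
    steady and near local equilibrium."""
    return heat_flux * gap / (t_hot - t_cold)

def eucken_conductivity(mu, c_v, gamma):
    """Eucken relation from CE theory, k = mu * c_v * (9*gamma - 5) / 4,
    the benchmark the DSMC results are compared against."""
    return mu * c_v * (9.0 * gamma - 5.0) / 4.0

# Hypothetical numbers of roughly nitrogen-like magnitude near 300 K:
# 1 mm gap, 300 K / 280 K plates, 520 W/m^2 sampled flux.
k_dsmc = fourier_conductivity(520.0, 1e-3, 300.0, 280.0)   # W/(m K)
k_euck = eucken_conductivity(1.78e-5, 743.0, 1.4)          # W/(m K)
```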

  5. An analysis of the convergence of the direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Galitzine, Cyril; Boyd, Iain D.

    2015-05-01

In this article, a rigorous framework for the analysis of the convergence of the direct simulation Monte Carlo (DSMC) method is presented. It is applied to the simulation of two test cases: an axisymmetric jet at a Knudsen number of 0.01 and a Mach number of 1, and a two-dimensional cylinder flow at a Knudsen number of 0.05 and a Mach number of 10. The rate of convergence of sampled quantities is found to be well predicted by an extended form of the Central Limit Theorem that takes into account the correlation of samples but requires the calculation of correlation spectra. A simplified analytical model that does not require correlation spectra is then constructed to model the effect of sample correlation. It is then used to obtain an a priori estimate of the convergence error.
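
    The correlated-sample form of the Central Limit Theorem can be illustrated by a standard-error estimate that inflates the naive sigma/sqrt(N) by the integrated autocorrelation of the sample sequence. This is a hedged sketch of the general idea, not the correlation-spectrum machinery of the paper:

```python
import math, random

def autocorr(x, k):
    """Lag-k sample autocorrelation."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    cov = sum((x[i] - m) * (x[i + k] - m) for i in range(n - k)) / n
    return cov / var

def correlated_stderr(x, max_lag=50):
    """Standard error of the mean for correlated samples: the naive
    sigma/sqrt(N) is inflated by the integrated autocorrelation factor
    1 + 2 * sum_k rho_k (sum truncated at the first non-positive term)."""
    n = len(x)
    var = sum((v - sum(x) / n) ** 2 for v in x) / (n - 1)
    tau = 1.0
    for k in range(1, min(max_lag, n // 2)):
        r = autocorr(x, k)
        if r <= 0.0:
            break
        tau += 2.0 * r
    return math.sqrt(var * tau / n)
```

Applied to a strongly correlated sequence (e.g. an AR(1) process with coefficient 0.9, which mimics consecutive DSMC samples of a slowly evolving cell quantity), the corrected error is several times the naive one.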

  6. Liquid crystal free energy relaxation by a theoretically informed Monte Carlo method using a finite element quadrature approach.

    PubMed

Armas-Pérez, Julio C; Hernández-Ortiz, Juan P; de Pablo, Juan J

    2015-12-28

    A theoretically informed Monte Carlo method is proposed for Monte Carlo simulation of liquid crystals on the basis of theoretical representations in terms of coarse-grained free energy functionals. The free energy functional is described in the framework of the Landau-de Gennes formalism. A piecewise finite element discretization is used to approximate the alignment field, thereby providing an excellent geometrical representation of curved interfaces and accurate integration of the free energy. The method is suitable for situations where the free energy functional includes highly non-linear terms, including chirality or high-order deformation modes. The validity of the method is established by comparing the results of Monte Carlo simulations to traditional Ginzburg-Landau minimizations of the free energy using a finite difference scheme, and its usefulness is demonstrated in the context of simulations of chiral liquid crystal droplets with and without nanoparticle inclusions. PMID:26723642

  7. Liquid crystal free energy relaxation by a theoretically informed Monte Carlo method using a finite element quadrature approach

    NASA Astrophysics Data System (ADS)

Armas-Pérez, Julio C.; Hernández-Ortiz, Juan P.; de Pablo, Juan J.

    2015-12-01

    A theoretically informed Monte Carlo method is proposed for Monte Carlo simulation of liquid crystals on the basis of theoretical representations in terms of coarse-grained free energy functionals. The free energy functional is described in the framework of the Landau-de Gennes formalism. A piecewise finite element discretization is used to approximate the alignment field, thereby providing an excellent geometrical representation of curved interfaces and accurate integration of the free energy. The method is suitable for situations where the free energy functional includes highly non-linear terms, including chirality or high-order deformation modes. The validity of the method is established by comparing the results of Monte Carlo simulations to traditional Ginzburg-Landau minimizations of the free energy using a finite difference scheme, and its usefulness is demonstrated in the context of simulations of chiral liquid crystal droplets with and without nanoparticle inclusions.
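
    Stripped of the finite element quadrature and the Landau-de Gennes tensor order parameter, the core of a theoretically informed Monte Carlo relaxation is Metropolis sampling of nodal field values against a discretized free energy functional. A one-dimensional scalar sketch under those simplifications (all coefficients are illustrative assumptions):

```python
import math, random

def free_energy(phi, a=-1.0, b=1.0, kappa=0.5, h=1.0):
    """Discretized Landau-type free energy on a periodic 1-D grid,
    F = sum_i [a*phi_i^2 + b*phi_i^4 + kappa*((phi_{i+1}-phi_i)/h)^2] * h,
    a scalar stand-in for the Landau-de Gennes functional of the paper."""
    f = 0.0
    n = len(phi)
    for i in range(n):
        grad = (phi[(i + 1) % n] - phi[i]) / h
        f += (a * phi[i] ** 2 + b * phi[i] ** 4 + kappa * grad ** 2) * h
    return f

def relax(n=32, kT=0.05, steps=20000, seed=4):
    """Theoretically informed MC in miniature: Metropolis moves on nodal
    field values, accepted with probability exp(-dF/kT), relax the field
    toward a minimum of the free energy."""
    random.seed(seed)
    phi = [random.uniform(-0.1, 0.1) for _ in range(n)]
    f = free_energy(phi)
    for _ in range(steps):
        i = random.randrange(n)
        old = phi[i]
        phi[i] += random.gauss(0.0, 0.1)
        f_new = free_energy(phi)
        if f_new < f or random.random() < math.exp(-(f_new - f) / kT):
            f = f_new            # accept the move
        else:
            phi[i] = old         # reject and restore
    return phi, f
```

For these coefficients the bulk minima sit at phi = ±1/sqrt(2), so a relaxed configuration settles near those values up to domain walls and small thermal noise.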

  8. Multi-Physics Markov Chain Monte Carlo Methods for Subsurface Flows

    NASA Astrophysics Data System (ADS)

    Rigelo, J.; Ginting, V.; Rahunanthan, A.; Pereira, F.

    2014-12-01

For CO2 sequestration in deep saline aquifers, contaminant transport in the subsurface, and oil or gas recovery, we often need to forecast flow patterns. Subsurface characterization is a critical and challenging step in flow forecasting. To characterize subsurface properties we establish a statistical description of those properties conditioned on existing dynamic and static data. A Markov Chain Monte Carlo (MCMC) algorithm is used in a Bayesian framework to reconstruct the spatial distribution of rock permeability and porosity. The MCMC algorithm requires repeatedly solving a set of nonlinear partial differential equations describing displacement of fluids in porous media for different values of permeability and porosity. The time needed to generate a reliable MCMC chain with this algorithm can be too long to be practical for flow forecasting. In this work we develop fast and effective computational methods for generating MCMC chains in the Bayesian framework for subsurface characterization. Our strategy consists of constructing a family of computationally inexpensive preconditioners based on simpler physics as well as on surrogate models, such that the number of fine-grid simulations in the generated MCMC chains is drastically reduced. In particular, we introduce a huff-puff technique as a screening step in a three-stage multi-physics MCMC algorithm to reduce the number of expensive final-stage simulations. The huff-puff technique enables a better characterization of the subsurface near wells. We assess the quality of the proposed multi-physics MCMC methods by considering Monte Carlo simulations for forecasting oil production in an oil reservoir.
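
    The screening idea behind the multi-stage algorithm can be sketched with standard two-stage (delayed-acceptance) Metropolis-Hastings: a cheap coarse-physics likelihood filters proposals, and only survivors are scored with the expensive fine-grid one. The 1-D setting, function names, and parameter values below are illustrative assumptions, not the paper's three-stage scheme:

```python
import math, random

def two_stage_mcmc(coarse_loglike, fine_loglike, proposal, x0, n_steps, seed=7):
    """Delayed-acceptance Metropolis-Hastings with a symmetric proposal.
    Stage 1 screens with the coarse model; stage 2 applies the correction
    that keeps the chain exactly invariant for the fine posterior."""
    random.seed(seed)
    x = x0
    lc, lf = coarse_loglike(x), fine_loglike(x)
    chain, fine_calls = [], 0
    for _ in range(n_steps):
        y = proposal(x)
        lc_y = coarse_loglike(y)
        # Stage 1: accept/reject using only the cheap coarse likelihood.
        if random.random() < math.exp(min(0.0, lc_y - lc)):
            # Stage 2: expensive fine-grid evaluation, run only for survivors.
            fine_calls += 1
            lf_y = fine_loglike(y)
            if random.random() < math.exp(min(0.0, (lf_y - lf) - (lc_y - lc))):
                x, lc, lf = y, lc_y, lf_y
        chain.append(x)
    return chain, fine_calls
```

The count of fine-model calls is strictly below the chain length, which is exactly the saving the preconditioning strategy targets.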

  9. Application of a Monte Carlo method for modeling debris flow run-out

    NASA Astrophysics Data System (ADS)

    Luna, B. Quan; Cepeda, J.; Stumpf, A.; van Westen, C. J.; Malet, J. P.; van Asch, T. W. J.

    2012-04-01

A probabilistic framework based on a Monte Carlo method for the modeling of debris flow hazards is presented. The framework is based on a dynamic model, which is combined with an explicit representation of the different parameter uncertainties. The probability distribution of these parameters is determined from an extensive database of back-calibrated past events collected from different authors. The uncertainty in these inputs can be simulated and used to increase confidence in certain extreme run-out distances. In the Monte Carlo procedure, the input parameters of the numerical models simulating propagation and stoppage of debris flows are randomly selected. Model runs are performed using the randomly generated input values. This allows estimating the probability density function of the output variables characterizing the destructive power of a debris flow (for instance depth, velocities and impact pressures) at any point along the path. To demonstrate the implementation of this method, a continuum two-dimensional dynamic simulation model that solves the conservation equations of mass and momentum was applied (MassMov2D). This general methodology facilitates the consistent combination of physical models with the available observations. The probabilistic model presented can be considered as a framework to accommodate any existing one- or two-dimensional dynamic model. The resulting probabilistic spatial model can serve as a basis for hazard mapping and spatial risk assessment. The outlined procedure provides a useful way for experts to produce hazard or risk maps for the typical case where historical records are either poorly documented or completely lacking, as well as to derive confidence limits on the proposed zoning.
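
    The framework reduces to a plain Monte Carlo loop over the dynamic model. The sketch below substitutes a toy run-out relation for MassMov2D, and the input distributions are assumptions, not the paper's calibrated database; only the loop structure (sample inputs, run model, tally exceedance) mirrors the method:

```python
import random

def runout_distance(volume, friction):
    """Toy run-out relation standing in for the dynamic model: longer
    run-out for larger, less frictional flows.  Purely illustrative."""
    return volume ** 0.4 / max(friction, 1e-6)

def exceedance_probability(threshold, n_runs=20000, seed=3):
    """Monte Carlo loop of the framework: draw uncertain inputs from
    assumed distributions, run the propagation model, and estimate the
    probability that run-out exceeds a given distance."""
    random.seed(seed)
    hits = 0
    for _ in range(n_runs):
        vol = random.lognormvariate(8.0, 0.5)   # flow volume, m^3 (assumed)
        phi = random.uniform(0.05, 0.25)        # bulk friction coeff. (assumed)
        if runout_distance(vol, phi) > threshold:
            hits += 1
    return hits / n_runs
```

Evaluating the loop for a grid of thresholds yields the exceedance curve from which hazard-zoning confidence limits are read off.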

  10. A First-Passage Kinetic Monte Carlo method for reaction–drift–diffusion processes

    SciTech Connect

    Mauro, Ava J.; Sigurdsson, Jon Karl; Shrake, Justin; Atzberger, Paul J.; Isaacson, Samuel A.

    2014-02-15

    Stochastic reaction–diffusion models are now a popular tool for studying physical systems in which both the explicit diffusion of molecules and noise in the chemical reaction process play important roles. The Smoluchowski diffusion-limited reaction model (SDLR) is one of several that have been used to study biological systems. Exact realizations of the underlying stochastic processes described by the SDLR model can be generated by the recently proposed First-Passage Kinetic Monte Carlo (FPKMC) method. This exactness relies on sampling analytical solutions to one and two-body diffusion equations in simplified protective domains. In this work we extend the FPKMC to allow for drift arising from fixed, background potentials. As the corresponding Fokker–Planck equations that describe the motion of each molecule can no longer be solved analytically, we develop a hybrid method that discretizes the protective domains. The discretization is chosen so that the drift–diffusion of each molecule within its protective domain is approximated by a continuous-time random walk on a lattice. New lattices are defined dynamically as the protective domains are updated, hence we will refer to our method as Dynamic Lattice FPKMC or DL-FPKMC. We focus primarily on the one-dimensional case in this manuscript, and demonstrate the numerical convergence and accuracy of our method in this case for both smooth and discontinuous potentials. We also present applications of our method, which illustrate the impact of drift on reaction kinetics.
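
    The drift-diffusion-on-a-lattice approximation at the heart of DL-FPKMC can be illustrated in one dimension with a constant force. The hop rates below reproduce drift v and diffusion D to leading order in the lattice spacing; the dynamic relocation of protective domains and the reaction handling of the full method are omitted, so this is a sketch of the discretization idea only:

```python
import math, random

def ctrw_drift_diffusion(x0, drift, diffusion, h, t_final, seed=11):
    """Continuous-time random walk on a 1-D lattice with hop rates
    D/h^2 +/- v/(2h), which recover drift-diffusion as h -> 0.
    Returns the walker position at time t_final."""
    random.seed(seed)
    r_plus = diffusion / h ** 2 + drift / (2.0 * h)
    r_minus = diffusion / h ** 2 - drift / (2.0 * h)
    assert r_minus > 0.0, "choose h < 2D/|v| so both rates stay positive"
    total = r_plus + r_minus
    x, t = x0, 0.0
    while True:
        t += random.expovariate(total)          # exponential waiting time
        if t > t_final:
            return x
        x += h if random.random() < r_plus / total else -h
```

Averaged over many walkers, the mean displacement approaches v * t and the variance 2 * D * t, which is the consistency check used below.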

  11. Experimental evaluation of validity of simplified Monte Carlo method in proton dose calculations

    NASA Astrophysics Data System (ADS)

    Kohno, Ryosuke; Takada, Yoshihisa; Sakae, Takeji; Terunuma, Toshiyuki; Matsumoto, Keiji; Nohtomi, Akihiro; Matsuda, Hiroyuki

    2003-05-01

Accurate calculation of dose distributions in treatment planning is important for proton therapy. Dose calculations in the body for treatment planning are converted to dose distributions in water, and the converted calculations are then generally evaluated against dose measurements in water. In this paper, proton dose calculations were performed for a phantom simulating a clinical heterogeneity. Dose calculations in the phantom obtained with two methods, the range-modulated pencil beam algorithm (RMPBA) and the simplified Monte Carlo (SMC) method, as well as the corresponding calculations converted to dose distributions in water, were verified experimentally through comparison with measured distributions. For the RMPBA, although the converted calculations in water agreed moderately well with the measured ones, the calculated results in the actual phantom showed large errors. This means that dose calculations in treatment planning should be evaluated against dose measurements made not in water but in a body with heterogeneity. On the other hand, the results calculated in the phantom, even by the less rigorous SMC method, reproduced the experimental ones well. This finding shows that actual dose distributions in the body should be predicted by the SMC method.

  12. Experimental evaluation of validity of simplified Monte Carlo method in proton dose calculations.

    PubMed

    Kohno, Ryosuke; Takada, Yoshihisa; Sakae, Takeji; Terunuma, Toshiyuki; Matsumoto, Keiji; Nohtomi, Akihiro; Matsuda, Hiroyuki

    2003-05-21

Accurate calculation of dose distributions in treatment planning is important for proton therapy. Dose calculations in the body for treatment planning are converted to dose distributions in water, and the converted calculations are then generally evaluated against dose measurements in water. In this paper, proton dose calculations were performed for a phantom simulating a clinical heterogeneity. Dose calculations in the phantom obtained with two methods, the range-modulated pencil beam algorithm (RMPBA) and the simplified Monte Carlo (SMC) method, as well as the corresponding calculations converted to dose distributions in water, were verified experimentally through comparison with measured distributions. For the RMPBA, although the converted calculations in water agreed moderately well with the measured ones, the calculated results in the actual phantom showed large errors. This means that dose calculations in treatment planning should be evaluated against dose measurements made not in water but in a body with heterogeneity. On the other hand, the results calculated in the phantom, even by the less rigorous SMC method, reproduced the experimental ones well. This finding shows that actual dose distributions in the body should be predicted by the SMC method. PMID:12812446

  13. Uniform-acceptance force-bias Monte Carlo method with time scale to study solid-state diffusion

    NASA Astrophysics Data System (ADS)

    Mees, Maarten J.; Pourtois, Geoffrey; Neyts, Erik C.; Thijsse, Barend J.; Stesmans, André

    2012-04-01

Monte Carlo (MC) methods have a long-standing history as partners of molecular dynamics (MD) to simulate the evolution of materials at the atomic scale. Among these techniques, the uniform-acceptance force-bias Monte Carlo (UFMC) method [G. Dereli, Mol. Simul. 8, 351 (1992)] has recently attracted attention [M. Timonova et al., Phys. Rev. B 81, 144107 (2010)] thanks to its apparent capacity of being able to simulate physical processes in a reduced number of iterations compared to classical MD methods. The origin of this efficiency remains, however, unclear. In this work we derive a UFMC method starting from basic thermodynamic principles, which leads to an intuitive and unambiguous formalism. The approach includes a statistically relevant time step per Monte Carlo iteration, showing a significant speed-up compared to MD simulations. This time-stamped force-bias Monte Carlo (tfMC) formalism is tested on both simple one-dimensional and three-dimensional systems. Both test cases give excellent results in agreement with analytical solutions and literature reports. The inclusion of a time scale, the simplicity of the method, and the enhancement of the time step compared to classical MD methods make this method very appealing for studying the dynamics of many-particle systems.

  14. The many-body Wigner Monte Carlo method for time-dependent ab-initio quantum simulations

    SciTech Connect

Sellier, J.M.; Dimov, I.

    2014-09-15

    The aim of ab-initio approaches is the simulation of many-body quantum systems from the first principles of quantum mechanics. These methods are traditionally based on the many-body Schrödinger equation which represents an incredible mathematical challenge. In this paper, we introduce the many-body Wigner Monte Carlo method in the context of distinguishable particles and in the absence of spin-dependent effects. Despite these restrictions, the method has several advantages. First of all, the Wigner formalism is intuitive, as it is based on the concept of a quasi-distribution function. Secondly, the Monte Carlo numerical approach allows scalability on parallel machines that is practically unachievable by means of other techniques based on finite difference or finite element methods. Finally, this method allows time-dependent ab-initio simulations of strongly correlated quantum systems. In order to validate our many-body Wigner Monte Carlo method, as a case study we simulate a relatively simple system consisting of two particles in several different situations. We first start from two non-interacting free Gaussian wave packets. We, then, proceed with the inclusion of an external potential barrier, and we conclude by simulating two entangled (i.e. correlated) particles. The results show how, in the case of negligible spin-dependent effects, the many-body Wigner Monte Carlo method provides an efficient and reliable tool to study the time-dependent evolution of quantum systems composed of distinguishable particles.

  15. The all particle method: Coupled neutron, photon, electron, charged particle Monte Carlo calculations

    SciTech Connect

    Cullen, D.E.; Perkins, S.T.; Plechaty, E.F.; Rathkopf, J.A.

    1988-06-01

At the present time a Monte Carlo transport computer code is being designed and implemented at Lawrence Livermore National Laboratory to include the transport of: neutrons, photons, electrons and light charged particles as well as the coupling between all species of particles, e.g., photon induced electron emission. Since this code is being designed to handle all particles, this approach is called the ''All Particle Method''. The code is designed as a test bed code to include as many different methods as possible (e.g., electron single or multiple scattering) and will be data driven to minimize the number of methods and models ''hard wired'' into the code. This approach will allow changes in the Livermore nuclear and atomic data bases, used to describe the interaction and production of particles, to directly control the execution of the program. In addition, this approach will allow the code to be used at various levels of complexity to balance computer running time against the accuracy requirements of specific applications. This paper describes the current design philosophy and status of the code. Since the treatment of neutrons and photons used by the All Particle Method code is more or less conventional, emphasis in this paper is placed on the treatment of electron, and to a lesser degree charged particle, transport. An example is presented in order to illustrate an application in which the ability to accurately transport electrons is important. 21 refs., 1 fig.

  16. Investigation of the uniqueness of the reverse Monte Carlo method: Studies on liquid water

    NASA Astrophysics Data System (ADS)

Jedlovszky, P.; Bakó, I.; Pálinkás, G.; Radnai, T.; Soper, A. K.

    1996-07-01

Reverse Monte Carlo (RMC) simulation of liquid water has been performed on the basis of experimental partial pair correlation functions. The resulting configurations were analyzed in various respects: the hydrogen bond angle distribution, three-body correlation and orientational correlation were calculated. The question of the uniqueness of the RMC method was also examined. In order to do this, two conventional computer simulations of liquid water with different potential models were performed, and the resulting pair correlation function sets were fitted by RMC simulations. The resulting configurations were then compared to the original configurations to study how well the RMC method can reproduce the original structure. We showed that the configurations produced by the RMC method are not uniquely related to the pair correlation functions even if the interactions in the original system were pairwise additive. Therefore the difference between the original simulated and the RMC configurations can be taken as a measure of the uncertainty of the RMC results on real water. We found that RMC produces a less ordered structure than the original one in various respects. However, the orientational correlations were reproduced rather successfully. The RMC method exaggerates the amount of close-packed patches in the structure, although these patches certainly exist in liquid water.
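
    The RMC fitting loop referenced throughout this abstract is compact enough to sketch. The version below matches a one-dimensional pair-distance histogram rather than a full g(r), and uses a zero-temperature acceptance rule; real RMC weights the misfit by experimental uncertainties in a Metropolis-like criterion:

```python
import random

def pair_histogram(xs, box, bins, rmax):
    """Histogram of minimum-image pair distances, a crude stand-in for g(r)."""
    h = [0] * bins
    n = len(xs)
    for i in range(n):
        for j in range(i + 1, n):
            d = abs(xs[i] - xs[j])
            d = min(d, box - d)
            if d < rmax:
                h[int(d / rmax * bins)] += 1
    return h

def chi2(h, target):
    return sum((a - b) ** 2 for a, b in zip(h, target))

def rmc(target, n, box, bins, rmax, n_steps=4000, sigma=0.05, seed=5):
    """Bare-bones Reverse Monte Carlo: random single-particle moves are
    accepted whenever they do not increase the chi^2 misfit between the
    current and target pair histograms."""
    random.seed(seed)
    xs = [random.uniform(0.0, box) for _ in range(n)]
    best = chi2(pair_histogram(xs, box, bins, rmax), target)
    for _ in range(n_steps):
        i = random.randrange(n)
        old = xs[i]
        xs[i] = (old + random.gauss(0.0, sigma)) % box
        new = chi2(pair_histogram(xs, box, bins, rmax), target)
        if new <= best:
            best = new
        else:
            xs[i] = old
    return xs, best

# Fit a random 20-particle configuration toward the pair histogram of a
# regular lattice (the "experimental" target here is synthetic).
target = pair_histogram([0.5 * i for i in range(20)], 10.0, 20, 5.0)
xs_fit, cost = rmc(target, 20, 10.0, 20, 5.0)
```

The non-uniqueness discussed in the abstract is visible even here: many particle arrangements share nearly identical pair histograms.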

  17. Exploration of the use of the kinetic Monte Carlo method in simulation of quantum dot growth

    NASA Astrophysics Data System (ADS)

    Ramsey, James J.

    2011-12-01

The use of Kinetic Monte Carlo (KMC) simulations in modeling growth of quantum dots (QDs) on semiconductor surfaces is explored. The underlying theory of the KMC method and the algorithms used in KMC implementations are explained, and the merits and shortcomings of previous KMC simulations of QD growth are discussed. Exploratory research has determined that, on the one hand, quantitative KMC simulation of InAs/GaAs QD growth would need to be off-lattice, but that, on the other hand, the available empirical interatomic potentials needed to make such off-lattice simulation tractable are not reliable for modeling semiconductor surfaces. A qualitative Kinetic Monte Carlo model is then developed for QD growth on a (001) substrate of tetrahedrally coordinated semiconductor material. It takes into account three different kinds of anisotropy: elastic anisotropy of the substrate, anisotropy in diffusion of isolated particles deposited onto the substrate (or single-particle diffusional anisotropy), and anisotropy in the interactions among nearest-neighboring deposited particles. Elastic effects are taken into account through a phenomenological repulsive ring model. The results of the qualitative simulation are as follows: (1) Effects of elastic anisotropy appear more pronounced in some experiments than others, with an anisotropic model needed to reproduce the order seen in some experimental results, while an isotropic model better explains the results of other experiments. (2) The single-particle diffusional anisotropy appears to explain the disorder in the arrangement of quantum dots that has been seen in several experiments. (3) Anisotropy in interactions among nearest-neighboring particles appears to explain the oblong shapes of quantum dots seen in experiments on growth of InGaAs dots on GaAs(001), and to partially explain the presence of chains of dots as well. It is concluded that while quantitative KMC simulation of quantum dot growth faces difficulties, qualitative KMC simulations can lend physical insight and lead to new questions that may be addressed by future research.
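
    The event-selection kernel shared by lattice KMC simulations of this kind is the rejection-free (BKL-style) step: pick an event in proportion to its rate, then advance the clock by an exponentially distributed waiting time. A minimal sketch of that kernel alone (the rate catalog for an actual growth model is the hard part and is not attempted here):

```python
import math, random

def kmc_step(rates, rng):
    """One rejection-free kinetic Monte Carlo step: choose event i with
    probability rate_i / R_total, then advance the clock by an exponential
    waiting time with mean 1 / R_total."""
    total = sum(rates)
    r = rng.random() * total
    for i, rate in enumerate(rates):
        r -= rate
        if r <= 0.0:
            break
    dt = -math.log(1.0 - rng.random()) / total   # Exp(R_total) sample
    return i, dt
```

In a growth simulation the `rates` list would hold deposition and hop rates for every site, updated after each executed event.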

  18. Calculation of photon pulse height distribution using deterministic and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Akhavan, Azadeh; Vosoughi, Naser

    2015-12-01

Radiation transport techniques used in radiation detection systems fall into one of two categories, namely probabilistic and deterministic. While probabilistic methods are typically used in pulse height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solving the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: a collided-components-of-the-scalar-flux algorithm, applied by iterating on the scattering source, and the ANISN deterministic computer code. The approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, the multi-group gamma cross-section library required for this numerical transport simulation is generated in an appropriate discrete form. Finally, photon pulse height distributions are indirectly calculated by deterministic methods and compare favorably with those from Monte Carlo based codes, namely MCNPX and FLUKA.
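
    The probabilistic side of the comparison, tallying a pulse height distribution by following individual histories, can be caricatured in a few lines. Every number below (interaction fractions, the single-scatter energy range for an assumed 662 keV source) is an illustrative assumption, not a real cross-section set, so only the tally structure is meaningful:

```python
import random

def pulse_height_distribution(n_histories=50000, e0=0.662, seed=2):
    """Toy Monte Carlo pulse-height tally: each photon history either
    photo-absorbs (full-energy peak), Compton-scatters once and escapes
    (partial deposit up to the Compton edge, ~0.478 MeV for 0.662 MeV),
    or passes through without interacting."""
    random.seed(seed)
    spectrum = [0] * 64
    for _ in range(n_histories):
        u = random.random()
        if u < 0.30:                             # photoelectric (assumed 30%)
            e_dep = e0
        elif u < 0.80:                           # single Compton then escape
            e_dep = random.uniform(0.0, 0.478)   # flat stand-in spectrum
        else:                                    # no interaction
            continue
        ch = min(int(e_dep / e0 * (len(spectrum) - 1)), len(spectrum) - 1)
        spectrum[ch] += 1
    return spectrum
```

The deterministic route of the paper arrives at the same observable by transporting the flux and folding it with a detector response, rather than by per-history tallies.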

  19. A statistical method for verifying mesh convergence in Monte Carlo simulations with application to fragmentation

    SciTech Connect

    Bishop, Joseph E.; Strack, O. E.

    2011-03-22

A novel method is presented for assessing the convergence of a sequence of statistical distributions generated by direct Monte Carlo sampling. The primary application is to assess the mesh or grid convergence, and possibly divergence, of stochastic outputs from non-linear continuum systems. Example systems include those from fluid or solid mechanics, particularly those with instabilities and sensitive dependence on initial conditions or system parameters. The convergence assessment is based on demonstrating empirically that a sequence of cumulative distribution functions converges in the L∞ norm. The effect of finite sample sizes is quantified using confidence levels from the Kolmogorov–Smirnov statistic. The statistical method is independent of the underlying distributions. The statistical method is demonstrated using two examples: (1) the logistic map in the chaotic regime, and (2) a fragmenting ductile ring modeled with an explicit-dynamics finite element code. In the fragmenting ring example the convergence of the distribution describing neck spacing is investigated. The initial yield strength is treated as a random field. Two different random fields are considered, one with spatial correlation and the other without. Both cases converged, albeit to different distributions. The case with spatial correlation exhibited a significantly higher convergence rate compared with the one without spatial correlation.
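
    The quantity at the center of this test is the two-sample Kolmogorov-Smirnov distance between empirical CDFs from successive mesh refinements. A self-contained sketch, with the standard large-sample critical value as an assumed acceptance threshold:

```python
import math

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the sup-norm (L-infinity)
    distance between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        x = min(a[i], b[j])
        while i < len(a) and a[i] == x:      # advance past ties together
            i += 1
        while j < len(b) and b[j] == x:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

def ks_critical(n_a, n_b, c_alpha=1.36):
    """Approximate two-sample critical value (c_alpha = 1.36 at the 5%
    level): distances below it are consistent with equal distributions."""
    return c_alpha * math.sqrt((n_a + n_b) / (n_a * n_b))
```

Declaring mesh convergence then amounts to checking that `ks_distance` between consecutive refinement levels falls below `ks_critical` for the available sample sizes.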

  20. Monte Carlo modeling of proton therapy installations: a global experimental method to validate secondary neutron dose calculations

    NASA Astrophysics Data System (ADS)

    Farah, J.; Martinetti, F.; Sayah, R.; Lacoste, V.; Donadille, L.; Trompier, F.; Nauraye, C.; De Marzi, L.; Vabre, I.; Delacroix, S.; Hérault, J.; Clairand, I.

    2014-06-01

Monte Carlo calculations are increasingly used to assess stray radiation dose to healthy organs of proton therapy patients and estimate the risk of secondary cancer. Among the secondary particles, neutrons are of primary concern due to their high relative biological effectiveness. The validation of Monte Carlo simulations for out-of-field neutron doses remains however a major challenge to the community. Therefore this work focused on developing a global experimental approach to test the reliability of the MCNPX models of two proton therapy installations operating at 75 and 178 MeV for ocular and intracranial tumor treatments, respectively. The method consists of comparing Monte Carlo calculations against experimental measurements of: (a) neutron spectrometry inside the treatment room, (b) neutron ambient dose equivalent at several points within the treatment room, (c) secondary organ-specific neutron doses inside the Rando-Alderson anthropomorphic phantom. Results have proven that Monte Carlo models correctly reproduce secondary neutrons within the two proton therapy treatment rooms. Noticeable differences between experimental measurements and simulations were nonetheless observed, especially at the highest beam energy. The study demonstrated the need for improved measurement tools, especially in the high neutron energy range, and for more accurate physical models and cross sections within the Monte Carlo code to correctly assess secondary neutron doses in proton therapy applications.

  1. Development of a method for calibrating in vivo measurement systems using magnetic resonance imaging and Monte Carlo computations

    SciTech Connect

    Mallett, M.W.; Poston, J.W.; Hickman, D.P.

    1995-06-01

    Research efforts towards developing a new method for calibrating in vivo measurement systems using magnetic resonance imaging (MRI) and Monte Carlo computations are discussed. The method employs the enhanced three-point Dixon technique for producing pure fat and pure water MR images of the human body. The MR images are used to define the geometry and composition of the scattering media for transport calculations using the general-purpose Monte Carlo code MCNP, Version 4. A sample case for developing the new method utilizing an adipose/muscle matrix is compared with laboratory measurements. Verification of the integrated MRI-MCNP method has been done for a specially designed phantom composed of fat, water, air, and a bone-substitute material. Implementation of the MRI-MCNP method is demonstrated for a low-energy, lung counting in vivo measurement system. Limitations and solutions regarding the presented method are discussed. 15 refs., 7 figs., 4 tabs.

  2. Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods

    SciTech Connect

    Godfrey, Andrew T; Gehin, Jess C; Bekar, Kursat B; Celik, Cihangir

    2014-01-01

The Consortium for Advanced Simulation of Light Water Reactors is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominately as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.

  3. Adjoint-based deviational Monte Carlo methods for phonon transport calculations

    NASA Astrophysics Data System (ADS)

    Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G.

    2015-06-01

    In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation for accelerating deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or by enabling formulations which allow more efficient use of computational resources, such as formulations which provide high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail.

  4. Cu-Au Alloys Using Monte Carlo Simulations and the BFS Method for Alloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Good, Brian; Ferrante, John

    1996-01-01

Semi-empirical methods have shown considerable promise in aiding in the calculation of many properties of materials. Materials used in engineering applications have defects that occur for various reasons, including processing. In this work we present the first application of the BFS method for alloys to describe some aspects of microstructure due to processing for the Cu-Au system (CuAu, CuAu3, and Cu3Au). We use finite temperature Monte Carlo calculations in order to show the influence of 'heat treatment' in the low-temperature phase of the alloy. Although relatively simple, the system has enough features that it can serve as a first test of the reliability of the technique. The main questions to be answered in this work relate to the existence of low temperature ordered structures for specific concentrations, for example, the ability to distinguish between rather similar phases for equiatomic alloys (CuAu I and CuAu II, the latter characterized by an antiphase boundary separating two identical phases).

  5. Systematic hierarchical coarse-graining with the inverse Monte Carlo method.

    PubMed

    Lyubartsev, Alexander P; Naômé, Aymeric; Vercauteren, Daniel P; Laaksonen, Aatto

    2015-12-28

    We outline our coarse-graining strategy for linking micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained from detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can then be used in simulations at a less detailed level after scaling up the size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730-3737 (1995)] to a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package, MagiC, has been developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (bacterial LiaR regulator bound to a 26-base-pair DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair potentials are used directly as look-up tables, but here we have fitted them to five Gaussians and a repulsive wall. Results show a stable association between DNA and the model protein, as well as similar position fluctuation profiles. PMID:26723605

  6. Systematic hierarchical coarse-graining with the inverse Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Lyubartsev, Alexander P.; Naômé, Aymeric; Vercauteren, Daniel P.; Laaksonen, Aatto

    2015-12-01

    We outline our coarse-graining strategy for linking micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained from detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can then be used in simulations at a less detailed level after scaling up the size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730-3737 (1995)] to a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package, MagiC, has been developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (bacterial LiaR regulator bound to a 26-base-pair DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair potentials are used directly as look-up tables, but here we have fitted them to five Gaussians and a repulsive wall. Results show a stable association between DNA and the model protein, as well as similar position fluctuation profiles.
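    As a minimal illustration of the inverse-procedure idea above (not the MagiC implementation, and using the simpler iterative Boltzmann inversion update rather than the full IMC correlation-matrix solve), one refinement step of a tabulated pair potential might look like:

```python
import math

def ibi_update(potential, g_current, g_target, kT=1.0, damping=0.5):
    """One coarse-graining iteration in the spirit of inverse methods:
    correct a tabulated pair potential using the mismatch between the
    current and target radial distribution functions,
        V_new(r) = V(r) + damping * kT * ln(g_current(r) / g_target(r)).
    (This is the simpler iterative Boltzmann inversion update; full IMC
    instead solves a linear system built from cross-correlations.)"""
    new = []
    for v, gc, gt in zip(potential, g_current, g_target):
        if gc > 0.0 and gt > 0.0:
            v = v + damping * kT * math.log(gc / gt)
        new.append(v)
    return new
```

When the simulated and target distribution functions agree, the update is a fixed point; where the model is over-structured (g_current > g_target) the potential becomes more repulsive.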

  7. Uncertainty quantification through the Monte Carlo method in a cloud computing setting

    NASA Astrophysics Data System (ADS)

    Cunha, Americo; Nasser, Rafael; Sampaio, Rubens; Lopes, Hélio; Breitman, Karin

    2014-05-01

    The Monte Carlo (MC) method is the most common technique used for uncertainty quantification, due to its simplicity and good statistical results. However, its computational cost is extremely high, and, in many cases, prohibitive. Fortunately, the MC algorithm is easily parallelizable, which allows its use in simulations where the computation of a single realization is very costly. This work presents a methodology for the parallelization of the MC method, in the context of cloud computing. This strategy is based on the MapReduce paradigm, and allows an efficient distribution of tasks in the cloud. The methodology is illustrated on a problem of structural dynamics that is subject to uncertainties. The results show that the technique is capable of producing good results concerning statistical moments of low order. It is shown that even a simple problem may require many realizations for convergence of histograms, which makes the cloud computing strategy very attractive (due to its high scalability and low cost). Additionally, the results regarding processing time and storage space usage allow one to qualify this new methodology as a solution for simulations that require a number of MC realizations beyond the standard.
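    A minimal sketch of the map/reduce decomposition described above, with a cheap stand-in for a costly realization (the function names and chunking scheme are illustrative, not the authors' implementation):

```python
import random
from functools import reduce

def mc_map(chunk_seed, n_samples):
    """Map task: run n_samples independent realizations and return the
    partial sums needed for low-order statistical moments."""
    rng = random.Random(chunk_seed)
    s1 = s2 = 0.0
    for _ in range(n_samples):
        x = rng.gauss(0.0, 1.0)   # stand-in for one costly realization
        y = x * x                 # quantity of interest
        s1 += y
        s2 += y * y
    return (n_samples, s1, s2)

def mc_reduce(a, b):
    """Reduce task: merge partial sums from two map tasks."""
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def parallel_mc(n_chunks=8, samples_per_chunk=5000):
    """Serial driver; in a cloud setting each mc_map call would be a task."""
    partials = [mc_map(seed, samples_per_chunk) for seed in range(n_chunks)]
    n, s1, s2 = reduce(mc_reduce, partials)
    mean = s1 / n
    var = s2 / n - mean * mean
    return mean, var
```

For a standard normal input, the quantity x² has mean 1 and variance 2, which the merged partial sums recover regardless of how the realizations are split across chunks.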

  8. Multi-level Monte Carlo Methods for Efficient Simulation of Coulomb Collisions

    NASA Astrophysics Data System (ADS)

    Ricketson, Lee

    2013-10-01

    We discuss the use of multi-level Monte Carlo (MLMC) schemes--originally introduced by Giles for financial applications--for the efficient simulation of Coulomb collisions in the Fokker-Planck limit. The scheme is based on a Langevin treatment of collisions, and reduces the computational cost of achieving an RMS error scaling as ɛ from O (ɛ-3) --for standard Langevin methods and binary collision algorithms--to the theoretically optimal scaling O (ɛ-2) for the Milstein discretization, and to O (ɛ-2 (logɛ)2) with the simpler Euler-Maruyama discretization. In practice, this speeds up simulation by factors up to 100. We summarize standard MLMC schemes, describe some tricks for achieving the optimal scaling, present results from a test problem, and discuss the method's range of applicability. This work was performed under the auspices of the U.S. DOE by the University of California, Los Angeles, under grant DE-FG02-05ER25710, and by LLNL under contract DE-AC52-07NA27344.
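    The telescoping sum behind MLMC can be sketched on a scalar SDE with coupled Euler-Maruyama paths (geometric Brownian motion is used here as a stand-in for the Langevin collision model; all parameters are illustrative):

```python
import random, math

def mlmc_gbm_mean(L=4, n_per_level=20000, s0=1.0, r=0.05, sigma=0.2, T=1.0, seed=1):
    """Multilevel Monte Carlo estimate of E[S_T] for geometric Brownian
    motion dS = r S dt + sigma S dW.  Level l uses step h_l = T / 2**l;
    the coarse path on each level reuses the fine path's Brownian
    increments, so the correction P_l - P_{l-1} has small variance and
    E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}] telescopes."""
    rng = random.Random(seed)
    estimate = 0.0
    for level in range(L + 1):
        nf = 2 ** level               # fine steps on this level
        hf = T / nf
        acc = 0.0
        for _ in range(n_per_level):
            sf = sc = s0
            pair = []
            for _ in range(nf):
                dw = rng.gauss(0.0, math.sqrt(hf))
                sf += r * sf * hf + sigma * sf * dw
                pair.append(dw)
                if level > 0 and len(pair) == 2:
                    # one coarse step consumes two fine increments
                    sc += r * sc * (2.0 * hf) + sigma * sc * (pair[0] + pair[1])
                    pair = []
            acc += sf - (sc if level > 0 else 0.0)
        estimate += acc / n_per_level
    return estimate
```

In a production MLMC code the sample counts per level would be chosen from the measured level variances rather than held fixed; the exact mean here is exp(r*T) ≈ 1.0513.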

  9. The direct simulation Monte Carlo method using unstructured adaptive mesh and its application

    NASA Astrophysics Data System (ADS)

    Wu, J.-S.; Tseng, K.-C.; Kuo, C.-H.

    2002-02-01

    The implementation of an adaptive mesh-embedding (h-refinement) scheme using unstructured grids in the two-dimensional direct simulation Monte Carlo (DSMC) method is reported. In this technique, local isotropic refinement is used to introduce new mesh where the local cell Knudsen number is less than some preset value. This simple scheme, however, has several severe consequences affecting the performance of the DSMC method. Thus, we have applied a technique to remove the hanging nodes by introducing anisotropic refinement in the interfacial cells between refined and non-refined cells. This remedy adds only a negligible amount of work, and it removes all the difficulties present in the original scheme. We have tested the proposed scheme for argon gas in a high-speed driven cavity flow. The results show an improved flow resolution as compared with that of the non-adapted mesh. Finally, we have used a triangular adaptive mesh to compute a near-continuum gas flow, a hypersonic flow over a cylinder. The results show fairly good agreement with previous studies. In summary, the proposed simple mesh adaptation is very useful in computing rarefied gas flows, which involve both complicated geometry and highly non-uniform density variations throughout the flow field.
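    A toy version of the refinement criterion described above (a 1-D flag-and-split pass; the paper's unstructured 2-D implementation with anisotropic interface cells is considerably more involved):

```python
def refine_cells(cells, kn_min=1.0):
    """Flag-and-split pass in the spirit of DSMC h-refinement: a cell is
    split isotropically whenever its local Knudsen number (mean free
    path / cell size) drops below a preset value.  Cells are
    (size, mean_free_path) pairs in this toy 1-D version."""
    out = []
    for size, mfp in cells:
        if mfp / size < kn_min:
            half = size / 2.0
            out.extend([(half, mfp), (half, mfp)])   # isotropic split
        else:
            out.append((size, mfp))
    return out
```

Repeating the pass until no cell is flagged yields a mesh whose cells all satisfy the local Knudsen criterion.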

  10. Time-dependent many-variable variational Monte Carlo method for nonequilibrium strongly correlated electron systems

    NASA Astrophysics Data System (ADS)

    Ido, Kota; Ohgoe, Takahiro; Imada, Masatoshi

    2015-12-01

    We develop a time-dependent variational Monte Carlo (t-VMC) method for quantum dynamics of strongly correlated electrons. The t-VMC method has been recently applied to bosonic systems and quantum spin systems. Here we propose a time-dependent trial wave function with many variational parameters, which is suitable for nonequilibrium strongly correlated electron systems. As the trial state, we adopt the generalized pair-product wave function with correlation factors and quantum-number projections. This trial wave function has been proven to accurately describe ground states of strongly correlated electron systems. To show the accuracy and efficiency of our trial wave function in nonequilibrium states as well, we present our benchmark results for relaxation dynamics during and after interaction quench protocols of fermionic Hubbard models. We find that our trial wave function well reproduces the exact results for the time evolution of physical quantities such as energy, momentum distribution, spin structure factor, and superconducting correlations. These results show that the t-VMC with our trial wave function offers an efficient and accurate way to study challenging problems of nonequilibrium dynamics in strongly correlated electron systems.

  11. Statistical modification analysis of helical planetary gears based on response surface method and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Guo, Fan

    2015-11-01

    The tooth modification technique is widely used in the gear industry to improve the meshing performance of gearings. However, few of the present studies on tooth modification consider the influence of inevitable random errors on gear modification effects. In order to investigate the effect of tooth modification amount variations on the dynamic behavior of a helical planetary gear train, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications for gear dynamics enhancement. By using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process to tooth modification amount variations, a statistical tooth modification model is developed, and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behavior does not obey the normal distribution rule even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
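    The combination of a fitted response surface with Monte Carlo propagation can be sketched generically (the quadratic surface below is a hypothetical stand-in for the paper's fitted DTE regression model, not its actual coefficients):

```python
import random

def rsm_monte_carlo(surface, mean, std, n=20000, seed=5):
    """Propagate a normally distributed design variable through a fitted
    response-surface model by plain Monte Carlo and return the mean,
    variance, and skewness of the response."""
    rng = random.Random(seed)
    ys = [surface(rng.gauss(mean, std)) for _ in range(n)]
    m1 = sum(ys) / n
    m2 = sum((y - m1) ** 2 for y in ys) / n
    m3 = sum((y - m1) ** 3 for y in ys) / n
    return m1, m2, m3 / m2 ** 1.5

# Hypothetical quadratic response surface.  Even with a normal input the
# response is visibly skewed, echoing the paper's observation that the
# output need not stay normally distributed.
quad = lambda x: 1.0 + 0.5 * x + 2.0 * x * x
```

For this surface with a standard normal input, the exact moments are mean 3, variance 8.25, and skewness about 2.8, so the non-normality shows up immediately in the third moment.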

  12. The estimation techniques of the time series correlations in nuclear reactor calculations by the Monte Carlo method using multiprocessor computers

    SciTech Connect

    Kalugin, M. A.; Oleynik, D. S.; Sukhino-Khomenko, E. A.

    2012-12-15

    The algorithms of estimation of the time series correlation functions in nuclear reactor calculations using the Monte Carlo method are described. Correlation functions are used for the estimation of biases, for calculations of variance taking into account the correlations between neutron generations, and for choosing skipped generations.
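    The correlation-aware variance estimate mentioned above can be sketched as follows (a generic autocorrelation estimator for a per-generation tally; the cutoff lag and series are illustrative):

```python
def autocorr(series, max_lag):
    """Sample autocorrelation function of a tally recorded once per
    neutron generation (e.g. per-generation k-eff estimates)."""
    n = len(series)
    mean = sum(series) / n
    c0 = sum((x - mean) ** 2 for x in series) / n
    rho = []
    for k in range(max_lag + 1):
        ck = sum((series[i] - mean) * (series[i + k] - mean)
                 for i in range(n - k)) / n
        rho.append(ck / c0)
    return rho

def corrected_variance_of_mean(series, max_lag):
    """Variance of the sample mean inflated (or deflated) by
    inter-generation correlations:
        var(mean) ~ (c0 / n) * (1 + 2 * sum_{k>=1} rho_k)."""
    n = len(series)
    mean = sum(series) / n
    c0 = sum((x - mean) ** 2 for x in series) / n
    rho = autocorr(series, max_lag)
    return (c0 / n) * (1.0 + 2.0 * sum(rho[1:]))
```

Positive generation-to-generation correlation inflates the naive c0/n estimate; the alternating test series below is anticorrelated, so the corrected variance comes out smaller instead.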

  13. Asteroid orbital inversion using a virtual-observation Markov-chain Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Muinonen, Karri; Granvik, Mikael; Oszkiewicz, Dagmara; Pieniluoma, Tuomo; Pentikäinen, Hanna

    2012-12-01

    A novel virtual-observation Markov-chain Monte Carlo method (MCMC) is presented for the asteroid orbital inverse problem posed by small to moderate numbers of astrometric observations. In the method, the orbital-element proposal probability density is chosen to mimic the convolution of the a posteriori density by itself: first, random errors are simulated for each observation, resulting in a set of virtual observations; second, least-squares orbital elements are derived for the virtual observations using the Nelder-Mead downhill simplex method; third, repeating the procedure gives a difference between two sets of what can be called virtual least-squares elements; and, fourth, the difference obtained constitutes a symmetric proposal in a random-walk Metropolis-Hastings algorithm, avoiding the explicit computation of the proposal density. In practice, the proposals are based on a large number of pre-computed sets of orbital elements. Virtual-observation MCMC is thus based on the characterization of the phase-space volume of solutions before the actual MCMC sampling. Virtual-observation MCMC is compared to MCMC orbital ranging, a random-walk Metropolis-Hastings algorithm based on sampling with the help of Cartesian positions at two observation dates, in the case of the near-Earth asteroid (85640) 1998 OX4. In the present preliminary comparison, the methods yield similar results for a 9.1-day observational time interval extracted from the full current astrometry of the asteroid. In the future, both of the methods are to be applied to the astrometric observations of the Gaia mission.
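    The four-step proposal construction can be illustrated on a deliberately trivial 1-D inverse problem in which the "element" is just the mean of noisy observations (the real method fits six orbital elements with the Nelder-Mead simplex; everything below is a toy stand-in):

```python
import random, math

def virtual_observation_mcmc(obs, sigma, n_bank=500, n_steps=4000, seed=7):
    """Toy 1-D virtual-observation MCMC: (1) build a bank of least-squares
    solutions fitted to virtual observations (data plus simulated errors);
    (2) propose Metropolis-Hastings moves as the difference of two bank
    entries, which is a symmetric proposal, so no proposal density is
    evaluated explicitly."""
    rng = random.Random(seed)

    def lsq(data):                   # least-squares "element" for this model
        return sum(data) / len(data)

    bank = []
    for _ in range(n_bank):
        virtual = [y + rng.gauss(0.0, sigma) for y in obs]
        bank.append(lsq(virtual))

    def log_post(mu):                # Gaussian likelihood, flat prior
        return -sum((y - mu) ** 2 for y in obs) / (2.0 * sigma ** 2)

    mu = lsq(obs)
    chain = []
    for _ in range(n_steps):
        prop = mu + (rng.choice(bank) - rng.choice(bank))
        if math.log(rng.random()) < log_post(prop) - log_post(mu):
            mu = prop
        chain.append(mu)
    return chain
```

Because bank differences mimic the spread of the a posteriori density, the proposal scale self-tunes to the problem, which is the point of the pre-computed characterization step.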

  14. Thermal studies of a superconducting current limiter using Monte-Carlo method

    NASA Astrophysics Data System (ADS)

    Lévêque, J.; Rezzoug, A.

    1999-07-01

    With fault current levels rising in electrical networks, current limiters are becoming very attractive. Superconducting limiters are based on the quasi-instantaneous intrinsic transition from the superconducting state to the normal resistive one. Without requiring fault detection or an external trigger, they reduce the stresses on electrical installations upstream of the fault. To avoid destroying the superconducting coil, its temperature must not exceed a certain value; the design of a superconducting coil therefore requires the simultaneous solution of an electrical equation and a thermal one. This paper deals with the solution of this coupled electrothermal problem by the Monte Carlo method, which allows us to calculate the evolution of the coil resistance as well as the limited current. Experimental results are compared with theoretical ones.

  15. Variational Monte Carlo Methods for Strongly Correlated Quantum Systems on Multileg Ladders

    NASA Astrophysics Data System (ADS)

    Block, Matthew S.

    Quantum mechanical systems of strongly interacting particles in two dimensions comprise a realm of condensed matter physics for which there remain many unanswered theoretical questions. In particular, the most formidable challenges may lie in cases where the ground states show no signs of ordering, break no symmetries, and support many gapless excitations. Such systems are known to exhibit exotic, disordered ground states that are notoriously difficult to study analytically using traditional perturbation techniques or numerically using the most recent methods (e.g., tensor network states) due to the large amount of spatial entanglement. Slave particle descriptions provide a glimmer of hope in the attempt to capture the fundamental, low-energy physics of these highly non-trivial phases of matter. To this end, this dissertation describes the construction and implementation of trial wave functions for use with variational Monte Carlo techniques that can easily model slave particle states. While these methods are extremely computationally tractable in two dimensions, we have applied them here to quasi-one-dimensional systems so that the results of other numerical techniques, such as the density matrix renormalization group, can be directly compared to those determined by the trial wave functions and so that exclusively one-dimensional analytic approaches, namely bosonization, can be employed. While the focus here is on the use of variational Monte Carlo, the sum of these different numerical and analytical tools has yielded a remarkable amount of insight into several exotic quantum ground states. In particular, the results of research on the d-wave Bose liquid phase, an uncondensed state of strongly correlated hard-core bosons living on the square lattice whose wave function exhibits a d-wave sign structure, and the spin Bose-metal phase, a spin-1/2, SU(2) invariant spin liquid of strongly correlated spins living on the triangular lattice, will be presented. 
Both phases support gapless excitations along surfaces in momentum space in two spatial dimensions and at incommensurate wave vectors in quasi-one dimension, where we have studied them on three- and four-leg ladders. An extension of this work to the study of d-wave correlated itinerant electrons will be discussed.

  16. Grid generation and adaptation for the Direct Simulation Monte Carlo Method. [for complex flows past wedges and cones

    NASA Technical Reports Server (NTRS)

    Olynick, David P.; Hassan, H. A.; Moss, James N.

    1988-01-01

    A grid generation and adaptation procedure based on the method of transfinite interpolation is incorporated into the Direct Simulation Monte Carlo Method of Bird. In addition, time is advanced based on a local criterion. The resulting procedure is used to calculate steady flows past wedges and cones. Five chemical species are considered. In general, the modifications result in a reduced computational effort. Moreover, preliminary results suggest that the simulation method is time step dependent if requirements on cell sizes are not met.

  17. Study on the dominant reaction path in nucleosynthesis during stellar evolution by means of the Monte Carlo method

    SciTech Connect

    Yamamoto, K.; Hashizume, K.; Wada, T.; Ohta, M.; Suda, T.; Nishimura, T.; Fujimoto, M. Y.; Kato, K.; Aikawa, M.

    2006-07-12

    We propose a Monte Carlo method to study the reaction paths in nucleosynthesis during stellar evolution. Determination of reaction paths is important for obtaining a physical picture of stellar evolution. The combination of network calculations and our method gives us a better understanding of this physical picture. We apply our method to the case of the helium shell flash model in an extremely metal-poor star.

  18. A reverse Monte Carlo method for deriving optical constants of solids from reflection electron energy-loss spectroscopy spectra

    SciTech Connect

    Da, B.; Sun, Y.; Ding, Z. J.; Mao, S. F.; Zhang, Z. M.; Jin, H.; Yoshikawa, H.; Tanuma, S.

    2013-06-07

    A reverse Monte Carlo (RMC) method is developed to obtain the energy loss function (ELF) and optical constants from a measured reflection electron energy-loss spectroscopy (REELS) spectrum by an iterative Monte Carlo (MC) simulation procedure. The method combines the simulated annealing method, i.e., a Markov chain Monte Carlo (MCMC) sampling of oscillator parameters, surface and bulk excitation weighting factors, and band gap energy, with a conventional MC simulation of electron interaction with solids, which acts as a single step of MCMC sampling in this RMC method. To examine the reliability of this method, we have verified that the output data of the dielectric function are essentially independent of the initial values of the trial parameters, which is a basic property of a MCMC method. The optical constants derived for SiO2 in the energy loss range of 8-90 eV are in good agreement with other available data, and relevant bulk ELFs are checked by oscillator strength-sum and perfect-screening-sum rules. Our results show that the dielectric function can be obtained by the RMC method even with a wide range of initial trial parameters. The RMC method is thus a general and effective method for determining the optical properties of solids from REELS measurements.

  19. Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wada, Takao

    2014-07-01

    Particle motion under thermophoretic force is simulated using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The difficulty in thermophoresis simulation is the computation time, which is proportional to the collision frequency; the time step interval becomes very small when the motion of a large particle is considered. Thermophoretic forces calculated by the DSMC method have been reported previously, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model, which computes the collision between a particle and multiple molecules in a single collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, where the collision weight factor is the number of molecules colliding with the particle in one collision event. This weight factor permits a large time step interval, about a million times longer than the conventional DSMC time step when the particle size is 1 μm; the computation time is therefore reduced by a factor of about one million. We simulate graphite particle motion under thermophoretic force with DSMC-Neutrals (Particle-PLUS neutral module), commercial software adopting the DSMC method, extended with the above collision weight factor. The particle is a sphere with a size of 1 μm, and particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results; note that Gallis' analytical result in the continuum limit coincides with Waldmann's result.

  20. Quantifying uncertainties in pollutant mapping studies using the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Tan, Yi; Robinson, Allen L.; Presto, Albert A.

    2014-12-01

    Routine air monitoring provides accurate measurements of annual average concentrations of air pollutants, but the low density of monitoring sites limits its capability to capture intra-urban variation. Pollutant mapping studies measure air pollutants at a large number of sites during short periods. However, their short duration can cause substantial uncertainty in reproducing annual mean concentrations. In order to quantify this uncertainty for existing sampling strategies and investigate methods to improve future studies, we conducted Monte Carlo experiments with nationwide monitoring data from the EPA Air Quality System. Typical fixed sampling designs have much larger uncertainties than previously assumed, and produce accurate estimates of annual average pollution concentrations only approximately 80% of the time. Mobile sampling has difficulty estimating long-term exposures for individual sites, but performs better for site groups. The accuracy and the precision of a given design decrease as data variation increases, indicating challenges at sites intermittently impacted by local sources such as traffic. Correcting measurements with reference sites does not completely remove the uncertainty associated with short-duration sampling. Using reference sites with the addition method can better account for temporal variations than the multiplication method. We propose feasible methods for future mapping studies to reduce uncertainties in estimating annual mean concentrations. Future fixed sampling studies should conduct two separate week-long sampling periods in all four seasons. Mobile sampling studies should estimate annual mean concentrations for exposure groups of five or more sites. Fixed and mobile sampling designs have comparable probabilities of correctly ordering two sites, so they may have similar capabilities for predicting pollutant spatial variations. Simulated sampling designs have large uncertainties in reproducing seasonal and diurnal variations at individual sites, but are capable of predicting these variations for exposure groups.
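    The Monte Carlo experiment design can be sketched as follows (the synthetic daily series and success criterion below are illustrative; the study itself resampled EPA Air Quality System records):

```python
import random

def short_term_sampling_success(daily, n_trials=2000, weeks=4, tol=0.25, seed=3):
    """Monte Carlo experiment in the spirit of the study: repeatedly pick
    `weeks` separate 1-week windows from a year of daily concentrations,
    average them, and count how often the estimate falls within a
    fractional error `tol` of the true annual mean."""
    rng = random.Random(seed)
    true_mean = sum(daily) / len(daily)
    hits = 0
    for _ in range(n_trials):
        samples = []
        for _ in range(weeks):
            start = rng.randrange(len(daily) - 7)
            samples.extend(daily[start:start + 7])
        est = sum(samples) / len(samples)
        if abs(est - true_mean) <= tol * true_mean:
            hits += 1
    return hits / n_trials
```

A constant series is always recovered, while a series dominated by one high-concentration week (a stand-in for an intermittent local source) makes short-duration sampling fail most of the time.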

  1. On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems

    SciTech Connect

    Walsh, Jon

    2015-08-31

    The presentation summarizes work performed over summer 2015 related to Monte Carlo simulations. A flexible probability table interpolation scheme has been implemented and tested with results comparing favorably to the continuous phase-space on-the-fly approach.

  2. HRMC_1.1: Hybrid Reverse Monte Carlo method with silicon and carbon potentials

    NASA Astrophysics Data System (ADS)

    Opletal, G.; Petersen, T. C.; O'Malley, B.; Snook, I. K.; McCulloch, D. G.; Yarovsky, I.

    2011-02-01

    The Hybrid Reverse Monte Carlo (HRMC) code models the atomic structure of materials via a combination of constraints, including experimental diffraction data and an empirical energy potential. This energy constraint takes the form of either the Environment Dependent Interatomic Potential (EDIP), for carbon and silicon, or the original and modified Stillinger-Weber potentials, applicable to silicon. In this version, an update is made to correct an error in the EDIP carbon energy calculation routine. New version program summary Program title: HRMC version 1.1 Catalogue identifier: AEAO_v1_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAO_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 36 991 No. of bytes in distributed program, including test data, etc.: 907 800 Distribution format: tar.gz Programming language: FORTRAN 77 Computer: Any computer capable of running executables produced by the g77 Fortran compiler. Operating system: Unix, Windows RAM: Depends on the type of empirical potential used, the number of atoms, and which constraints are employed. Classification: 7.7 Catalogue identifier of previous version: AEAO_v1_0 Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 777 Does the new version supersede the previous version?: Yes Nature of problem: Atomic modelling using empirical potentials and experimental data. Solution method: Monte Carlo Reasons for new version: An error in a term associated with the calculation of energies using the EDIP carbon potential, which resulted in incorrect energies. Summary of revisions: Fix to correct brackets in the two-body part of the EDIP carbon potential routine.
Additional comments: The code is not standard FORTRAN 77 but includes some additional features, and therefore generates errors when compiled using the Nag95 compiler. It does compile successfully with the GNU g77 compiler (http://www.gnu.org/software/fortran/fortran.html). Running time: Depends on the type of empirical potential used, the number of atoms, and which constraints are employed. The test included in the distribution took 37 minutes on a DEC Alpha PC.

  3. Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods

    SciTech Connect

    Narita, Y.; Eberl, S.; Nakamura, T.

    1996-12-31

    Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter, and scatter plus primary) were simulated for 99mTc and 201Tl for numerical chest phantoms. Data were reconstructed with an ordered-subset ML-EM algorithm including attenuation correction using the transmission data. In the chest phantom simulation, TDCS provided better S/N than TEW, and better accuracy, i.e., 1.0% vs -7.2% in the myocardium, and -3.7% vs -30.1% in the ventricular chamber for 99mTc with TDCS and TEW, respectively. For 201Tl, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.

  4. IR imaging simulation and analysis for aeroengine exhaust system based on reverse Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Chen, Shiguo; Chen, Lihai; Mo, Dongla; Shi, Jingcheng

    2014-11-01

    The IR radiation characteristics of an aeroengine are an important basis for the IR stealth design and anti-stealth detection of aircraft. With the development of IR imaging sensor technology, the importance of aircraft IR stealth increases. An effort is presented to explore target IR radiation imaging simulation based on the Reverse Monte Carlo Method (RMCM) combined with commercial CFD software. Flow and IR radiation characteristics of an aeroengine exhaust system are investigated by developing a full-size geometry model based on the actual parameters, using a structured mesh integrating the flow and IR computations, taking the engine performance parameters as the inlet boundary conditions of the mixer section, and constructing a numerical simulation model of the IR radiation characteristics of the engine exhaust system based on RMCM. With the above models, the IR radiation characteristics of the aeroengine exhaust system are given, focusing on the typical detection band of IR spectral radiance imaging at an azimuth of 20°. The results show that: (1) at small azimuth angles, the IR radiation comes mainly from the center cone among all the hot parts; near an azimuth of 15°, the mixer makes the biggest radiation contribution, while the center cone, turbine, and flame stabilizer contribute comparably; (2) the main radiation components and their spatial distributions differ across the spectrum, with CO2 absorbing and emitting strongly at 4.18, 4.33, and 4.45 μm, and H2O at 3.0 and 5.0 μm.

  5. Performance characterization of multicanonical Monte Carlo method applied to polarization mode dispersion

    NASA Astrophysics Data System (ADS)

    Yamamoto, Alexandre Y.; Oliveira, Aurenice M.; Lima, Ivan T.

    2014-05-01

    The numerical accuracy of the results obtained using the multicanonical Monte Carlo (MMC) algorithm is strongly dependent on the choice of the step size, which is the range of the MMC perturbation from one sample to the next. The proper choice of the MMC step size leads to much faster statistical convergence of the algorithm for the calculation of rare events. One relevant application of this method is the calculation of the probability of the bins in the tail of the discretized probability density function of the differential group delay between the principal states of polarization due to polarization mode dispersion. We observed that the optimum MMC performance is strongly correlated with the inflection point of the actual transition rate from one bin to the next. We also observed that the optimum step size does not correspond to any specific value of the acceptance rate of the transitions in MMC. The results of this study can be applied to the improvement of the performance of MMC applied to the calculation of other rare events of interest in optical communications, such as the bit error ratio and pattern dependence in optical fiber systems with coherent receivers.
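    The step-size sensitivity discussed above is easiest to see in a plain random-walk Metropolis sampler (shown here on a Gaussian target rather than a full multicanonical iteration; parameters are illustrative):

```python
import random, math

def metropolis_acceptance(step, n_steps=20000, seed=11):
    """Acceptance rate of a random-walk Metropolis sampler on a standard
    Gaussian target as a function of the perturbation step size.  A
    stand-in for the MMC step-size tuning discussed above: too-small
    steps accept almost everything but explore slowly, while too-large
    steps are rarely accepted."""
    rng = random.Random(seed)
    x, accepted = 0.0, 0
    for _ in range(n_steps):
        prop = x + rng.uniform(-step, step)
        delta = 0.5 * (x * x - prop * prop)   # log density ratio
        if delta >= 0.0 or rng.random() < math.exp(delta):
            x = prop
            accepted += 1
    return accepted / n_steps
```

Neither extreme of the acceptance rate is optimal, which mirrors the paper's observation that the best MMC step size is not tied to any particular acceptance-rate value.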

  6. Pfaffian pairing and backflow wavefunctions for electronic structure quantum Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Bajdich, M.; Mitas, L.; Wagner, L. K.; Schmidt, K. E.

    2008-03-01

    We investigate pfaffian trial wavefunctions with singlet and triplet pair orbitals by quantum Monte Carlo methods. We present mathematical identities and the key algebraic properties necessary for efficient evaluation of pfaffians. Following upon our previous study [Bajdich , Phys. Rev. Lett. 96, 130201 (2006)], we explore the possibilities of expanding the wavefunction in linear combinations of pfaffians. We observe that molecular systems require much larger expansions than atomic systems and linear combinations of a few pfaffians lead to rather small gains in correlation energy. We also test the wavefunction based on fully antisymmetrized product of independent pair orbitals. Despite its seemingly large variational potential, we do not observe additional gains in correlation energy. We find that pfaffians lead to substantial improvements in fermion nodes when compared to Hartree-Fock wavefunctions and exhibit the minimal number of two nodal domains in agreement with recent results on fermion nodes topology. We analyze the nodal structure differences of Hartree-Fock, pfaffian, and essentially exact large-scale configuration interaction wavefunctions. Finally, we combine the recently proposed form of backflow correlations [Drummond , J. Phys. Chem. 124, 22401 (2006); Rios , Phys. Rev. E. 74, 066701 (2006)] with both determinantal and pfaffian based wavefunctions.

  7. Extended canonical Monte Carlo methods: Improving accuracy of microcanonical calculations using a reweighting technique.

    PubMed

    Velazquez, L; Castro-Palacio, J C

    2015-03-01

    Velazquez and Curilef [J. Stat. Mech. (2010); J. Stat. Mech. (2010)] have proposed a methodology to extend Monte Carlo algorithms that are based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically, the well-known multihistogram method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989)]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L×L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site qL during the temperature-driven phase transition of this model, whose size dependence seems to follow a power law qL(L)∝(1/L)z with exponent z≃0.26±0.02. We discuss the compatibility of these results with the continuous character of the temperature-driven phase transition when L→+∞. PMID:25871247
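    The reweighting idea in its single-histogram form — using samples drawn at one inverse temperature to estimate averages at another — can be sketched on a toy system of independent two-level units with unit level spacing (an illustrative stand-in for the Potts model; all parameters are assumed):

```python
import math
import random
from collections import Counter

def sample_energies(n_spins, beta, n_samples, rng):
    """Draw energies E = number of excited units among n_spins independent
    two-level units (level spacing 1) at inverse temperature beta."""
    p_up = math.exp(-beta) / (1.0 + math.exp(-beta))
    return [sum(rng.random() < p_up for _ in range(n_spins))
            for _ in range(n_samples)]

def reweight_mean_energy(energies, beta0, beta1):
    """Single-histogram (Ferrenberg-Swendsen style) reweighting of <E> from
    sampling temperature beta0 to target temperature beta1."""
    hist = Counter(energies)
    shift = min(hist)                     # shift exponents for numerical stability
    num = den = 0.0
    for e, h in hist.items():
        w = h * math.exp(-(beta1 - beta0) * (e - shift))
        num += e * w
        den += w
    return num / den

rng = random.Random(42)
E = sample_energies(20, 0.5, 100_000, rng)
est = reweight_mean_energy(E, 0.5, 1.0)
exact = 20 / (1.0 + math.exp(1.0))   # independent units: <E> = N / (1 + e^beta)
```

The reweighted estimate tracks the exact mean energy at the target temperature, whereas the unweighted sample mean stays pinned at the sampling temperature.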

  8. Improving Bayesian analysis for LISA Pathfinder using an efficient Markov Chain Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Ferraioli, Luigi; Porter, Edward K.; Armano, Michele; Audley, Heather; Congedo, Giuseppe; Diepholz, Ingo; Gibert, Ferran; Hewitson, Martin; Hueller, Mauro; Karnesis, Nikolaos; Korsakova, Natalia; Nofrarias, Miquel; Plagnol, Eric; Vitale, Stefano

    2014-02-01

    We present a parameter estimation procedure based on a Bayesian framework by applying a Markov Chain Monte Carlo algorithm to the calibration of the dynamical parameters of the LISA Pathfinder satellite. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment in order to ensure an effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm with an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy but with one moving in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher Information matrix. The algorithm proposing jumps in the eigen-space of the Fisher Information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to LISA Pathfinder data. For this experiment, we return parameter values that are all within ˜1 σ of the injected values. When we analyse the accuracy of our parameter estimation in terms of the effect they have on the force-per-unit of mass noise, we find that the induced errors are three orders of magnitude less than the expected experimental uncertainty in the power spectral density.
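    A minimal sketch of the two-stage idea — a Metropolis-Hastings chain whose temperature ramps down during burn-in before sampling at T = 1 — on a toy two-parameter Gaussian posterior. The target, schedule, step size, and starting point below are illustrative choices, not the LISA Pathfinder model:

```python
import math
import random

def log_target(x, y):
    """Toy log-posterior: independent unit Gaussians centred at (1.0, -2.0)."""
    return -0.5 * ((x - 1.0) ** 2 + (y + 2.0) ** 2)

def annealed_mh(n_burn=5000, n_keep=20000, step=0.8, t0=5.0, seed=7):
    """Metropolis-Hastings with a two-stage schedule: the temperature ramps
    from t0 down to 1 during burn-in (heating stage), ensuring broad initial
    exploration, then samples are collected at T = 1."""
    rng = random.Random(seed)
    x, y = 5.0, 5.0                      # deliberately poor starting point
    samples, accepted = [], 0
    for i in range(n_burn + n_keep):
        T = max(1.0, t0 * (1.0 - i / n_burn)) if i < n_burn else 1.0
        xp, yp = x + rng.gauss(0.0, step), y + rng.gauss(0.0, step)
        d = (log_target(xp, yp) - log_target(x, y)) / T
        if rng.random() < math.exp(min(0.0, d)):   # Metropolis acceptance
            x, y = xp, yp
            accepted += 1
        if i >= n_burn:
            samples.append((x, y))
    return samples, accepted / (n_burn + n_keep)
```

The paper's second variant would replace the coordinate-wise Gaussian proposal with jumps along the eigenvectors of the Fisher information matrix; the annealing stage is the same in both.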

  9. Development of Monte Carlo Methods for Investigating Migration of Radionuclides in Contaminated Environments

    SciTech Connect

    Avrorin, E. N.; Tsvetokhin, A. G.; Xenofontov, A. I.; Kourbatova, E. I.; Regens, J. L.

    2002-02-26

    This paper presents the results of an ongoing research and development project conducted by Russian institutions in Moscow and Snezhinsk, supported by the International Science and Technology Center (ISTC), in collaboration with the University of Oklahoma. The joint study focuses on developing and applying analytical tools to effectively characterize contaminant transport and assess risks associated with migration of radionuclides and heavy metals in the water column and sediments of large reservoirs or lakes. The analysis centers on the development and evaluation of Monte Carlo-based theoretical-computational models that describe the distribution of radioactive wastewater within a reservoir, characterize the associated radiation field, and estimate doses received from radiation exposure; the Monte Carlo methods are applied to increase the precision of results and to reduce computing time for estimating the characteristics of the radiation field emitted from the contaminated wastewater layer. The calculated migration of radionuclides is used to estimate distributions of radiation doses that could be received by a population exposed to radionuclides from specified volumes of discrete aqueous sources. The calculated dose distributions can be used to support near-term and long-term decisions about priorities for environmental remediation and stewardship.

  10. Monte Carlo analysis of thermochromatography as a fast separation method for nuclear forensics

    SciTech Connect

    Hall, Howard L

    2012-01-01

    Nuclear forensic science has become increasingly important for global nuclear security, and enhancing the timeliness of forensic analysis has been established as an important objective in the field. New, faster techniques must be developed to meet this objective. Current approaches for the analysis of minor actinides, fission products, and fuel-specific materials require time-consuming chemical separation coupled with measurement through either nuclear counting or mass spectrometry. These very sensitive measurement techniques can be hindered by impurities or incomplete separation in even the most painstaking chemical separations. High-temperature gas-phase separation or thermochromatography has been used in the past for the rapid separations in the study of newly created elements and as a basis for chemical classification of that element. This work examines the potential for rapid separation of gaseous species to be applied in nuclear forensic investigations. Monte Carlo modeling has been used to evaluate the potential utility of the thermochromatographic separation method, albeit this assessment is necessarily limited due to the lack of available experimental data for validation.

  11. In vivo simulation environment for fluorescence molecular tomography using Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Zhang, Yizhai; Xu, Qiong; Li, Jin; Tang, Shaojie; Zhang, Xin

    2008-12-01

    Optical sensing of specific molecular targets using near-infrared light has been recognized as a crucial technology with the potential to change medicine's future. Fluorescence Molecular Tomography (FMT) is among the newest technologies in optical sensing. It uses near-infrared light (600-900 nm) as the instrument and fluorochromes as probes to perform noncontact three-dimensional imaging of live molecular targets and to exhibit molecular processes in vivo. In order to solve the forward-simulation problem in FMT, this paper introduces a new simulation model. The model uses the Monte Carlo method and is implemented in the C++ programming language. Its accuracy has been verified by comparison with analytic solutions and with MOSE from the University of Iowa and the Chinese Academy of Sciences. The main features of the model are that it can simulate both bioluminescent imaging and FMT, perform analytic calculation, and support more than one source and CCD detector simultaneously. It can generate sufficient and proper data in preparation for the study of fluorescence molecular tomography.
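    The forward problem such a simulator solves is a photon random walk with exponential free paths and an absorption/scattering albedo. Below is a minimal slab-geometry sketch; the coefficients and geometry are illustrative, and a real FMT code additionally models anisotropic scattering, refractive boundaries, and fluorescence re-emission:

```python
import math
import random

def transmit_fraction(mu_a, mu_s, thickness, n_photons=50_000, seed=3):
    """Monte Carlo photon transport through a slab with isotropic scattering.

    Each photon takes exponentially distributed free paths (rate mu_a + mu_s);
    at each interaction it is absorbed with probability mu_a / (mu_a + mu_s),
    otherwise it scatters into a new isotropic direction cosine."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    transmitted = 0
    for _ in range(n_photons):
        z, uz = 0.0, 1.0                     # depth and direction cosine
        while True:
            z += uz * (-math.log(1.0 - rng.random()) / mu_t)  # free path ~ Exp(mu_t)
            if z >= thickness:
                transmitted += 1             # photon exits the far side
                break
            if z < 0.0 or rng.random() >= albedo:
                break                        # escaped backwards, or absorbed
            uz = 2.0 * rng.random() - 1.0    # isotropic scattering (uniform cos theta)
    return transmitted / n_photons
```

With scattering switched off, the result reduces to the Beer-Lambert law exp(-mu_a * d), which is a convenient analytic check of the kind the paper uses for validation.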

  12. Correlation between vacancies and magnetoresistance changes in FM manganites using the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Agudelo-Giraldo, J. D.; Restrepo-Parra, E.; Restrepo, J.

    2015-10-01

    The Metropolis algorithm and the classical Heisenberg approximation were implemented by the Monte Carlo method to design a computational approach to the magnetization and resistivity of La2/3Ca1/3MnO3, which depends on the Mn ion vacancies as the external magnetic field increases. This compound is ferromagnetic, and it exhibits the colossal magnetoresistance (CMR) effect. The monolayer was built with L×L×d dimensions, and it had L=30 umc (units of magnetic cells) for its dimension in the x-y plane and was d=12 umc in thickness. The Hamiltonian that was used contains interactions between first neighbors, the magnetocrystalline anisotropy effect and the external applied magnetic field response. The system that was considered contains mixed-valence bonds: Mn3+eg'-O-Mn3+eg, Mn3+eg-O-Mn4+d3 and Mn3+eg'-O-Mn4+d3. The vacancies were placed randomly in the sample, replacing any type of Mn ion. The main result shows that without vacancies, the transitions TC (Curie temperature) and TMI (metal-insulator temperature) are similar, whereas with the increase in the vacancy percentage, TMI presented lower values than TC. This situation is caused by the competition between the external magnetic field, the vacancy percentage and the magnetocrystalline anisotropy, which favors the magnetoresistive effect at temperatures below TMI. Resistivity loops were also observed, which shows a direct correlation with the hysteresis loops of magnetization at temperatures below TC.
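    The core of such a simulation — single-site Metropolis updates of classical Heisenberg spins — can be sketched as follows. This toy version keeps only nearest-neighbour exchange on a cubic lattice, omitting the paper's anisotropy, applied-field, mixed-valence, and vacancy terms:

```python
import math
import random

def heisenberg_metropolis(L=4, J=1.0, T=0.1, sweeps=400, seed=5):
    """Single-site Metropolis for a classical Heisenberg ferromagnet on an
    L x L x L periodic lattice (unit spins, nearest-neighbour exchange only).
    Returns the magnetization per spin after `sweeps` lattice sweeps."""
    rng = random.Random(seed)
    def rand_spin():
        # uniform random direction on the unit sphere
        z = 2.0 * rng.random() - 1.0
        phi = 2.0 * math.pi * rng.random()
        r = math.sqrt(1.0 - z * z)
        return (r * math.cos(phi), r * math.sin(phi), z)
    spins = [[[rand_spin() for _ in range(L)] for _ in range(L)] for _ in range(L)]
    def neighbour_sum(i, j, k):
        s = [0.0, 0.0, 0.0]
        for di, dj, dk in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = spins[(i+di) % L][(j+dj) % L][(k+dk) % L]
            s = [a + b for a, b in zip(s, n)]
        return s
    for _ in range(sweeps):
        for i in range(L):
            for j in range(L):
                for k in range(L):
                    old, new = spins[i][j][k], rand_spin()
                    h = neighbour_sum(i, j, k)
                    # energy change for E = -J * sum of s_i . s_j over bonds
                    dE = -J * sum((n - o) * hh for n, o, hh in zip(new, old, h))
                    if dE <= 0 or rng.random() < math.exp(-dE / T):
                        spins[i][j][k] = new
    m = [sum(spins[i][j][k][c] for i in range(L) for j in range(L) for k in range(L))
         for c in range(3)]
    return math.sqrt(sum(x * x for x in m)) / L ** 3
```

Running this at temperatures below and above the ordering transition reproduces the qualitative ferromagnet-to-paramagnet behaviour that underlies the paper's TC analysis.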

  13. Analysis of probabilistic short run marginal cost using Monte Carlo method

    SciTech Connect

    Gutierrez-Alcaraz, G.; Navarrete, N.; Tovar-Hernandez, J.H.; Fuerte-Esquivel, C.R.; Mota-Palomino, R.

    1999-11-01

    The structure of the Electricity Supply Industry is undergoing dramatic changes to provide new service options. The main aim of this restructuring is to allow generating units the freedom to sell electricity to anybody they wish at a price determined by market forces. Several methodologies have been proposed to quantify the different costs associated with the new services offered by electrical utilities operating under a deregulated market. The new wave of pricing is heavily influenced by economic principles designed to price products to elastic market segments on the basis of marginal costs; hence, spot pricing provides the economic structure for many of the new services. At the same time, pricing is influenced by the uncertainties associated with the electric-system state variables that define its operating point. In this paper, nodal probabilistic short-run marginal costs are calculated, considering the load, the production cost, and the availability of generators as random variables. The effect of the electrical network is evaluated using linearized models. A thermal economic dispatch is used to simulate each operational condition generated by the Monte Carlo method on a small fictitious power system in order to assess the effect of the random variables on energy trading. This is first carried out by introducing each random variable one by one, and finally by considering the random interaction of all of them.
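    The sampling loop can be sketched as: draw the random load and unit availabilities, dispatch in merit order, and record which unit sets the marginal cost. All unit data below are illustrative, and the paper's network and dispatch models are reduced here to a single bus:

```python
import random

def marginal_cost_samples(n=20_000, seed=11):
    """Monte Carlo sampling of the short-run marginal cost of a toy two-unit
    system. Unit data (cost $/MWh, capacity MW, availability) are illustrative."""
    units = [(12.0, 400.0, 0.95),    # (marginal cost, capacity, availability)
             (25.0, 300.0, 0.90)]
    voll = 1000.0                    # price when demand cannot be served
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        load = max(0.0, rng.gauss(450.0, 60.0))   # random demand draw
        mc = voll
        for cost, cap, avail in sorted(units):    # merit-order dispatch
            if rng.random() < avail:              # unit available this draw?
                load -= cap
                if load <= 0:
                    mc = cost                     # this unit sets the price
                    break
        out.append(mc)
    return out
```

The empirical distribution of the returned samples is the probabilistic short-run marginal cost; its spread reflects the interaction of the load, cost, and availability uncertainties the paper studies.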

  14. Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Kramer, Richard

    2011-08-01

    Health risks attributable to the exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to organ and tissues of interest. The FASH2 (Female Adult meSH) and the MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and the MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.

  15. Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis

    SciTech Connect

    Heo, W.; Kim, W.; Kim, Y.; Yun, S.

    2013-07-01

    A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross section data are generated using a collision estimator in MCNP5, into which a high-order Legendre scattering cross section data generation module was added. Cross section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and were applied to the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic cross section data were used. A hybrid MCNP5/DIF3D calculation was used to analyze the core model: the cross section data were generated using MCNP5, and the k-eff and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of k-eff, the 9-group MCNP5/DIF3D analysis has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)

  16. Assessment of the Contrast to Noise Ratio in PET Scanners with Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.

    2015-09-01

    The aim of the present study was to assess the contrast-to-noise ratio (CNR) of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model, developed with the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction. The PET scanner simulated was the GE DiscoveryST. The plane source, consisting of a TLC plate, was simulated as a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution. Image quality was assessed in terms of the CNR, which was estimated from coronal reconstructed images of the plane source. Images were reconstructed with the maximum likelihood (MLE) OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed using various subsets (3, 15 and 21) and various iterations (2 to 20). CNR values were found to decrease as both iterations and subsets increase, and two (2) iterations were found to be optimal. The simulated PET evaluation method, based on the TLC plane source, can be useful in image quality assessment of PET scanners.
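    One common definition of CNR can be computed from signal and background regions of interest in the reconstructed images; the exact ROI convention used in the study is not specified here, so this is a generic sketch:

```python
import statistics

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: (mean signal - mean background) / background std.

    ROIs are given as flat lists of pixel values from a reconstructed image."""
    contrast = statistics.fmean(signal_roi) - statistics.fmean(background_roi)
    return contrast / statistics.stdev(background_roi)
```

Evaluating such a figure of merit across reconstructions with different subset/iteration settings is how an optimum like the study's "two iterations" result is identified.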

  17. Partial site occupancy structure of decagonal AlNiCo using Monte-Carlo methods

    NASA Astrophysics Data System (ADS)

    Naidu, Siddartha; Widom, Mike; Mihalkovic, Marek

    2002-03-01

    The structure of decagonal AlNiCo was modeled using quasilattice-gas Monte-Carlo methods (M. Mihalkovic et al., arXiv:cond-mat/0102085 (2001)) with fixed ideal sites and realistic stoichiometry. Site occupancies and pair correlation functions were computed from the simulations to determine aluminum and transition metal atom concentrations at various sites. The results were compared to structures refined from experimental data [2,3]. The experimental Patterson function [2] finds the same positions for the near neighbours, and the site occupancies [3] show similar locations for Al and TM atoms. At some ideal sites a systematic depletion of Al occupancy was found. We found that certain ideal sites relax in directions in agreement with experiment [3], and the site occupancies for the relaxed structure are closer to experimental values. [2] A. Cervellino, T. Haibach and W. Steurer (Preprint, 2001). [3] H. Takakura et al., Acta Cryst. A 57, 576-85 (2001).

  18. Scalar and parallel optimized implementation of the direct simulation Monte Carlo method

    SciTech Connect

    Dietrich, S.; Boyd, I.D.

    1996-07-01

    This paper describes a new concept for the implementation of the direct simulation Monte Carlo (DSMC) method. It uses a localized data structure based on a computational cell to achieve high performance, especially on workstation processors, which can also be used in parallel. Since the data structure makes it possible to freely assign any cell to any processor, a domain decomposition can be found with equal calculation load on each processor while maintaining minimal communication among the nodes. Further, the new implementation strictly separates physical modeling, geometrical issues, and organizational tasks to achieve high maintainability and to simplify future enhancements. Three example flow configurations are calculated with the new implementation to demonstrate its generality and performance. They include a flow through a diverging channel using an adapted unstructured triangulated grid, a flow around a planetary probe, and an internal flow in a contactor used in plasma physics. The results are validated either by comparison with results obtained from other simulations or by comparison with experimental data. High performance on an IBM SP2 system is achieved if problem size and number of parallel processors are adapted accordingly. On 400 nodes, DSMC calculations with more than 100 million particles are possible. 19 refs., 18 figs.

  19. A method of simulating dynamic multileaf collimators using Monte Carlo techniques for intensity-modulated radiation therapy

    NASA Astrophysics Data System (ADS)

    Liu, H. Helen; Verhaegen, Frank; Dong, Lei

    2001-09-01

    A method of modelling the dynamic motion of multileaf collimators (MLCs) for intensity-modulated radiation therapy (IMRT) was developed and implemented into the Monte Carlo simulation. The simulation of the dynamic MLCs (DMLCs) was based on randomizing leaf positions during a simulation so that the number of particle histories being simulated for each possible leaf position was proportional to the monitor units delivered to that position. This approach was incorporated into an EGS4 Monte Carlo program, and was evaluated in simulating the DMLCs for Varian accelerators (Varian Medical Systems, Palo Alto, CA, USA). The MU index of each segment, which was specified in the DMLC-control data, was used to compute the cumulative probability distribution function (CPDF) for the leaf positions. This CPDF was then used to sample the leaf positions during a real-time simulation, which allowed for either the step-shoot or sweeping-leaf motion in the beam delivery. Dose intensity maps for IMRT fields were computed using the above Monte Carlo method, with its accuracy verified by film measurements. The DMLC simulation improved the operational efficiency by eliminating the need to simulate multiple segments individually. More importantly, the dynamic motion of the leaves could be simulated more faithfully by using the above leaf-position sampling technique in the Monte Carlo simulation.
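    The leaf-position sampling the authors describe reduces to inverse-CDF sampling over control points weighted by monitor units: build the cumulative probability distribution function from the MU index, then look up each history's segment with a uniform random number. A minimal sketch (the MU values below are illustrative, not a real DMLC control file):

```python
import bisect
import random

def build_cpdf(mu_index):
    """Cumulative probability over control points from the cumulative MU index."""
    total = mu_index[-1]
    return [m / total for m in mu_index]

def sample_segment(cpdf, rng):
    """Pick the control point whose MU interval contains a uniform random number,
    so histories are allocated to leaf positions in proportion to delivered MU."""
    return bisect.bisect_left(cpdf, rng.random())
```

Each sampled segment index would then set the leaf positions for that particle history, which is what lets one simulation pass cover the whole dynamic delivery instead of simulating every segment separately.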

  20. [Study of Determination of Oil Mixture Components Content Based on Quasi-Monte Carlo Method].

    PubMed

    Wang, Yu-tian; Xu, Jing; Liu, Xiao-fei; Chen, Meng-han; Wang, Shi-tao

    2015-05-01

    Gasoline, kerosene, and diesel are processed from crude oil over different distillation ranges: the boiling range of gasoline is 35~205 °C, that of kerosene is 140~250 °C, and that of diesel is 180~370 °C. The carbon chain lengths of the different mineral oils also differ: gasoline lies within the scope of C7 to C11, kerosene within C12 to C15, and diesel within C15 to C18. The recognition and quantitative measurement of the three kinds of mineral oil are based on the different fluorescence spectra formed by their different carbon number distribution characteristics. Mineral oil pollution occurs frequently, so monitoring mineral oil content in the ocean is very important. A new method for determining the component content of a mineral oil mixture with overlapping spectra is proposed. Characteristic peak power integration of the three-dimensional fluorescence spectrum is calculated using the quasi-Monte Carlo method, combined with an optimization algorithm that solves for the optimum number of characteristic peaks and the range of the integration region, and the resulting nonlinear equations are solved with the BFGS method (a rank-two update method named after the first letters of its inventors' surnames: Broyden, Fletcher, Goldfarb and Shanno). The accumulated peak power over the determined points in the selected area is sensitive to small changes in the fluorescence spectral line, so the measurement of small changes in component content is sensitive. At the same time, compared with single-point measurement, measurement sensitivity is improved because selecting many points reduces the influence of random error. Three-dimensional fluorescence spectra and fluorescence contour spectra of the single mineral oils and of the mixture were measured, taking kerosene, diesel and gasoline as research objects, with each mineral oil regarded as a whole rather than as its individual components.
    Six characteristic peaks are selected for characteristic peak power integration to determine the component contents of the gasoline-kerosene-diesel mixture by the optimization algorithm. Compared with single-point peak and mean measurements, measurement sensitivity is improved about 50-fold. This high-precision measurement of mixture component contents provides a practical algorithm for the direct determination of component contents in mixtures with overlapping spectra without chemical separation. PMID:26415451
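    The quasi-Monte Carlo integration underlying the peak-power calculation replaces pseudo-random sample points with a low-discrepancy sequence, which converges faster for smooth integrands. A minimal sketch using a 2-D Halton sequence on a toy integrand (the actual method integrates measured fluorescence intensity over selected peak regions):

```python
def halton(i, base):
    """i-th element (i >= 1) of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_integrate_2d(f, n=4096):
    """Quasi-Monte Carlo estimate of the integral of f over the unit square,
    sampling at 2-D Halton points (bases 2 and 3) instead of random points."""
    return sum(f(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)) / n
```

For the separable test integrand x*y the exact integral over the unit square is 0.25, which the Halton estimate approaches with far less scatter than plain Monte Carlo at the same n.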

  1. Comprehensive modeling of special nuclear materials detection using three-dimensional deterministic and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Ghita, Gabriel M.

    Our study aims to design a useful neutron signature characterization device based on 3He detectors, a standard neutron detection methodology used in homeland security applications. The research work involved simulation of the generation, transport, and detection of the leakage radiation from Special Nuclear Materials (SNM). To accomplish the research goals, we use a new methodology to fully characterize a standard "1-Ci" Plutonium-Beryllium (Pu-Be) neutron source based on 3-D computational radiation transport methods, employing both deterministic SN and Monte Carlo methodologies. The computational model findings were subsequently validated through experimental measurements. The achieved results allowed us to design, build, and laboratory-test a Nickel composite alloy shield that enables the neutron leakage spectrum from a standard Pu-Be source to be transformed, through neutron scattering interactions in the shield, into a very close approximation of the neutron spectrum leaking from a large, subcritical mass of Weapons Grade Plutonium (WGPu) metal. This source will make possible testing with a nearly exact reproduction of the neutron spectrum from a 6.67 kg WGPu mass equivalent, but without the expense or risk of testing detector components with real materials. Moreover, over thirty moderator materials were studied in order to characterize their neutron energy filtering potential. Particular focus was placed on establishing the limits of He-3 spectroscopy using ideal filter materials. To demonstrate our methodology, we present the optimally detected spectral differences between SNM materials (Plutonium and Uranium), metal and oxide, using ideal filter materials. Finally, using knowledge gained from previous studies, the design of a He-3 spectroscopy system neutron detector, simulated entirely via computational methods, is proposed to resolve the spectra from SNM neutron sources of high interest. 
This was accomplished by replacing ideal filters with real materials, and comparing reaction rates with similar data from the ideal material suite.

  2. Range Verification Methods in Particle Therapy: Underlying Physics and Monte Carlo Modeling

    PubMed Central

    Kraan, Aafke Christine

    2015-01-01

    Hadron therapy allows for highly conformal dose distributions and better sparing of organs-at-risk, thanks to the characteristic dose deposition as function of depth. However, the quality of hadron therapy treatments is closely connected with the ability to predict and achieve a given beam range in the patient. Currently, uncertainties in particle range lead to the employment of safety margins, at the expense of treatment quality. Much research in particle therapy is therefore aimed at developing methods to verify the particle range in patients. Non-invasive in vivo monitoring of the particle range can be performed by detecting secondary radiation, emitted from the patient as a result of nuclear interactions of charged hadrons with tissue, including β+ emitters, prompt photons, and charged fragments. The correctness of the dose delivery can be verified by comparing measured and pre-calculated distributions of the secondary particles. The reliability of Monte Carlo (MC) predictions is a key issue. Correctly modeling the production of secondaries is a non-trivial task, because it involves nuclear physics interactions at energies, where no rigorous theories exist to describe them. The goal of this review is to provide a comprehensive overview of various aspects in modeling the physics processes for range verification with secondary particles produced in proton, carbon, and heavier ion irradiation. We discuss electromagnetic and nuclear interactions of charged hadrons in matter, which is followed by a summary of some widely used MC codes in hadron therapy. Then, we describe selected examples of how these codes have been validated and used in three range verification techniques: PET, prompt gamma, and charged particle detection. We include research studies and clinically applied methods. For each of the techniques, we point out advantages and disadvantages, as well as clinical challenges still to be addressed, focusing on MC simulation aspects. PMID:26217586

  3. Dynamical estimation of neuron and network properties II: Path integral Monte Carlo methods.

    PubMed

    Kostuk, Mark; Toth, Bryan A; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I

    2012-03-01

    Hodgkin-Huxley (HH) models of neuronal membrane dynamics consist of a set of nonlinear differential equations that describe the time-varying conductance of various ion channels. Using observations of voltage alone we show how to estimate the unknown parameters and unobserved state variables of an HH model in the expected circumstance that the measurements are noisy, the model has errors, and the state of the neuron is not known when observations commence. The joint probability distribution of the observed membrane voltage and the unobserved state variables and parameters of these models is a path integral through the model state space. The solution to this integral allows estimation of the parameters and thus a characterization of many biological properties of interest, including channel complement and density, that give rise to a neuron's electrophysiological behavior. This paper describes a method for directly evaluating the path integral using a Monte Carlo numerical approach. This provides estimates not only of the expected values of model parameters but also of their posterior uncertainty. Using test data simulated from neuronal models comprising several common channels, we show that short (<50 ms) intracellular recordings from neurons stimulated with a complex time-varying current yield accurate and precise estimates of the model parameters as well as accurate predictions of the future behavior of the neuron. We also show that this method is robust to errors in model specification, supporting model development for biological preparations in which the channel expression and other biophysical properties of the neurons are not fully known. PMID:22526358

  4. A Markov Chain Monte Carlo method for the groundwater inverse problem.

    SciTech Connect

    Lu, Z.; Higdon, D. M.; Zhang, D.

    2004-01-01

    In this study, we develop a Markov chain Monte Carlo (MCMC) method to estimate the hydraulic conductivity field conditioned on direct measurements of hydraulic conductivity and indirect measurements of dependent variables, such as hydraulic head, for saturated flow in randomly heterogeneous porous media. The log hydraulic conductivity field is represented (parameterized) by a combination of basis kernels centered at fixed spatial locations. The vector of coefficients θ is sampled from the posterior distribution π(θ|d), which is proportional to the product of the likelihood function of the measurements d given the parameter vector θ and the prior distribution of θ. Starting from any initial setting, a partial realization of a Markov chain is generated by updating only one component of θ at a time according to Metropolis rules, which ensures that the output of the chain has π(θ|d) as its stationary distribution. The posterior mean of θ (and thus the mean log hydraulic conductivity conditioned on the measurements of hydraulic conductivity and hydraulic head) can be estimated from the Markov chain realizations, ignoring some early realizations. The uncertainty associated with the mean field can also be assessed from these realizations. In addition, the MCMC approach provides an alternative for estimating conditional predictions of hydraulic head and concentration and their associated uncertainties. Numerical examples for flow in a hypothetical random porous medium show that the log hydraulic conductivity field estimated by the MCMC approach is closer to the original hypothetical random field than those obtained using kriging or cokriging methods.
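    The one-component-at-a-time Metropolis update is the distinctive ingredient of this chain. A minimal sketch on a toy two-coefficient linear model with a Gaussian likelihood and flat prior (all model choices here are illustrative, not the groundwater parameterization):

```python
import math
import random

def single_component_mcmc(data_x, data_y, n_iter=20_000, step=0.3, seed=9):
    """Metropolis sampler that updates one component of theta at a time,
    as in component-wise (Metropolis-within-Gibbs) chains. Toy model:
    y = t0 + t1 * x with unit-variance Gaussian noise and a flat prior."""
    rng = random.Random(seed)
    def log_post(t0, t1):
        return -0.5 * sum((y - t0 - t1 * x) ** 2 for x, y in zip(data_x, data_y))
    theta = [0.0, 0.0]
    lp = log_post(*theta)
    samples = []
    for _ in range(n_iter):
        for c in range(2):                        # update each component in turn
            prop = theta[:]
            prop[c] += rng.gauss(0.0, step)
            lp_new = log_post(*prop)
            if rng.random() < math.exp(min(0.0, lp_new - lp)):
                theta, lp = prop, lp_new
        samples.append(tuple(theta))
    return samples
```

After discarding early realizations, the sample means approximate the posterior means of the coefficients (here, the least-squares fit, since the prior is flat), mirroring how the paper estimates the conditional mean field from the chain.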

  5. Stochastic geometrical model and Monte Carlo optimization methods for building reconstruction from InSAR data

    NASA Astrophysics Data System (ADS)

    Zhang, Yue; Sun, Xian; Thiele, Antje; Hinz, Stefan

    2015-10-01

    Synthetic aperture radar (SAR) systems, such as TanDEM-X, TerraSAR-X and Cosmo-SkyMed, acquire imagery with high spatial resolution (HR), making it possible to observe objects in urban areas with high detail. In this paper, we propose a new top-down framework for three-dimensional (3D) building reconstruction from HR interferometric SAR (InSAR) data. Unlike most methods proposed before, we adopt a generative model and carry out the reconstruction by maximum a posteriori (MAP) estimation through Monte Carlo methods. The motivation for this strategy is that the noisiness of SAR images calls for a thorough prior model to better cope with the inherent amplitude and phase fluctuations. In the reconstruction process, according to the radar configuration and the building geometry, a 3D building hypothesis is mapped to the SAR image plane and decomposed into feature regions such as layover, corner line, and shadow. Then, the statistical properties of intensity, interferometric phase and coherence of each region are explored, and are included as region terms. Roofs are not directly considered, as in most cases they are merged with the walls into the layover area. When estimating the similarity between the building hypothesis and the real data, the prior and the region terms are taken into consideration, together with an edge term related to the contours of the layover and corner line. In the optimization step, special transition kernels are designed in order to achieve convergent reconstruction outputs and escape local extrema. The proposed framework is evaluated on the TanDEM-X dataset and performs well for building reconstruction.

  6. Towards prediction of correlated material properties using quantum Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Wagner, Lucas

    Correlated electron systems offer a richness of physics far beyond noninteracting systems. If we would like to pursue the dream of designer correlated materials, or, even to set a more modest goal, to explain in detail the properties and effective physics of known materials, then accurate simulation methods are required. Using modern computational resources, quantum Monte Carlo (QMC) techniques offer a way to directly simulate electron correlations. I will show some recent results on a few extremely challenging materials including the metal-insulator transition of VO2, the ground state of the doped cuprates, and the pressure dependence of magnetic properties in FeSe. By using a relatively simple implementation of QMC, at least some properties of these materials can be described truly from first principles, without any adjustable parameters. Using the QMC platform, we have developed a way of systematically deriving effective lattice models from the simulation. This procedure is particularly attractive for correlated electron systems because the QMC methods treat the one-body and many-body components of the wave function and Hamiltonian on completely equal footing. I will show some examples of using this downfolding technique and the high accuracy of QMC to connect our intuitive ideas about interacting electron systems with high fidelity simulations. The work in this presentation was supported in part by NSF DMR 1206242, the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program under Award Number FG02-12ER46875, and the Center for Emergent Superconductivity, Department of Energy Frontier Research Center under Grant No. DEAC0298CH1088. Computing resources were provided by a Blue Waters Illinois grant and INCITE PhotSuper and SuperMatSim allocations.

  7. Monte Carlo particle-in-cell methods for the simulation of the Vlasov-Maxwell gyrokinetic equations

    NASA Astrophysics Data System (ADS)

    Bottino, A.; Sonnendrücker, E.

    2015-10-01

    The particle-in-cell (PIC) algorithm is the most popular method for the discretisation of the general 6D Vlasov-Maxwell problem, and it is widely used also for the simulation of the 5D gyrokinetic equations. The method consists of coupling a particle-based algorithm for the Vlasov equation with a grid-based method for the computation of the self-consistent electromagnetic fields. In this review we derive a Monte Carlo PIC finite-element model starting from a gyrokinetic discrete Lagrangian. The variations of the Lagrangian are used to obtain the time-continuous equations of motion for the particles and the finite-element approximation of the field equations. The Noether theorem for the semi-discretised system implies a certain number of conservation properties for the final set of equations. Moreover, the PIC method can be interpreted as a probabilistic, Monte Carlo-like method that calculates integrals of the continuous distribution function using a finite set of discrete markers. The nonlinear interactions, along with numerical errors, introduce random effects after some time. Therefore, the same tools for error analysis and error reduction used in Monte Carlo numerical methods can be applied to PIC simulations.
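
    The closing point, that PIC markers amount to Monte Carlo quadrature, can be seen in a minimal sketch. A unit Maxwellian stands in for the distribution function (not the gyrokinetic system itself), and the statistical error of a moment estimate decays like 1/sqrt(N) in the number of markers:

```python
import numpy as np

rng = np.random.default_rng(2)

# PIC viewed as Monte Carlo: moments of a continuous distribution f(v)
# are estimated from a finite set of markers.  Here f is a unit
# Maxwellian, whose second moment <v^2> equals exactly 1.
def estimate(n_markers):
    v = rng.standard_normal(n_markers)   # markers drawn from f
    return np.mean(v ** 2)               # marker estimate of <v^2>

def rms_error(n_markers, repeats=200):
    """Root-mean-square error of the estimate over independent runs."""
    errs = [(estimate(n_markers) - 1.0) ** 2 for _ in range(repeats)]
    return np.sqrt(np.mean(errs))

# The statistical error decays like 1/sqrt(N), as for any Monte Carlo
# quadrature, which is why MC error-reduction tools apply to PIC.
for n in (100, 1600, 25600):
    print(f"N = {n:>6}:  rms error = {rms_error(n):.4f}")
```

    Each 16-fold increase in marker count cuts the error by roughly a factor of four, the signature 1/sqrt(N) Monte Carlo scaling.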

  8. Forward treatment planning for modulated electron radiotherapy (MERT) employing Monte Carlo methods

    SciTech Connect

    Henzen, D.; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Lössl, K.; Aebersold, D. M.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.

    2014-03-15

    Purpose: This paper describes the development of a forward planning process for modulated electron radiotherapy (MERT). The approach is based on a previously developed electron beam model used to calculate dose distributions of electron beams shaped by a photon multileaf collimator (pMLC). Methods: As the electron beam model has already been implemented into the Swiss Monte Carlo Plan environment, the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) can be included in the planning process for MERT. In a first step, CT data are imported into Eclipse and a pMLC-shaped electron beam is set up. This initial electron beam is then divided into segments, with the electron energy in each segment chosen according to the distal depth of the planning target volume (PTV) in the beam direction. In order to improve the homogeneity of the dose distribution in the PTV, a feathering process (Gaussian edge feathering) is launched, which results in a number of feathered segments. For each of these segments a dose calculation is performed employing the in-house developed electron beam model along with the macro Monte Carlo dose calculation algorithm. Finally, an automated weight optimization of all segments is carried out and the total dose distribution is read back into Eclipse for display and evaluation. One academic and two clinical situations are investigated for possible benefits of MERT treatment compared to standard treatments performed in our clinics and to treatment with a bolus electron conformal (BolusECT) method. Results: The MERT treatment plan of the academic case was superior to the standard single-segment electron treatment plan in terms of organ-at-risk (OAR) sparing. Further, a comparison between an unfeathered and a feathered MERT plan showed better PTV coverage and homogeneity for the feathered plan, with V95% increased from 90% to 96% and V107% decreased from 8% to nearly 0%.
For a clinical breast boost irradiation, the MERT plan led to a similar homogeneity in the PTV compared to the standard treatment plan, while the mean body dose was lower for the MERT plan. Regarding the second clinical case, a whole breast treatment, MERT resulted in a reduction of the lung volume receiving more than 45% of the prescribed dose when compared to the standard plan. On the other hand, the MERT plan led to a larger low-dose lung volume and a degraded dose homogeneity in the PTV. For the clinical cases evaluated in this work, treatment plans using the BolusECT technique resulted in more homogeneous PTV and CTV coverage but higher doses to the OARs than the MERT plans. Conclusions: MERT treatments were successfully planned for phantom and clinical cases, applying a newly developed, intuitive and efficient forward planning strategy that employs a MC-based electron beam model for pMLC-shaped electron beams. It is shown that MERT can lead to a dose reduction in OARs compared to other methods. The process of feathering MERT segments results in an improvement of the dose homogeneity in the PTV.

  9. Simulating the proton transfer in gramicidin A by a sequential dynamical Monte Carlo method.

    PubMed

    Till, Mirco S; Essigke, Timm; Becker, Torsten; Ullmann, G Matthias

    2008-10-23

    The large interest in long-range proton transfer in biomolecules is triggered by its importance for many biochemical processes, such as biological energy transduction and drug detoxification. Since long-range proton transfer occurs on a microsecond time scale, simulating this process on a molecular level is still a challenging task and not possible with standard simulation methods. In general, the dynamics of a reactive system can be described by a master equation. A natural way to describe long-range charge transfer in biomolecules is to decompose the process into elementary steps which are transitions between microstates. Each microstate has a defined protonation pattern. Although such a master equation can in principle be solved analytically, it is often too demanding to solve this equation because of the large number of microstates. In this paper, we describe a new method which solves the master equation by a sequential dynamical Monte Carlo algorithm. Starting from one microstate, the evolution of the system is simulated as a stochastic process. The energetic parameters required for these simulations are determined by continuum electrostatic calculations. We apply this method to simulate the proton transfer through gramicidin A, a transmembrane proton channel, as a function of the applied membrane potential and the pH value of the solution. As elementary steps in our reaction, we consider proton uptake and release, proton transfer along a hydrogen bond, and rotations of water molecules that constitute a proton wire through the channel. A simulation of 8 μs length took about 5 min on an Intel Pentium 4 CPU with 3.2 GHz. We obtained good agreement with experimental data for the proton flux through gramicidin A over a wide range of pH values and membrane potentials. We find that proton desolvation as well as water rotations are equally important for the proton transfer through gramicidin A at physiological membrane potentials.
Our method makes it possible to simulate long-range charge transfer in biological systems on time scales that are not accessible to other methods. PMID:18826179
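
    The sequential dynamical Monte Carlo idea, drawing stochastic trajectories whose statistics solve the master equation, can be sketched on a minimal two-state system. This is a hypothetical stand-in for a proton hopping along the wire; the rates are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-state master equation solved by dynamical Monte Carlo: the system
# hops between states 0 and 1 (say, a proton on either side of a
# hydrogen bond) with illustrative rates k01 and k10.  Detailed balance
# gives the exact stationary occupancy p0 = k10 / (k01 + k10) = 1/3.
k01, k10 = 2.0, 1.0

state, t, t_end = 0, 0.0, 50000.0
occupancy = [0.0, 0.0]                 # time accumulated in each state
while t < t_end:
    rate = k01 if state == 0 else k10
    dt = rng.exponential(1.0 / rate)   # stochastic waiting time
    occupancy[state] += dt
    t += dt
    state = 1 - state                  # elementary transition step

p0 = occupancy[0] / (occupancy[0] + occupancy[1])
print(f"simulated p0 = {p0:.3f}, exact = {k10 / (k01 + k10):.3f}")
```

    In the paper the same machinery runs over many microstates with rates from continuum electrostatics, but the trajectory-averaging principle is identical.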

  10. An Investigation of the Performance of the Unified Monte Carlo Method of Neutron Cross Section Data Evaluation

    SciTech Connect

    Capote, Roberto; Smith, Donald L.

    2008-12-15

    The Unified Monte Carlo method (UMC) has been suggested to avoid certain limitations and approximations inherent to the well-known Generalized Least Squares (GLS) method of nuclear data evaluation. This contribution reports on an investigation of the performance of the UMC method in comparison with the GLS method. This is accomplished by applying both methods to simple examples with few input values that were selected to explore various features of the evaluation process that impact upon the quality of an evaluation. Among the issues explored are: i) convergence of UMC results with the number of Monte Carlo histories and the ranges of sampled values; ii) a comparison of Monte Carlo sampling using the Metropolis scheme and a brute force approach; iii) the effects of large data discrepancies; iv) the effects of large data uncertainties; v) the effects of strong or weak model or experimental data correlations; and vi) the impact of ratio data and integral data. Comparisons are also made of the evaluated results for these examples when the input values are first transformed to comparable logarithmic values prior to performing the evaluation. Some general conclusions that are applicable to more realistic evaluation exercises are offered.

  11. Geometrically-compatible 3-D Monte Carlo and discrete-ordinates methods

    SciTech Connect

    Morel, J.E.; Wareing, T.A.; McGhee, J.M.; Evans, T.M.

    1998-12-31

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The purpose of this project was two-fold. The first purpose was to develop a deterministic discrete-ordinates neutral-particle transport scheme for unstructured tetrahedral spatial meshes, and implement it in a computer code. The second purpose was to modify the MCNP Monte Carlo radiation transport code to use adjoint solutions from the tetrahedral-mesh discrete-ordinates code to reduce the statistical variance of Monte Carlo solutions via a weight-window approach. The first task has resulted in a deterministic transport code that is much more efficient for modeling complex 3-D geometries than any previously existing deterministic code. The second task has resulted in a powerful new capability for dramatically reducing the cost of difficult 3-D Monte Carlo calculations.

  12. Neoclassical electron transport calculation by using δf Monte Carlo method

    SciTech Connect

    Matsuoka, Seikichi; Satake, Shinsuke; Yokoyama, Masayuki; Wakasa, Arimitsu; Murakami, Sadayoshi

    2011-03-15

    High electron temperature plasmas with a steep temperature gradient in the core are obtained in recent experiments in the Large Helical Device [A. Komori et al., Fusion Sci. Technol. 58, 1 (2010)]. Such plasmas are called core electron-root confinement (CERC) and have attracted much attention. In typical CERC plasmas, the radial electric field shows a transition phenomenon from a small negative value (ion root) to a large positive value (electron root), and the radial electric field in helical plasmas is determined dominantly by the ambipolar condition of the neoclassical particle flux. To investigate the neoclassical transport of such plasmas precisely, the numerical neoclassical transport code FORTEC-3D [S. Satake et al., J. Plasma Fusion Res. 1, 002 (2006)], which solves the drift kinetic equation based on the δf Monte Carlo method and has been applied to ion species so far, is extended to treat electron neoclassical transport. To check the validity of our new FORTEC-3D code, benchmark calculations are carried out with the GSRAKE [C. D. Beidler et al., Plasma Phys. Controlled Fusion 43, 1131 (2001)] and DCOM/NNW [A. Wakasa et al., Jpn. J. Appl. Phys. 46, 1157 (2007)] codes, which calculate neoclassical transport using certain approximations. The benchmark calculation shows good agreement among the FORTEC-3D, GSRAKE and DCOM/NNW codes for a low temperature (Te(0) = 1.0 keV) plasma. It is also confirmed that the finite orbit width effect included in FORTEC-3D has little effect on neoclassical transport, even for the low collisionality plasma, if the plasma is at the low temperature. However, for a higher temperature (5 keV at the core) plasma, significant differences arise among FORTEC-3D, GSRAKE, and DCOM/NNW. These results show the importance of evaluating electron neoclassical transport by solving the kinetic equation rigorously, including the effect of finite radial drift, for high electron temperature plasmas.

  13. Simulation of aggregating particles in complex flows by the lattice kinetic Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Flamm, Matthew H.; Sinno, Talid; Diamond, Scott L.

    2011-01-01

    We develop and validate an efficient lattice kinetic Monte Carlo (LKMC) method for simulating particle aggregation in laminar flows with spatially varying shear rate, such as parabolic flow or flows with standing vortices. A contact time model was developed to describe the particle-particle collision efficiency as a function of the local shear rate, G, and approach angle, θ. This model effectively accounts for the hydrodynamic interactions between approaching particles, which are not explicitly considered in the LKMC framework. For imperfect collisions, the derived collision efficiency, ε = 1 - ∫_0^{π/2} sin θ exp(-2 cot θ · Γ_agg/G) dθ, was found to depend only on Γ_agg/G, where Γ_agg is the specified aggregation rate. For aggregating platelets in tube flow, Γ_agg = 0.683 s-1 predicts the experimentally measured ε across a physiological range (G = 40-1000 s-1) and is consistent with αIIbβ3-fibrinogen bond dynamics. Aggregation in parabolic flow resulted in the largest aggregates forming near the wall, where shear rate and residence time were maximal; however, intermediate regions between the wall and the center exhibited the highest aggregation rate due to depletion of reactants nearest the wall. Then, motivated by stenotic or valvular flows, we employed the LKMC simulation developed here for baffled geometries that exhibit regions of squeezing flow and standing recirculation zones. In these calculations, the largest aggregates were formed within the vortices (maximal residence time), while squeezing flow regions corresponded to zones of highest aggregation rate.
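
    The collision efficiency integral quoted in the abstract can be evaluated numerically to check the stated property that ε depends only on the ratio Γ_agg/G. This sketch uses a simple trapezoidal quadrature; the particular Γ_agg and G values are illustrative:

```python
import numpy as np

def collision_efficiency(gamma_agg, G, n=20001):
    """eps = 1 - integral_0^{pi/2} sin(t) exp(-2 cot(t) gamma_agg / G) dt."""
    t = np.linspace(1e-6, np.pi / 2, n)    # integrand vanishes at t = 0
    f = np.sin(t) * np.exp(-2.0 * np.cos(t) / np.sin(t) * gamma_agg / G)
    return 1.0 - np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))  # trapezoid rule

# Scaling gamma_agg and G by the same factor leaves eps unchanged,
# confirming that eps depends only on the ratio gamma_agg / G.
e1 = collision_efficiency(0.683, 100.0)
e2 = collision_efficiency(6.83, 1000.0)
print(f"eps = {e1:.4f} vs {e2:.4f}")
```

    Raising Γ_agg/G (slower flow or faster bonding) increases ε, consistent with the physical picture of more effective collisions at longer contact times.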

  14. Nanothermodynamics of large iron clusters by means of a flat histogram Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Basire, M.; Soudan, J.-M.; Angelié, C.

    2014-09-01

    The thermodynamics of iron clusters of various sizes, from 76 to 2452 atoms, typical of the catalyst particles used for carbon nanotube growth, has been explored by a flat histogram Monte Carlo (MC) algorithm (called the σ-mapping), developed by Soudan et al. [J. Chem. Phys. 135, 144109 (2011), Paper I]. This method provides the classical density of states, gp(Ep), in the configurational space, in terms of the potential energy of the system, with good and well controlled convergence properties, particularly in the melting phase transition zone which is of interest in this work. To describe the system, an iron potential has been implemented, called "corrected EAM" (cEAM), which approximates the MEAM potential of Lee et al. [Phys. Rev. B 64, 184102 (2001)] with an accuracy better than 3 meV/at and a five times larger computational speed. The main simplification concerns the angular dependence of the potential, with a small impact on accuracy, while the screening coefficients Sij are exactly computed with a fast algorithm. With this potential, ergodic explorations of the clusters can be performed efficiently in a reasonable computing time, at least in the upper half of the solid zone and above. Problems of ergodicity exist in the lower half of the solid zone, but routes to overcome them are discussed. The solid-liquid (melting) phase transition temperature Tm is plotted in terms of the cluster atom number Nat. The standard N_{at}^{-1/3} linear dependence (Pawlow law) is observed for Nat > 300, allowing an extrapolation up to the bulk metal at 1940 ± 50 K. For Nat < 150, a strong divergence is observed compared to the Pawlow law. The melting transition, which begins at the surface, is characterized by a Lindemann-Berry index and an atomic density analysis. Several new features are obtained for the thermodynamics of cEAM clusters, compared to the Rydberg pair potential clusters studied in Paper I.

  15. Neoclassical electron transport calculation by using δf Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Matsuoka, Seikichi; Satake, Shinsuke; Yokoyama, Masayuki; Wakasa, Arimitsu; Murakami, Sadayoshi

    2011-03-01

    High electron temperature plasmas with a steep temperature gradient in the core are obtained in recent experiments in the Large Helical Device [A. Komori et al., Fusion Sci. Technol. 58, 1 (2010)]. Such plasmas are called core electron-root confinement (CERC) and have attracted much attention. In typical CERC plasmas, the radial electric field shows a transition phenomenon from a small negative value (ion root) to a large positive value (electron root), and the radial electric field in helical plasmas is determined dominantly by the ambipolar condition of the neoclassical particle flux. To investigate the neoclassical transport of such plasmas precisely, the numerical neoclassical transport code FORTEC-3D [S. Satake et al., J. Plasma Fusion Res. 1, 002 (2006)], which solves the drift kinetic equation based on the δf Monte Carlo method and has been applied to ion species so far, is extended to treat electron neoclassical transport. To check the validity of our new FORTEC-3D code, benchmark calculations are carried out with the GSRAKE [C. D. Beidler et al., Plasma Phys. Controlled Fusion 43, 1131 (2001)] and DCOM/NNW [A. Wakasa et al., Jpn. J. Appl. Phys. 46, 1157 (2007)] codes, which calculate neoclassical transport using certain approximations. The benchmark calculation shows good agreement among the FORTEC-3D, GSRAKE and DCOM/NNW codes for a low temperature (Te(0) = 1.0 keV) plasma. It is also confirmed that the finite orbit width effect included in FORTEC-3D has little effect on neoclassical transport, even for the low collisionality plasma, if the plasma is at the low temperature. However, for a higher temperature (5 keV at the core) plasma, significant differences arise among FORTEC-3D, GSRAKE, and DCOM/NNW. These results show the importance of evaluating electron neoclassical transport by solving the kinetic equation rigorously, including the effect of finite radial drift, for high electron temperature plasmas.

  16. Use of Monte Carlo methods in environmental risk assessments at the INEL: Applications and issues

    SciTech Connect

    Harris, G.; Van Horn, R.

    1996-06-01

    The EPA is increasingly considering the use of probabilistic risk assessment techniques as an alternative to, or refinement of, the current point estimate of risk. This report provides an overview of the probabilistic technique called Monte Carlo analysis. The advantages and disadvantages of implementing a Monte Carlo analysis over a point-estimate analysis for environmental risk assessment are discussed. The general methodology is provided along with an example of its implementation. A phased approach to risk analysis that allows iterative refinement of the risk estimates is recommended for use at the INEL.
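
    The difference between a point estimate and a Monte Carlo risk distribution is easy to see in a toy sketch. Here a hypothetical multiplicative risk equation with illustrative lognormal inputs (none of these values come from the report) shows how the simulation exposes upper-percentile risk that a single point estimate hides:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy multiplicative risk equation: risk = concentration * intake * toxicity.
# The lognormal input distributions are purely illustrative.
n = 100000
conc = rng.lognormal(mean=0.0, sigma=0.5, size=n)
intake = rng.lognormal(mean=0.0, sigma=0.3, size=n)
tox = rng.lognormal(mean=0.0, sigma=0.4, size=n)
risk = conc * intake * tox

# A single point estimate at the input medians hides the upper tail
# that the Monte Carlo distribution makes explicit.
point = 1.0 * 1.0 * 1.0
print(f"point estimate  = {point:.2f}")
print(f"median risk     = {np.median(risk):.2f}")
print(f"95th percentile = {np.percentile(risk, 95):.2f}")
```

    The 95th percentile of the simulated risk is several times the point estimate, which is exactly the kind of information a probabilistic assessment adds.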

  17. Determining minimum alarm activities of orphan sources in scrap loads; Monte Carlo simulations, validated with measurements

    NASA Astrophysics Data System (ADS)

    Takoudis, G.; Xanthos, S.; Clouvas, A.; Potiriadis, C.

    2010-02-01

    Portal monitoring radiation detectors are commonly used by steel industries for probing and detecting radioactive contamination in scrap metal. These portal monitors typically consist of polystyrene or polyvinyltoluene (PVT) plastic scintillating detectors, one or more photomultiplier tubes (PMT), an electronic circuit, a controller that handles data output and manipulation, linking the system to a display or a computer with appropriate software, and usually a light guide. Such a portal used by the steel industry was opened, and all principal materials were simulated using a Monte Carlo simulation tool (MCNP4C2). Various source-detector configurations were simulated and validated by comparison with corresponding measurements. Subsequently, an experiment with a uniform cargo, along with two sets of experiments with different scrap loads and radioactive sources (137Cs, 152Eu), was performed and simulated. Simulated and measured results suggested that the nature of the scrap is crucial when simulating scrap load-detector experiments. Using the same simulation configuration, a series of runs was performed in order to estimate minimum alarm activities for 137Cs, 60Co and 192Ir sources for various simulated scrap densities. The minimum alarm activities, as well as the positions in which they were recorded, are presented and discussed.

  18. Deciding the fate of the false Mott transition in two dimensions by exact quantum Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Rost, D.; Blümer, N.

    2015-09-01

    We present an algorithm for the computation of unbiased Green functions and self-energies for quantum lattice models, free from systematic errors and valid in the thermodynamic limit. The method combines direct lattice simulations using the Blankenbecler-Scalapino-Sugar quantum Monte Carlo (BSS-QMC) approach with controlled multigrid extrapolation techniques. We show that the half-filled Hubbard model is insulating at low temperatures even in the weak-coupling regime; the previously claimed Mott transition at intermediate coupling does not exist.

  19. Analysis of extended x-ray absorption fine structure data from copper tungstate by the reverse Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Timoshenko, Janis; Anspoks, Andris; Kalinko, Aleksandr; Kuzmin, Alexei

    2014-04-01

    The static disorder and lattice dynamics of crystalline materials can be efficiently studied using reverse Monte Carlo simulations of extended x-ray absorption fine structure (EXAFS) spectra. In this work we demonstrate the potential of this method using copper tungstate (CuWO4) as an example. The simultaneous analysis of the Cu K and W L3 edge EXAFS spectra allowed us to follow the local structure distortion as a function of temperature.

  20. Sensitivity analysis of reactivity responses using one-dimensional discrete ordinates and three-dimensional Monte Carlo methods

    SciTech Connect

    Williams, M. L.; Gehin, J. C.; Clarno, K. T.

    2006-07-01

    The TSUNAMI computational sequences currently in the SCALE 5 code system provide an automated approach to performing sensitivity and uncertainty analysis for eigenvalue responses, using either one-dimensional discrete ordinates or three-dimensional Monte Carlo methods. This capability has recently been expanded to address eigenvalue-difference responses such as reactivity changes. This paper describes the methodology and presents results obtained for an example advanced CANDU reactor design. (authors)

  1. Multi-potential interactions in plasma adopting a GPU version of the reaction ensemble Monte Carlo method

    SciTech Connect

    D'Angola, A.; Tuttafesta, M.; Guadagno, M.; Santangelo, P.; Laricchiuta, A.; Colonna, G.; Capitelli, M.

    2012-11-27

    Calculations of the thermodynamic properties of helium plasma using the Reaction Ensemble Monte Carlo (REMC) method are presented. Nonideal effects at high pressure are observed. Calculations performed using Exp-6 or multi-potential curves for neutral-charge interactions show that no significant differences are observed under the thermodynamic conditions considered. Results have been obtained by using a Graphics Processing Unit (GPU)-CUDA C version of REMC.

  2. An Evaluation of a Markov Chain Monte Carlo Method for the Two-Parameter Logistic Model.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho; Cohen, Allan S.

    The accuracy of the Markov Chain Monte Carlo (MCMC) procedure Gibbs sampling was considered for estimation of item parameters of the two-parameter logistic model. Data for the Law School Admission Test (LSAT) Section 6 were analyzed to illustrate the MCMC procedure. In addition, simulated data sets were analyzed using the MCMC, marginal Bayesian…

  3. Prediction of polyelectrolyte polypeptide structures using Monte Carlo conformational search methods with implicit solvation modeling.

    PubMed

    Evans, J S; Chan, S I; Goddard, W A

    1995-10-01

    Many interesting proteins possess defined sequence stretches containing negatively charged amino acids. At present, experimental methods (X-ray crystallography, NMR) have failed to provide structural data for many of these sequence domains. We have applied the dihedral probability grid-Monte Carlo (DPG-MC) conformational search algorithm to a series of N- and C-capped polyelectrolyte peptides, (Glu)20, (Asp)20, (PSer)20, and (PSer-Asp)10, that represent polyanionic regions in a number of important proteins, such as parathymosin, calsequestrin, the sodium channel protein, and the acidic biomineralization proteins. The atomic charges were estimated from charge equilibration, and the valence and van der Waals parameters are from DREIDING. Solvation of the carboxylate and phosphate groups was treated using sodium counterions for each charged side chain (one Na+ for each COO-; two Na+ for each phosphate group) plus a distance-dependent (shielded) dielectric constant, ε = ε0R, to simulate solvent water. The structures of these polyelectrolyte polypeptides were obtained by the DPG-MC conformational search with ε0 = 10, followed by calculation of solvation energies for the lowest energy conformers using the protein dipole-Langevin dipole method of Warshel. These calculations predict a correlation between amino acid sequence and global folded conformational minima: 1. Poly-L-Glu20, our structural benchmark, exhibited a preference for right-handed alpha-helix (47% helicity), which approximates experimental observations of 55-60% helicity in solution. 2. For Asp- and PSer-containing sequences, all conformers exhibited a low preference for right-handed alpha-helix formation (≤ 10%), but a significant percentage (approximately 20% or greater) of beta-strand and beta-turn dihedrals were found in all three sequence cases: (1) Aspn forms supercoil conformers, with a 2:1:1 ratio of beta-turn:beta-strand:alpha-helix dihedral angles; (2) PSer20 features a nearly 1:1 ratio of beta-turn:beta-sheet dihedral preferences, with very little preference for alpha-helical structure, and possesses short regions of strand and turn combinations that give rise to a collapsed bend or hairpin structure; (3) (PSer-Asp)10 features a 3:2:1 ratio of beta-sheet:beta-turn:alpha-helix and gives rise to a superturn or C-shaped structure. PMID:8535238

  4. Magnetic interpretation by the Monte Carlo method with application to the intrusion of the Crimea

    NASA Astrophysics Data System (ADS)

    Gryshchuk, Pavlo

    2014-05-01

    The study applies geophysical methods to geological mapping. Magnetic and radiometric measurements were used to delineate intrusive bodies in the Bakhchysarai region of the Crimea. Proton magnetometers measured the total magnetic field in the survey area and at a variation station, a scintillation radiometer measured the radiation dose rate, and a susceptibility meter measured the magnetic susceptibility of the rocks, which crop out at the surface in this area. Anomalous values of the magnetic intensity were obtained as the difference between the field observations and the variation-station values. From these geophysical data, maps of the anomalous magnetic field, radiation dose rate, and magnetic susceptibility were produced. The geology of the area consists of magmatic rocks overlain by sedimentary rocks. The main task of the research was to determine the geometry and the magnetization vector of the igneous rocks. The intrusive body is composed of diabase and shows an average magnetic susceptibility, a weak dose rate, and a negative magnetic field. The sedimentary rocks are represented by clays with a low magnetic susceptibility and an average dose rate. The magnetic susceptibility map provides information on the values and distribution of magnetized bodies close to the surface; these data were used to control and refine the magnetic properties used in the modelling, while the magnetic anomaly map reflects the distribution of magnetization at depth. The interpretation profile was located perpendicular to the strike of the intrusive body, and modelling was performed for the magnetic field along this profile. The geological medium was filled with rectangular blocks, and the magnitude and direction of the magnetization were fitted for every block. The fitting was carried out by the Monte Carlo method, layer by layer from bottom to top. After a pass through all the blocks, the magnetic parameters of the block giving the best agreement between the theoretical and observed fields (the objective function) were fixed; this constituted the first iteration. The next iteration then began from that block, and whenever a subsequent pass through the blocks failed to reduce the objective function, the pass was restarted from the last block as in the first iteration. This technique worked well on synthetic models. As a result, the geometric boundaries of the geological objects were obtained. The magnetization of the igneous rocks is directed nearly opposite to the present-day field, perhaps because the Jurassic diabase acquired its magnetization at a time when the magnetic poles had the opposite sign compared with the modern field. The magnetic modelling yielded a geological section consistent with the geological concept.
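    The layer-by-layer fitting described in this abstract can be sketched as a greedy random search over block magnetizations. The following is a minimal illustration, not the author's code; the 1/r²-type forward model and all parameter names are hypothetical simplifications:

```python
import random

def forward(mags, xs, block_x, depth):
    """Toy forward model: each block contributes a 1/r^2-type anomaly
    proportional to its magnetization (hypothetical simplification)."""
    field = []
    for x in xs:
        total = 0.0
        for m, bx in zip(mags, block_x):
            r2 = (x - bx) ** 2 + depth ** 2
            total += m * depth / r2
        field.append(total)
    return field

def misfit(a, b):
    """Sum-of-squares objective function between two field profiles."""
    return sum((u - v) ** 2 for u, v in zip(a, b))

def monte_carlo_fit(observed, xs, block_x, depth, n_iter=2000, seed=1):
    """Greedy Monte Carlo fit: randomly perturb one block's magnetization
    and keep the change only if the misfit decreases."""
    rng = random.Random(seed)
    mags = [0.0] * len(block_x)
    best = misfit(forward(mags, xs, block_x, depth), observed)
    for _ in range(n_iter):
        i = rng.randrange(len(mags))
        old = mags[i]
        mags[i] += rng.uniform(-0.5, 0.5)   # random trial perturbation
        f = misfit(forward(mags, xs, block_x, depth), observed)
        if f < best:
            best = f                         # accept the improvement
        else:
            mags[i] = old                    # reject and restore
    return mags, best
```

    A trial perturbation is kept only when it lowers the misfit between the modelled and observed fields, mirroring the bottom-to-top acceptance rule described in the abstract.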

  5. Determination of surface dose rate of indigenous (32)P patch brachytherapy source by experimental and Monte Carlo methods.

    PubMed

    Kumar, Sudhir; Srinivasan, P; Sharma, S D; Saxena, Sanjay Kumar; Bakshi, A K; Dash, Ashutosh; Babu, D A R; Sharma, D N

    2015-09-01

    The Isotope Production and Application Division of Bhabha Atomic Research Center developed (32)P patch sources for the treatment of superficial tumors. The surface dose rate of a newly developed (32)P patch source of nominal diameter 25 mm was measured experimentally using a standard extrapolation ionization chamber and Gafchromic EBT film. A Monte Carlo model of the (32)P patch source along with the extrapolation chamber was also developed to estimate the surface dose rates from these sources. The surface dose rates to tissue (cGy/min) measured using the extrapolation chamber and the radiochromic films are 82.03±4.18 (k=2) and 79.13±2.53 (k=2), respectively. The two values of the surface dose rate obtained by these two independent experimental methods are in good agreement with each other, within a variation of 3.5%. The surface dose rate to tissue (cGy/min) estimated using the MCNP Monte Carlo code works out to be 77.78±1.16 (k=2). The maximum deviation between the surface dose rates to tissue obtained by the Monte Carlo and extrapolation chamber methods is 5.2%, whereas the difference between the surface dose rates obtained by the radiochromic film measurement and the Monte Carlo simulation is 1.7%. The three values of the surface dose rate of the (32)P patch source obtained by the three independent methods agree with one another within the uncertainties associated with their measurement and calculation. This work has demonstrated that MCNP-based electron transport simulations are accurate enough for determining the dosimetry parameters of the indigenously developed (32)P patch sources for contact brachytherapy applications. PMID:26086681

  6. Gamma spectrometry efficiency calibration using Monte Carlo methods to measure radioactivity of 137Cs in food samples.

    PubMed

    Alrefae, T

    2014-12-01

    A simple method of efficiency calibration for gamma spectrometry was performed. This method, which focused on measuring the radioactivity of (137)Cs in food samples, was based on Monte Carlo simulations available in the free-of-charge toolkit GEANT4. Experimentally, the efficiency values of a high-purity germanium detector were calculated for three reference materials representing three different food items. These efficiency values were compared with their counterparts produced by a computer code that simulated experimental conditions. Interestingly, the output of the simulation code was in acceptable agreement with the experimental findings, thus validating the proposed method. PMID:24214912
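    The underlying calibration arithmetic is simple: the full-energy-peak efficiency is the ratio of net peak counts to the number of photons emitted during the measurement, and the same relation is inverted to assay an unknown sample. A minimal sketch (function names are ours; 0.851 is the 661.7 keV gamma emission probability of 137Cs):

```python
def peak_efficiency(net_counts, activity_bq, emission_prob, live_time_s):
    """Full-energy-peak efficiency from a reference (calibration) sample:
    counts detected divided by photons emitted during the live time."""
    expected = activity_bq * emission_prob * live_time_s
    return net_counts / expected

def activity_from_counts(net_counts, efficiency, emission_prob, live_time_s):
    """Invert the relation to assay an unknown sample, e.g. 137Cs in food."""
    return net_counts / (efficiency * emission_prob * live_time_s)
```

    For example, a 100 Bq reference counted for 3600 s with emission probability 0.851 and 6127.2 net peak counts yields an efficiency of 0.02.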

  7. Local-states method for the calculation of free energies in Monte Carlo simulations of lattice models

    NASA Astrophysics Data System (ADS)

    Schlijper, A. G.; van Bergen, A. R. D.; Smit, B.

    1990-01-01

    We present and demonstrate an accurate, reliable, and computationally cheap method for the calculation of free energies in Monte Carlo simulations of lattice models. Even in the critical region it yields good results with comparatively short simulation runs. The method combines upper and lower bounds on the thermodynamic-limit entropy density to yield not only an accurate estimate of the free energy but a bound on the possible error as well. The method is demonstrated on the two- and three-dimensional Ising models and the three-dimensional, three-state Potts model.
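    The entropy-bound machinery itself is beyond a short example, but the simulations it post-processes are ordinary Metropolis runs on lattice models. For orientation, a generic 2-D Ising Metropolis sampler (a standard sampler, not the local-states method of the abstract):

```python
import math
import random

def ising_metropolis(L=8, beta=0.3, sweeps=200, seed=0):
    """Minimal 2-D Ising Metropolis sampler with periodic boundaries.
    Returns the energy per spin of the final configuration."""
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        # sum of the four nearest neighbours
        nb = s[(i+1) % L][j] + s[(i-1) % L][j] + s[i][(j+1) % L] + s[i][(j-1) % L]
        dE = 2 * s[i][j] * nb                 # energy change of a spin flip
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[i][j] *= -1                     # Metropolis acceptance
    # energy per spin, counting each bond once
    E = 0
    for i in range(L):
        for j in range(L):
            E -= s[i][j] * (s[(i+1) % L][j] + s[i][(j+1) % L])
    return E / (L * L)
```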

  8. Improvements and considerations for size distribution retrieval from small-angle scattering data by Monte Carlo methods

    PubMed Central

    Pauw, Brian R.; Pedersen, Jan Skov; Tardif, Samuel; Takata, Masaki; Iversen, Bo B.

    2013-01-01

    Monte Carlo (MC) methods, based on random updates and the trial-and-error principle, are well suited to retrieve form-free particle size distributions from small-angle scattering patterns of non-interacting low-concentration scatterers such as particles in solution or precipitates in metals. Improvements are presented to existing MC methods, such as a non-ambiguous convergence criterion, nonlinear scaling of contributions to match their observability in a scattering measurement, and a method for estimating the minimum visibility threshold and uncertainties on the resulting size distributions. PMID:23596341

  9. Development of CT scanner models for patient organ dose calculations using Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Gu, Jianwei

    There is a serious and growing concern about the CT dose delivered by diagnostic CT examinations or image-guided radiation therapy imaging procedures. To better understand and accurately quantify radiation dose due to CT imaging, Monte Carlo based CT scanner models are needed. This dissertation describes the development, validation, and application of detailed CT scanner models including a GE LightSpeed 16 MDCT scanner and two image-guided radiation therapy (IGRT) cone beam CT (CBCT) scanners, kV CBCT and MV CBCT. The modeling process considered the energy spectrum, beam geometry and movement, and bowtie filter (BTF). The methodology of validating the scanner models using reported CTDI values was also developed and implemented. Finally, the organ doses to different patients undergoing CT scans were obtained by integrating the CT scanner models with anatomically realistic patient phantoms. The tube current modulation (TCM) technique was also investigated for dose reduction. It was found that for RPI-AM, the thyroid, kidneys and thymus received the largest doses of 13.05, 11.41 and 11.56 mGy/100 mAs from the chest, abdomen-pelvis and CAP scans, respectively, using 120 kVp protocols. For RPI-AF, the thymus, small intestine and kidneys received the largest doses of 10.28, 12.08 and 11.35 mGy/100 mAs from the chest, abdomen-pelvis and CAP scans, respectively, using 120 kVp protocols. The dose to the fetus of the 3-month pregnant patient phantom was 0.13 mGy/100 mAs and 0.57 mGy/100 mAs from the chest and kidney scans, respectively. For the chest scans of the 6-month and 9-month pregnant patient phantoms, the fetal doses were 0.21 mGy/100 mAs and 0.26 mGy/100 mAs, respectively. For MDCT with TCM schemas, the fetal dose can be reduced by 14%-25%. To demonstrate the applicability of the method proposed in this dissertation for modeling CT scanners, an additional MDCT scanner was modeled and validated using measured CTDI values. These results demonstrated that the CT scanner models in this dissertation are versatile and accurate tools for estimating dose to different patient phantoms undergoing various CT procedures. The organ doses from kV and MV CBCT were also calculated. The dissertation finally summarizes areas for future research, including further validation and application of MV CBCT, dose-reporting software, and image-dose correlation studies.

  10. Nanothermodynamics of large iron clusters by means of a flat histogram Monte Carlo method.

    PubMed

    Basire, M; Soudan, J-M; Angelié, C

    2014-09-14

    The thermodynamics of iron clusters of various sizes, from 76 to 2452 atoms, typical of the catalyst particles used for carbon nanotubes growth, has been explored by a flat histogram Monte Carlo (MC) algorithm (called the σ-mapping), developed by Soudan et al. [J. Chem. Phys. 135, 144109 (2011), Paper I]. This method provides the classical density of states, gp(Ep) in the configurational space, in terms of the potential energy of the system, with good and well controlled convergence properties, particularly in the melting phase transition zone which is of interest in this work. To describe the system, an iron potential has been implemented, called "corrected EAM" (cEAM), which approximates the MEAM potential of Lee et al. [Phys. Rev. B 64, 184102 (2001)] with an accuracy better than 3 meV/at, and a five times larger computational speed. The main simplification concerns the angular dependence of the potential, with a small impact on accuracy, while the screening coefficients S(ij) are exactly computed with a fast algorithm. With this potential, ergodic explorations of the clusters can be performed efficiently in a reasonable computing time, at least in the upper half of the solid zone and above. Problems of ergodicity exist in the lower half of the solid zone but routes to overcome them are discussed. The solid-liquid (melting) phase transition temperature T(m) is plotted in terms of the cluster atom number N(at). The standard N(at)(-1/3) linear dependence (Pawlow law) is observed for N(at) >300, allowing an extrapolation up to the bulk metal at 1940 ±50 K. For N(at) <150, a strong divergence is observed compared to the Pawlow law. The melting transition, which begins at the surface, is stated by a Lindemann-Berry index and an atomic density analysis. Several new features are obtained for the thermodynamics of cEAM clusters, compared to the Rydberg pair potential clusters studied in Paper I. PMID:25217913
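    The Pawlow-law extrapolation used in the abstract, T_m(N_at) = T_bulk - c * N_at^(-1/3), reduces to a linear least-squares fit in the variable N_at^(-1/3). A small illustrative sketch (not the authors' fitting code):

```python
def pawlow_fit(n_atoms, t_melt):
    """Least-squares fit of T_m = T_bulk - c * N^(-1/3) (Pawlow law).
    Returns (T_bulk, c): the intercept is the extrapolation N -> infinity."""
    xs = [n ** (-1.0 / 3.0) for n in n_atoms]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(t_melt) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, t_melt))
    slope = sxy / sxx          # slope is -c for the Pawlow law
    t_bulk = my - slope * mx   # intercept = bulk melting temperature
    return t_bulk, -slope
```

    Fitting the melting temperatures of several cluster sizes against N^(-1/3) and reading off the intercept is exactly the extrapolation to the bulk metal reported above (1940 ±50 K for cEAM iron).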

  11. Nanothermodynamics of large iron clusters by means of a flat histogram Monte Carlo method

    SciTech Connect

    Basire, M.; Soudan, J.-M.; Angelié, C.

    2014-09-14

    The thermodynamics of iron clusters of various sizes, from 76 to 2452 atoms, typical of the catalyst particles used for carbon nanotubes growth, has been explored by a flat histogram Monte Carlo (MC) algorithm (called the σ-mapping), developed by Soudan et al. [J. Chem. Phys. 135, 144109 (2011), Paper I]. This method provides the classical density of states, g_p(E_p) in the configurational space, in terms of the potential energy of the system, with good and well controlled convergence properties, particularly in the melting phase transition zone which is of interest in this work. To describe the system, an iron potential has been implemented, called "corrected EAM" (cEAM), which approximates the MEAM potential of Lee et al. [Phys. Rev. B 64, 184102 (2001)] with an accuracy better than 3 meV/at, and a five times larger computational speed. The main simplification concerns the angular dependence of the potential, with a small impact on accuracy, while the screening coefficients S_ij are exactly computed with a fast algorithm. With this potential, ergodic explorations of the clusters can be performed efficiently in a reasonable computing time, at least in the upper half of the solid zone and above. Problems of ergodicity exist in the lower half of the solid zone but routes to overcome them are discussed. The solid-liquid (melting) phase transition temperature T_m is plotted in terms of the cluster atom number N_at. The standard N_at^(-1/3) linear dependence (Pawlow law) is observed for N_at > 300, allowing an extrapolation up to the bulk metal at 1940 ±50 K. For N_at < 150, a strong divergence is observed compared to the Pawlow law. The melting transition, which begins at the surface, is stated by a Lindemann-Berry index and an atomic density analysis. Several new features are obtained for the thermodynamics of cEAM clusters, compared to the Rydberg pair potential clusters studied in Paper I.

  12. A two-dimensional simulation of the GEC RF reference cell using a hybrid fluid-Monte Carlo method

    SciTech Connect

    Pak, H.; Riley, M.E.

    1992-12-01

    A two-dimensional fluid-Monte Carlo hybrid model is used to simulate the GEC reference cell. The 2-D model assumes azimuthal symmetry and accounts for the ground shield about the electrodes as well as the grounded chamber walls. The hybrid model consists of a Monte Carlo method for generating rates and a fluid model for transporting electrons and ions. In the fluid model, the electrons are transported using the continuity equation, and the electric fields are solved self-consistently using Poisson's equation. The Monte Carlo model transports electrons using the fluid-generated periodic electric field. The ionization rates are then obtained using the electron energy distribution function. An averaging method is used to speed the solution by transporting the ions in a time-averaged electric field with a corrected ambipolar-type diffusion. The simulation switches between the conventional and the averaging fluid model. Typically, the simulation runs from 10s to 100s of averaging fluid cycles before reentering the conventional fluid model for 10s of cycles. Speed increases of a factor of 100 are possible.

  13. Gaussian-Basis Monte Carlo Method for Numerical Study on Ground States of Itinerant and Strongly Correlated Electron Systems

    NASA Astrophysics Data System (ADS)

    Aimi, Takeshi; Imada, Masatoshi

    2007-08-01

    We examine the Gaussian-basis Monte Carlo (GBMC) method introduced by Corney and Drummond. This method is based on an expansion of the density-matrix operator ρ̂ in the coherent Gaussian-type operator basis Λ̂ and does not suffer from the minus-sign problem. The original method, however, often fails to reproduce the true ground state and causes systematic errors in calculated physical quantities because the samples are often trapped in metastable or symmetry-broken states. To overcome this difficulty, we combine the quantum-number projection scheme proposed by Assaad, Werner, Corboz, Gull, and Troyer with the importance sampling of the original GBMC method. This improvement allows us to carry out the importance sampling in the quantum-number-projected phase space. Comparisons with the previous quantum-number projection scheme indicate that, in our method, convergence to the ground state is accelerated, which makes it possible to extend the applicability and widen the range of tractable parameters of the GBMC method. The present scheme offers an efficient practical way of computation for strongly correlated electron systems beyond the range of system sizes, interaction strengths and lattice structures tractable by other computational methods such as the quantum Monte Carlo method.

  14. Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Sohn, Ilyoup

    During extreme-Mach-number reentry into Earth's atmosphere, spacecraft experience hypersonic non-equilibrium flow conditions that dissociate molecules and ionize atoms. Such conditions occur behind a shock wave, leading to high temperatures that have an adverse effect on the thermal protection system and radar communications. Since the electronic energy levels of gaseous species are strongly excited under high-Mach-number conditions, the radiative contribution to the total heat load can be significant. In addition, the radiative heat source within the shock layer may affect the internal energy distribution of dissociated and weakly ionized gas species and the number density of ablative species released from the surface of the vehicle. Due to the radiation, the total heat load on the heat-shield surface of the vehicle may be altered beyond mission tolerances. Therefore, the effect of radiation must be considered in the design process of spacecraft, and radiation analyses coupled with flow solvers have to be implemented to improve reliability during the vehicle design stage. As the first stage of radiation analyses coupled with gas dynamics, efficient databasing schemes for emission and absorption coefficients were developed to model radiation from hypersonic, non-equilibrium flows. For bound-bound transitions, spectral information including the line-center wavelength and assembled parameters for efficient calculation of emission and absorption coefficients is stored for typical air plasma species. Since the flow is in non-equilibrium, a rate-equation approach including both collisional and radiatively induced transitions was used to calculate the electronic state populations, assuming quasi-steady state (QSS). The Voigt line shape function was assumed for modeling the line-broadening effect. The accuracy and efficiency of the databasing scheme were examined by comparing results of the databasing scheme with those of NEQAIR for the Stardust flowfield.
An accuracy of approximately 1 % was achieved with an efficiency about three times faster than the NEQAIR code. To perform accurate and efficient analyses of chemically reacting flowfield - radiation interactions, the direct simulation Monte Carlo (DSMC) and the photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield - radiation coupling from transitional to peak heating freestream conditions. The non-catalytic and fully catalytic surface conditions were modeled and good agreement of the stagnation-point convective heating between DSMC and continuum fluid dynamics (CFD) calculation under the assumption of fully catalytic surface was achieved. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC - FV-PMC simulations with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium are presented. It is found that except for the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both heavy particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC - FV-PMC coupled simulations are compared with the previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport. The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rate in the excitation/de-excitation processes and the radiative transport equation (RTE) must be coupled simultaneously to account for non-local effects. 
The QSS model is presented to predict the electronic state populations of radiating gas species taking into account non-local radiation. The definition of the escape factor which is dependent on the incoming radiative intensity from over all directions is presented. The effect of the escape factor on the distribution of electronic state populations of the atomic N and O radiating species is examined in a highly non-equilibrium flow condition using DSMC and PMC methods and the corresponding change of the radiative heat flux due to the non-local radiation is also investigated.

  15. Stochastic method for accommodation of equilibrating basins in kinetic Monte Carlo simulations

    SciTech Connect

    Van Siclen, Clinton D

    2007-02-01

    A computationally simple way to accommodate "basins" of trapping states in standard kinetic Monte Carlo simulations is presented. By assuming the system is effectively equilibrated in the basin, the residence time (time spent in the basin before escape) and the probabilities for transition to states outside the basin may be calculated. This is demonstrated for point defect diffusion over a periodic grid of sites containing a complex basin.
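    Once effective escape rates out of an equilibrated basin are known, the basin is handled like any other state in standard kinetic Monte Carlo: the residence time is exponentially distributed with the total rate, and the exit channel is drawn in proportion to its rate. A sketch of that standard KMC step (not the paper's derivation of the basin rates themselves):

```python
import math
import random

def basin_escape(escape_rates, rng):
    """Sample one KMC move out of an equilibrated basin.
    escape_rates[i] is the effective rate of exit channel i.
    Returns (residence_time, chosen_channel)."""
    ktot = sum(escape_rates)
    dt = -math.log(rng.random()) / ktot   # exponential residence time
    r = rng.random() * ktot               # pick exit proportional to rate
    acc = 0.0
    for i, k in enumerate(escape_rates):
        acc += k
        if r <= acc:
            return dt, i
    return dt, len(escape_rates) - 1      # guard against rounding
```

    With rates [1.0, 3.0], the second channel should be chosen about 75% of the time, and the mean residence time approaches 1/4.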

  16. The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units

    SciTech Connect

    Hall, Clifford; Ji, Weixiao; Blaisten-Barojas, Estela (School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030)

    2014-02-01

    We present a CPU-GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU-GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU-GPU duets. -- Highlights: •We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU-GPU duet. •The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU-GPU implementation. •Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles. •The testbed involves a polymeric system of oligopyrroles in the condensed phase. •The CPU-GPU parallelization includes dipole-dipole and Mie-Jones classic potentials.

  17. Monte Carlo method based radiative transfer simulation of stochastic open forest generated by circle packing application

    NASA Astrophysics Data System (ADS)

    Jin, Shengye; Tamura, Masayuki

    2013-10-01

    The Monte Carlo Ray Tracing (MCRT) method is a versatile tool for simulating the radiative transfer regime of the Solar-Atmosphere-Landscape system. It can be used to compute the radiation distribution over a complex landscape configuration such as a forest area. Because it is robust to changes in the complexity of the 3-D scene, the MCRT method is also employed to simulate the canopy radiative transfer regime as a validation source for other radiative transfer models. In MCRT modeling of vegetation, one basic step is setting up the canopy scene. 3-D scanning can represent canopy structure very accurately but is time consuming. A botanical growth function can model the growth of a single tree but cannot express the interaction among trees. The L-system is also a functionally controlled tree-growth model, but it requires large amounts of computing memory and, in addition, models only the current tree pattern rather than tree growth while the radiative transfer regime is simulated. It is therefore much more practical to use regular solids such as ellipsoids, cones and cylinders to represent single canopies. Considering the allelopathy phenomenon seen in some open-forest optical images, each tree repels other trees within its own 'domain'. Based on this assumption, a stochastic circle-packing algorithm is developed in this study to generate the 3-D canopy scene. The canopy coverage (%) and the number of trees (N) in the 3-D scene are declared first, similar to a random open-forest image. Each canopy radius (rc) is then generated randomly, and the circle centers are placed on the XY-plane such that the circles are kept separate from each other by the circle-packing algorithm. To model the individual trees, Ishikawa's regressive tree-growth model is employed to set the tree parameters, including DBH (dt) and tree height (H). However, the relationship between canopy height (Hc) and trunk height (Ht) is unclear to us, so the ratio between Hc and Ht is assumed to be a random number in the interval from 2.0 to 3.0. De Wit's spherical leaf-angle distribution function is used within the canopy for acceleration. Finally, the open-forest albedo is simulated by the MCRT method. The MCRT algorithm of this study is summarized as follows: (1) initialize each photon with a position (r0), source direction (Ω0) and intensity (I0); (2) sample the free path (s) of the photon in the state (r', Ω', I') within the canopy; (3) compute the new position of the photon, r = r' + sΩ'; (4) determine the new scattering direction (Ω) after the collision at r and compute the new intensity I = ΥL(ΩL, Ω'→Ω)I'; (5) accumulate the intensity I of a photon escaping through the top boundary of the 3-D scene, otherwise repeat from step (2) until I falls below a threshold; (6) repeat from step (1) for each photon. The model is tested on four different simulated open forests and its effectiveness is demonstrated in detail.
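    The photon life cycle in steps (1)-(6) of the abstract can be condensed into a toy 1-D slab version: sample a free path, move, scatter or absorb, and tally photons escaping the top boundary. All parameter values below are illustrative assumptions, not values from the study:

```python
import math
import random

def slab_albedo(mu_t=1.0, albedo=0.9, thickness=2.0, n_photons=20000, seed=0):
    """1-D toy version of the MCRT loop: launch photons downward, sample
    free paths s = -ln(u)/mu_t, rescatter isotropically with the given
    single-scattering albedo, and count photons escaping the top boundary."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                     # depth and direction cosine (down = +1)
        while True:
            s = -math.log(rng.random()) / mu_t
            z += s * mu
            if z < 0.0:
                escaped += 1                 # left through the top boundary
                break
            if z > thickness:
                break                        # passed out of the bottom
            if rng.random() > albedo:
                break                        # absorbed at the collision
            mu = rng.uniform(-1.0, 1.0)      # isotropic rescattering
    return escaped / n_photons
```

    Tracking full 3-D positions, canopy geometry and the leaf scattering phase function ΥL turns this skeleton into the scheme of the abstract; the loop structure is the same.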

  18. A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison

    NASA Astrophysics Data System (ADS)

    Jaboulay, J.-C.; Damian, F.; Douce, S.; Lopez, F.; Guenaut, C.; Aggery, A.; Poinot-Salanon, C.

    2014-06-01

    Physical analyses of LWR potential performance with regard to fuel utilization require substantial work dedicated to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology give the opportunity to perform the validation of these models on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we use the Monte Carlo transport code TRIPOLI-4® to describe a whole 3D large-scale and highly heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4® in a relevant PWR core configuration. To this end, a 3D pin-by-pin model with a consistent number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at the equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile (depleted uranium) zones. Furthermore, a tight-pitch lattice is selected (to increase the conversion of 238U into 239Pu), which leads to a harder neutron spectrum compared to a standard PWR assembly. Under these conditions two main subjects are discussed: the Monte Carlo variance calculation and the assessment of the two-energy-group diffusion operator for the core calculation.

  19. Application of dose kernel calculation using a simplified Monte Carlo method to treatment plan for scanned proton beams.

    PubMed

    Mizutani, Shohei; Takada, Yoshihisa; Kohno, Ryosuke; Hotta, Kenji; Tansho, Ryohei; Akimoto, Tetsuo

    2016-01-01

    Full Monte Carlo (FMC) calculation of dose distributions has been recognized to have superior accuracy compared with the pencil beam algorithm (PBA). However, since FMC methods require long calculation times, it is difficult to apply them to routine treatment planning at present. To improve the situation, a simplified Monte Carlo (SMC) method has been introduced into the dose kernel calculation applicable to the dose optimization procedure for proton pencil beam scanning. We have evaluated the accuracy of the SMC calculation by comparing a dose kernel calculated using the SMC method with one calculated using the FMC method in an inhomogeneous phantom. The dose distribution obtained by the SMC method was in good agreement with that obtained by the FMC method. To assess the usefulness of the SMC calculation in clinical situations, we compared results of dose calculations using the SMC with those using the PBA method for three clinical cases of tumor treatment. The dose distributions calculated with the PBA dose kernels appear to be homogeneous in the planning target volumes (PTVs). In reality, the dose distributions calculated with the SMC dose kernels using the spot weights optimized with the PBA method are largely inhomogeneous in the PTVs, while those with the spot weights optimized with the SMC method are moderately homogeneous in the PTVs. Calculation using the SMC method is faster than that using GEANT4 by three orders of magnitude. In addition, the graphics processing unit (GPU) boosts the calculation speed by 13 times for treatment planning using the SMC method. Hence, the SMC method will be applicable to routine clinical treatment planning for reproduction of complex dose distributions more accurately than the PBA method, in a reasonably short time, by use of the GPU-based calculation engine. PMID:27074456

  20. Bootstrapping & Separable Monte Carlo Simulation Methods Tailored for Efficient Assessment of Probability of Failure of Dynamic Systems

    NASA Astrophysics Data System (ADS)

    Jehan, Musarrat

    The response of a dynamic system is random. There is randomness in both the applied loads and the strength of the system. Therefore, to account for the uncertainty, the safety of the system must be quantified using its probability of survival (reliability). Monte Carlo Simulation (MCS) is a widely used method for probabilistic analysis because of its robustness. However, a challenge in reliability assessment using MCS is that the high computational cost limits the accuracy of MCS. Haftka et al. [2010] developed an improved sampling technique for reliability assessment called separable Monte Carlo (SMC) that can significantly increase the accuracy of estimation without increasing the cost of sampling. However, this method was applied to time-invariant problems involving two random variables only. This dissertation extends SMC to random vibration problems with multiple random variables. This research also develops a novel method for estimation of the standard deviation of the probability of failure of a structure under static or random vibration. The method is demonstrated on quarter car models and a wind turbine. The proposed method is validated using repeated standard MCS.
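    The separable MC idea exploits the independence of load and capacity: instead of pairing the i-th load sample with the i-th capacity sample, every load sample is compared against every capacity sample, so N + M samples yield N×M failure checks. A minimal sketch under that independence assumption (not the dissertation's extension to random vibration):

```python
import random

def crude_mcs(loads, caps):
    """Standard MCS: pair the i-th load sample with the i-th capacity sample."""
    n = min(len(loads), len(caps))
    return sum(l > c for l, c in zip(loads, caps)) / n

def separable_mcs(loads, caps):
    """Separable MCS: because load and capacity are independent, every
    load sample can be compared against every capacity sample, reusing
    the same samples for many more failure checks."""
    fails = sum(l > c for l in loads for c in caps)
    return fails / (len(loads) * len(caps))
```

    For uniform loads on (0, 1) and uniform capacities on (0.5, 1.5), the exact failure probability P(load > capacity) is 0.125, which both estimators approach; the separable estimator does so with lower variance for the same sample count.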

  1. Weak second-order splitting schemes for Lagrangian Monte Carlo particle methods for the composition PDF/FDF transport equations

    NASA Astrophysics Data System (ADS)

    Wang, Haifeng; Popov, Pavel P.; Pope, Stephen B.

    2010-03-01

    We study a class of methods for the numerical solution of the system of stochastic differential equations (SDEs) that arises in the modeling of turbulent combustion, specifically in the Monte Carlo particle method for the solution of the model equations for the composition probability density function (PDF) and the filtered density function (FDF). This system consists of an SDE for particle position and a random differential equation for particle composition. The numerical methods considered advance the solution in time with (weak) second-order accuracy with respect to the time step size. The four primary contributions of the paper are: (i) establishing that the coefficients in the particle equations can be frozen at the mid-time (while preserving second-order accuracy), (ii) examining the performance of three existing schemes for integrating the SDEs, (iii) developing and evaluating different splitting schemes (which treat particle motion, reaction and mixing on different sub-steps), and (iv) developing the method of manufactured solutions (MMS) to assess the convergence of Monte Carlo particle methods. Tests using MMS confirm the second-order accuracy of the schemes. In general, the use of frozen coefficients reduces the numerical errors. Otherwise no significant differences are observed in the performance of the different SDE schemes and splitting schemes.
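    Freezing coefficients at the mid-time is the stochastic analogue of the deterministic explicit midpoint rule, whose second-order convergence is easy to check numerically. The following is a deterministic surrogate only, not the SDE or splitting schemes of the paper:

```python
import math

def midpoint_solve(f, x0, t_end, n):
    """Explicit midpoint rule for dx/dt = f(t, x): coefficients are
    evaluated at the mid-time of each step, giving second-order accuracy
    (a deterministic analogue of freezing SDE coefficients at mid-time)."""
    dt = t_end / n
    x, t = x0, 0.0
    for _ in range(n):
        x_half = x + 0.5 * dt * f(t, x)          # predictor to the mid-time
        x = x + dt * f(t + 0.5 * dt, x_half)     # full step with mid-time slope
        t += dt
    return x
```

    For dx/dt = -x, halving the step size reduces the error by roughly a factor of four, the signature of a second-order scheme.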

  2. Direct simulation Monte Carlo method for the Uehling-Uhlenbeck-Boltzmann equation.

    PubMed

    Garcia, Alejandro L; Wagner, Wolfgang

    2003-11-01

    In this paper we describe a direct simulation Monte Carlo algorithm for the Uehling-Uhlenbeck-Boltzmann equation in terms of Markov processes. This provides a unifying framework for both the classical Boltzmann case as well as the Fermi-Dirac and Bose-Einstein cases. We establish the foundation of the algorithm by demonstrating its link to the kinetic equation. By numerical experiments we study its sensitivity to the number of simulation particles and to the discretization of the velocity space, when approximating the steady-state distribution. PMID:14682907

  3. Dynamic light scattering Monte Carlo: a method for simulating time-varying dynamics for ordered motion in heterogeneous media.

    PubMed

    Davis, Mitchell A; Dunn, Andrew K

    2015-06-29

    Few methods exist that can accurately handle dynamic light scattering in the regime between single and highly multiple scattering. We demonstrate dynamic light scattering Monte Carlo (DLS-MC), a numerical method by which the electric field autocorrelation function may be calculated for arbitrary geometries if the optical properties and particle motion are known or assumed. DLS-MC requires no assumptions regarding the number of scattering events, the final form of the autocorrelation function, or the degree of correlation between scattering events. Furthermore, the method is capable of rapidly determining the effect of particle motion changes on the autocorrelation function in heterogeneous samples. We experimentally validated the method and demonstrated that the simulations match both the expected form and the experimental results. We also demonstrate the perturbation capabilities of the method by calculating the autocorrelation function of flow in a representation of mouse microvasculature and determining the sensitivity to flow changes as a function of depth. PMID:26191723

  4. Direct Monte Carlo simulation methods for nonreacting and reacting systems at fixed total internal energy or enthalpy.

    PubMed

    Smith, William R; Lísal, Martin

    2002-07-01

    A Monte Carlo computer simulation method is presented for directly performing property predictions for fluid systems at fixed total internal energy, U, or enthalpy, H, using a molecular-level system model. The method is applicable to both nonreacting and reacting systems. Potential applications are to (1) adiabatic flash (Joule-Thomson expansion) calculations for nonreacting pure fluids and mixtures at fixed (H,P), where P is the pressure; and (2) adiabatic (flame-temperature) calculations at fixed (U,V) or (H,P), where V is the system volume. The details of the method are presented. The method is compared with existing related simulation methodologies for nonreacting systems, one of which addresses the problem involving fixing portions of U or of H, and one of which solves the problem at fixed H considered here by means of an indirect approach. We illustrate the method by an adiabatic calculation involving the ammonia synthesis reaction. PMID:12241338

  5. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
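
One of the fundamentals listed above, random sampling for transport, can be sketched in a few lines: free-flight distances in a purely absorbing slab are sampled by inverting the exponential distribution, and a tally counts transmitted particles. This is a generic textbook illustration, not RACER's implementation; the cross-section and thickness values are arbitrary.

```python
import numpy as np

def transmission(sigma_t, thickness, n_particles, seed=0):
    """Fraction of particles crossing a purely absorbing slab,
    estimated by sampling exponential free-flight distances
    via inversion: s = -ln(1 - xi) / sigma_t, xi ~ U[0, 1)."""
    rng = np.random.default_rng(seed)
    s = -np.log(1.0 - rng.random(n_particles)) / sigma_t
    return float(np.mean(s > thickness))  # tally of transmitted fraction

est = transmission(sigma_t=1.0, thickness=2.0, n_particles=1_000_000)
analytic = np.exp(-2.0)  # exact transmission exp(-sigma_t * thickness)
```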

  6. Fast Protein Loop Sampling and Structure Prediction Using Distance-Guided Sequential Chain-Growth Monte Carlo Method

    PubMed Central

    Tang, Ke; Zhang, Jinfeng; Liang, Jie

    2014-01-01

    Loops in proteins are flexible regions connecting regular secondary structures. They are often involved in protein functions through interacting with other molecules. The irregularity and flexibility of loops make their structures difficult to determine experimentally and challenging to model computationally. Conformation sampling and energy evaluation are the two key components in loop modeling. We have developed a new method for loop conformation sampling and prediction based on a chain growth sequential Monte Carlo sampling strategy, called Distance-guided Sequential chain-Growth Monte Carlo (DiSGro). With an energy function designed specifically for loops, our method can efficiently generate high quality loop conformations with low energy that are enriched with near-native loop structures. The average minimum global backbone RMSD for 1,000 conformations of 12-residue loops is Å, with a lowest energy RMSD of Å, and an average ensemble RMSD of Å. A novel geometric criterion is applied to speed up calculations. The computational cost of generating 1,000 conformations for each of the x loops in a benchmark dataset is only about cpu minutes for 12-residue loops, compared to ca cpu minutes using the FALCm method. Test results on benchmark datasets show that DiSGro performs comparably or better than previous successful methods, while requiring far less computing time. DiSGro is especially effective in modeling longer loops (– residues). PMID:24763317
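
The chain-growth strategy underlying DiSGro can be conveyed with a much simpler classic relative: Rosenbluth-style growth of a 2D self-avoiding walk, where each monomer is placed uniformly among unoccupied neighbor sites and a weight corrects for the biased growth. This is a generic sketch under simplified assumptions, not the DiSGro algorithm itself.

```python
import random

def grow_chain(n, rng):
    """Grow a 2D self-avoiding walk one monomer at a time
    (minimal Rosenbluth chain-growth move). Returns (chain, weight),
    or (None, 0.0) if growth gets trapped in a dead end."""
    chain = [(0, 0)]
    occupied = {(0, 0)}
    weight = 1.0
    for _ in range(n - 1):
        x, y = chain[-1]
        free = [p for p in ((x+1, y), (x-1, y), (x, y+1), (x, y-1))
                if p not in occupied]
        if not free:
            return None, 0.0   # attrition: discard trapped chains
        weight *= len(free)    # Rosenbluth weight corrects the sampling bias
        nxt = rng.choice(free)
        chain.append(nxt)
        occupied.add(nxt)
    return chain, weight

rng = random.Random(42)
samples = [grow_chain(12, rng) for _ in range(2000)]
grown = [(c, w) for c, w in samples if c is not None]
```

Distance-guided methods like DiSGro replace the uniform choice among free sites with a placement distribution biased toward closing the loop at the correct anchor, which is what makes long-loop sampling efficient.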

  7. Response of thermoluminescent dosimeters to photons simulated with the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Moralles, M.; Guimarães, C. C.; Okuno, E.

    2005-06-01

    Personal monitors composed of thermoluminescent dosimeters (TLDs) made of natural fluorite (CaF2:NaCl) and lithium fluoride (Harshaw TLD-100) were exposed to gamma and X rays of different qualities. The GEANT4 radiation transport Monte Carlo toolkit was employed to calculate the energy depth deposition profile in the TLDs. X-ray spectra of the ISO/4037-1 narrow-spectrum series, with peak voltage (kVp) values in the range 20-300 kV, were obtained by simulating a Philips MG-450 X-ray tube together with the recommended filters. A realistic photon distribution of a 60Co radiotherapy source was taken from results of Monte Carlo simulations found in the literature. Comparison between simulated and experimental results revealed that the attenuation of emitted light in the readout process of the fluorite dosimeter must be taken into account, while this effect is negligible for lithium fluoride. Differences between results obtained by heating the dosimeter from the irradiated side and from the opposite side allowed the determination of the light attenuation coefficient for CaF2:NaCl (mass proportion 60:40) as 2.2 mm⁻¹.

  8. Accounting for inhomogeneous broadening in nano-optics by electromagnetic modeling based on Monte Carlo methods

    PubMed Central

    Gudjonson, Herman; Kats, Mikhail A.; Liu, Kun; Nie, Zhihong; Kumacheva, Eugenia; Capasso, Federico

    2014-01-01

    Many experimental systems consist of large ensembles of uncoupled or weakly interacting elements operating as a single whole; this is particularly the case for applications in nano-optics and plasmonics, including colloidal solutions, plasmonic or dielectric nanoparticles on a substrate, antenna arrays, and others. In such experiments, measurements of the optical spectra of ensembles will differ from measurements of the independent elements as a result of small variations from element to element (also known as polydispersity) even if these elements are designed to be identical. In particular, sharp spectral features arising from narrow-band resonances will tend to appear broader and can even be washed out completely. Here, we explore this effect of inhomogeneous broadening as it occurs in colloidal nanopolymers comprising self-assembled nanorod chains in solution. Using a technique combining finite-difference time-domain simulations and Monte Carlo sampling, we predict the inhomogeneously broadened optical spectra of these colloidal nanopolymers and observe significant qualitative differences compared with the unbroadened spectra. The approach combining an electromagnetic simulation technique with Monte Carlo sampling is widely applicable for quantifying the effects of inhomogeneous broadening in a variety of physical systems, including those with many degrees of freedom that are otherwise computationally intractable. PMID:24469797

  9. The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units

    NASA Astrophysics Data System (ADS)

    Hall, Clifford; Ji, Weixiao; Blaisten-Barojas, Estela

    2014-02-01

    We present a CPU-GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU-GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU-GPU duets.
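
Stripped of the GPU machinery, the MMC algorithm at the core of the engine above is compact. The sketch below is a minimal single-variable CPU illustration (not the authors' molecular engine): Metropolis sampling of the Boltzmann distribution exp(-beta*U) for a harmonic well, with the potential and step size chosen arbitrarily.

```python
import numpy as np

def metropolis(n_steps, beta=1.0, step=1.0, seed=0):
    """Metropolis Monte Carlo sampling of exp(-beta * U(x))
    with U(x) = x**2 / 2 (harmonic well)."""
    rng = np.random.default_rng(seed)
    x = 0.0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        x_new = x + step * rng.uniform(-1, 1)   # symmetric trial move
        dU = 0.5 * (x_new**2 - x**2)
        # accept with probability min(1, exp(-beta * dU))
        if dU <= 0 or rng.random() < np.exp(-beta * dU):
            x = x_new
        samples[i] = x
    return samples

s = metropolis(200_000)
# for beta = 1 the sampled <x^2> should approach kT/k = 1
```

In the GPU setting described in the abstract, the inner loop is replaced by many such trial moves evaluated in parallel on molecular data that stays resident in GPU memory.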

  10. On the use of chemical reaction rates with discrete internal energies in the direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Gimelshein, S. F.; Gimelshein, N. E.; Levin, D. A.; Ivanov, M. S.; Wysong, I. J.

    2004-07-01

    The conventional chemical reaction models of the direct simulation Monte Carlo method, developed under the assumption of continuous rotational or vibrational modes, are shown to exhibit systematic errors when used with discrete energy modes. A reaction model is proposed that is consistent with the use of discrete energy distributions for the rotational and vibrational modes, and is equally applicable to diatomic and polyatomic systems. The sensitivity of the model to variations of different reaction rate parameters is examined. The revised chemical reaction model is then applied to the modeling of hypersonic flows over spacecraft in the Martian and Earth atmospheres.

  11. Comment on "Simulation of a two-dimensional Rayleigh-Benard system using the direct simulation Monte Carlo method"

    SciTech Connect

    Garcia, A.L.; Baras, F.; Mansour, M.M.

    1995-04-01

    In a recent paper, Watanabe, Kaburaki, and Yokokawa [Phys. Rev. E 49, 4060 (1994)] used a direct simulation Monte Carlo method to study Rayleigh-Benard convection. They reported that, using stress-free boundary conditions, the onset of convection in the simulation occurred at a Rayleigh number much larger than the critical Rayleigh number predicted by a linear stability analysis. We show that the source of the discrepancy is their omission of a temperature jump effect in the calculation of the Rayleigh number.

  12. Prognostics of slurry pumps based on a moving-average wear degradation index and a general sequential Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tse, Peter W.

    2015-05-01

    Slurry pumps are commonly used in oil-sand mining for pumping mixtures of abrasive liquids and solids. These operations cause constant wear of slurry pump impellers, which results in the breakdown of the slurry pumps. This paper develops a prognostic method for estimating remaining useful life of slurry pump impellers. First, a moving-average wear degradation index is proposed to assess the performance degradation of the slurry pump impeller. Secondly, the state space model of the proposed health index is constructed. A general sequential Monte Carlo method is employed to derive the parameters of the state space model. The remaining useful life of the slurry pump impeller is estimated by extrapolating the established state space model to a specified alert threshold. Data collected from an industrial oil sand pump were used to validate the developed method. The results show that the accuracy of the developed method improves as more data become available.
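
The general sequential Monte Carlo machinery used above for state-space degradation models can be sketched as a bootstrap particle filter. The toy model below (a Gaussian random walk observed in noise, with made-up noise levels) stands in for the pump health index; it is an illustrative sketch, not the authors' model.

```python
import numpy as np

def bootstrap_filter(obs, n_particles, q, r, rng):
    """Bootstrap particle filter for the state-space model
    x_t = x_{t-1} + N(0, q),  y_t = x_t + N(0, r)."""
    particles = rng.normal(0.0, 1.0, n_particles)
    means = []
    for y in obs:
        # propagate particles through the state transition
        particles = particles + rng.normal(0.0, np.sqrt(q), n_particles)
        # weight by the observation likelihood
        w = np.exp(-0.5 * (y - particles) ** 2 / r)
        w /= w.sum()
        means.append(np.sum(w * particles))   # posterior mean estimate
        # multinomial resampling to avoid weight degeneracy
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return np.array(means)

rng = np.random.default_rng(1)
true_x = np.cumsum(rng.normal(0, 0.1, 50))   # latent degradation trajectory
obs = true_x + rng.normal(0, 0.3, 50)        # noisy health-index measurements
est = bootstrap_filter(obs, n_particles=2000, q=0.01, r=0.09, rng=rng)
```

Remaining-useful-life estimation then amounts to extrapolating the filtered state forward until it crosses the alert threshold.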

  13. Multi-level Monte Carlo finite volume methods for uncertainty quantification of acoustic wave propagation in random heterogeneous layered medium

    NASA Astrophysics Data System (ADS)

    Mishra, S.; Schwab, Ch.; Šukys, J.

    2016-05-01

    We consider the very challenging problem of efficient uncertainty quantification for acoustic wave propagation in a highly heterogeneous, possibly layered, random medium, characterized by possibly anisotropic, piecewise log-exponentially distributed Gaussian random fields. A multi-level Monte Carlo finite volume method is proposed, along with a novel, bias-free upscaling technique that makes it possible to represent the input random fields, generated using spectral FFT methods, efficiently. Combined with a recently developed dynamic load balancing algorithm that scales to massively parallel computing architectures, the proposed method is able to robustly compute uncertainty for highly realistic random subsurface formations that can contain a very large number (millions) of sources of uncertainty. Numerical experiments, in both two and three space dimensions, illustrating the efficiency of the method are presented.
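
The multi-level idea itself can be conveyed with a much simpler stand-in than the finite volume setting above: a telescoping estimator for an SDE expectation, where each level runs coupled fine and coarse Euler-Maruyama paths sharing the same Brownian increments. This is a generic MLMC sketch (geometric Brownian motion, arbitrary parameters), not the authors' wave-propagation solver.

```python
import numpy as np

mu, sig, T = 0.05, 0.2, 1.0

def level_estimator(level, n_samples, rng):
    """Mean of P_f - P_c for coupled fine/coarse Euler-Maruyama paths of
    dX = mu*X dt + sig*X dW, X0 = 1; the coarse path reuses the fine
    Brownian increments (pairwise summed). Level 0 returns mean of P_0."""
    nf = 2 ** level
    dtf = T / nf
    dW = np.sqrt(dtf) * rng.standard_normal((n_samples, nf))
    xf = np.ones(n_samples)
    for i in range(nf):
        xf += mu * xf * dtf + sig * xf * dW[:, i]
    if level == 0:
        return xf.mean()
    xc = np.ones(n_samples)
    dWc = dW[:, 0::2] + dW[:, 1::2]   # coarse increments, step 2*dtf
    for i in range(nf // 2):
        xc += mu * xc * (2 * dtf) + sig * xc * dWc[:, i]
    return (xf - xc).mean()

rng = np.random.default_rng(0)
# telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]
estimate = sum(level_estimator(l, 40_000, rng) for l in range(6))
analytic = np.exp(mu * T)   # exact E[X_T] for GBM
```

The coupling makes the correction terms cheap to estimate accurately, so most samples can be spent on the coarse levels; in practice the per-level sample counts are chosen from the measured level variances rather than fixed as here.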

  14. Comparison of dose estimates using the buildup-factor method and a Baryon transport code (BRYNTRN) with Monte Carlo results

    NASA Technical Reports Server (NTRS)

    Shinn, Judy L.; Wilson, John W.; Nealy, John E.; Cucinotta, Francis A.

    1990-01-01

    Continuing efforts toward validating the buildup factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are the candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for cases with various shield thicknesses and various types of proton spectra. The results are found to be in reasonable agreement, but with some overestimation by the buildup factor method when the effect of neutron production in the shield is significant. A future improvement to include neutron coupling in the buildup factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual dose components, such as those from secondaries and heavy-particle recoils, is obtained between BRYNTRN and Monte Carlo results.

  15. Investigation of the spectral reflectance and bidirectional reflectance distribution function of sea foam layer by the Monte Carlo method.

    PubMed

    Ma, L X; Wang, F Q; Wang, C A; Wang, C C; Tan, J Y

    2015-11-20

    Spectral properties of sea foam greatly affect ocean color remote sensing and aerosol optical thickness retrieval from satellite observation. This paper presents a combined Mie theory and Monte Carlo method to investigate visible and near-infrared spectral reflectance and bidirectional reflectance distribution function (BRDF) of sea foam layers. A three-layer model of the sea foam is developed in which each layer is composed of large air bubbles coated with pure water. A pseudo-continuous model and Mie theory for coated spheres is used to determine the effective radiative properties of sea foam. The one-dimensional Cox-Munk surface roughness model is used to calculate the slope density functions of the wind-blown ocean surface. A Monte Carlo method is used to solve the radiative transfer equation. Effects of foam layer thickness, bubble size, wind speed, solar zenith angle, and wavelength on the spectral reflectance and BRDF are investigated. Comparisons between previous theoretical results and experimental data demonstrate the feasibility of our proposed method. Sea foam can significantly increase the spectral reflectance and BRDF of the sea surface. The absorption coefficient of seawater near the surface is not the only parameter that influences the spectral reflectance. Meanwhile, the effects of bubble size, foam layer thickness, and solar zenith angle also cannot be obviously neglected. PMID:26836550

  16. Cosmic ray ionization and dose at Mars: Benchmarking deterministic and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Norman, R. B.; Gronoff, G.; Mertens, C. J.

    2014-12-01

    The ability to evaluate the cosmic ray environment at Mars is of interest for future manned exploration. To support exploration, tools must be developed to accurately assess the radiation environment in both free space and on planetary surfaces. The primary tool NASA uses to quantify radiation exposure behind shielding materials is the space radiation transport code, HZETRN. In order to build confidence in HZETRN, code benchmarking against Monte Carlo radiation transport codes is often used. This work compares the dose calculations at Mars by HZETRN and the GEANT4 application, Planetocosmics. The dose at ground and the energy deposited in the atmosphere by galactic cosmic ray protons and alpha particles have been calculated for the Curiosity landing conditions. In addition, this work has considered Solar Energetic Particle events, which allows for a better understanding of the spectral form in the comparison. The results for protons and alpha particles show very good agreement between HZETRN and Planetocosmics.

  17. Torsional path integral Monte Carlo method for the quantum simulation of large molecules

    NASA Astrophysics Data System (ADS)

    Miller, Thomas F.; Clary, David C.

    2002-05-01

    A molecular application is introduced for calculating quantum statistical mechanical expectation values of large molecules at nonzero temperatures. The Torsional Path Integral Monte Carlo (TPIMC) technique applies an uncoupled winding number formalism to the torsional degrees of freedom in molecular systems. The internal energy of the molecules ethane, n-butane, n-octane, and enkephalin are calculated at standard temperature using the TPIMC technique and compared to the expectation values obtained using the harmonic oscillator approximation and a variational technique. All studied molecules exhibited significant quantum mechanical contributions to their internal energy expectation values according to the TPIMC technique. The harmonic oscillator approximation approach to calculating the internal energy performs well for the molecules presented in this study but is limited by its neglect of both anharmonicity effects and the potential coupling of intramolecular torsions.

  18. A method to perform multi-scale Monte Carlo simulations in the clinical setting.

    PubMed

    Lucido, Joseph John; Popescu, I Antoniu; Moiseenko, Vitali

    2015-09-01

    In order to model the track structure of clinical mega-voltage photon beams in a reasonable time, it is necessary to use a multi-scale approach incorporating a track-structure algorithm for the regions of interest and a condensed history algorithm for the rest of the geometry. This paper introduces a multi-scale Monte Carlo system, which is used to hand off particle trajectory information between the two algorithms. Since condensed history algorithms ignore electrons with energy below a fixed threshold and those electrons are important to the track structure on the micrometre scale, it is necessary to hand over all charged particles to the track-structure algorithm only in a volume that extends beyond the scoring volume. Additionally, the system is validated against experimental results for radio-isotope gamma spectra. PMID:26242976

  19. A Markov-Chain Monte-Carlo Based Method for Flaw Detection in Beams

    SciTech Connect

    Glaser, R E; Lee, C L; Nitao, J J; Hickling, T L; Hanley, W G

    2006-09-28

    A Bayesian inference methodology using a Markov Chain Monte Carlo (MCMC) sampling procedure is presented for estimating the parameters of computational structural models. This methodology combines prior information, measured data, and forward models to produce a posterior distribution for the system parameters of structural models that is most consistent with all available data. The MCMC procedure is based upon a Metropolis-Hastings algorithm that is shown to function effectively with noisy data, incomplete data sets, and mismatched computational nodes/measurement points. A series of numerical test cases based upon a cantilever beam is presented. The results demonstrate that the algorithm is able to estimate model parameters utilizing experimental data for the nodal displacements resulting from specified forces.
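
A stripped-down version of the Metropolis-Hastings procedure described above can be shown on a one-parameter stand-in for the beam model: inferring the slope of a linear response from noisy synthetic measurements under a flat prior. Model, data, and tuning values are all illustrative, not the report's cantilever-beam setup.

```python
import numpy as np

def log_posterior(k, x, y, noise_sd):
    """Gaussian likelihood for y = k*x + noise, flat prior on k."""
    r = y - k * x
    return -0.5 * np.sum(r ** 2) / noise_sd ** 2

def metropolis_hastings(x, y, noise_sd, n_steps, prop_sd=0.1, seed=0):
    rng = np.random.default_rng(seed)
    k = 0.0
    lp = log_posterior(k, x, y, noise_sd)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        k_new = k + prop_sd * rng.standard_normal()   # symmetric proposal
        lp_new = log_posterior(k_new, x, y, noise_sd)
        # Metropolis-Hastings acceptance (proposal ratio cancels)
        if np.log(rng.random()) < lp_new - lp:
            k, lp = k_new, lp_new
        chain[i] = k
    return chain

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 50)
y = 2.0 * x + rng.normal(0, 0.1, 50)     # synthetic data, true slope k = 2
chain = metropolis_hastings(x, y, noise_sd=0.1, n_steps=20_000)
posterior_mean = chain[5_000:].mean()    # discard burn-in
```

The posterior distribution, not just its mean, is the product of interest: its spread quantifies how well the data constrain the parameter, which is what makes the approach robust to noisy or incomplete measurement sets.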

  20. Monte Carlo assessment of soil moisture effect on high-energy thermal neutron capture gamma-ray by 14N.

    PubMed

    Pazirandeh, Ali; Azizi, Maryam; Farhad Masoudi, S

    2006-01-01

    Among many conventional techniques, nuclear techniques have been shown to be faster, more reliable, and more effective in detecting explosives. In the present work, neutrons from a 5 Ci Am-Be neutron source immersed in a water tank are captured by elements of the soil and of a landmine (TNT), namely (14)N, H, C, and O. The prompt capture gamma-ray spectrum taken by a NaI(Tl) scintillation detector shows the characteristic photopeaks of the elements in the soil and the landmine. In the high-energy region of the gamma-ray spectrum, besides the 10.829 MeV line of (15)N, the single escape (SE) and double escape (DE) peaks are unmistakable photopeaks, which make the detection of concealed explosives possible. The soil moderates neutrons as well as diffusing the thermal neutron flux. Among the elements in soil, silicon is the most abundant, and (29)Si emits a 10.607 MeV prompt capture gamma ray, which makes detection of the 10.829 MeV line difficult. Monte Carlo simulation was used to adjust the source-target-detector distances and the soil moisture content to yield the best result. We therefore applied MCNP4C to a configuration very close to the reality of a landmine concealed in soil. PMID:16081298

  1. Stochastic extension of the Lanczos method for nuclear shell-model calculations with variational Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Shimizu, Noritaka; Mizusaki, Takahiro; Kaneko, Kazunari

    2013-06-01

    We propose a new variational Monte Carlo (VMC) approach based on the Krylov subspace for large-scale shell-model calculations. A random walker in the VMC is formulated with the M-scheme representation, and samples a small number of configurations from a whole Hilbert space stochastically. This VMC framework is demonstrated in the shell-model calculations of 48Cr and 60Zn, and we discuss its relation to a small number of Lanczos iterations. By utilizing the wave function obtained by the conventional particle-hole-excitation truncation as an initial state, this VMC approach provides us with a sequence of systematically improved results.

  2. A hybrid Markov chain Monte Carlo method for generating permeability fields conditioned to multiwell pressure data and prior information

    SciTech Connect

    Bonet-Cunha, L.; Oliver, D.S.; Redner, R.A.

    1996-12-31

    In order to properly evaluate the uncertainty in reservoir performance predictions, it is necessary to construct and sample the a posteriori probability density functions for the rock property fields. In this work, the a posteriori probability density function is constructed based on prior means and variograms (covariance function) for log-permeability and multiwell pressure data. Within the context of sampling the probability density function, we argue that the notion of equally probable realizations is the wrong paradigm for reservoir characterization. If the simulation of Gaussian random fields with a known variogram is the objective, it is shown that the variogram should not be incorporated directly into the objective function if simulated annealing is applied either to sample the a posteriori probability density function or to estimate a global minimum of the associated objective function. It is shown that the hybrid Markov chain Monte Carlo method provides a way to explore more fully the set of plausible log-permeability fields and does not suffer from the high rejection rates of more standard Markov chain Monte Carlo methods.

  3. Calculation of the TLD700:LiF energy response from Ir-192 using novel Monte Carlo and empirical methods.

    PubMed

    Rijken, J D; Harris-Phillips, W; Lawson, J M

    2015-03-01

    Lithium fluoride thermoluminescent dosimeters (TLDs) exhibit a dependence on the energy of the radiation beam of interest, so they need to be carefully calibrated for different energy spectra if used for clinical radiation oncology beam dosimetry and quality assurance. TLD energy response was investigated for a specific set of TLD700:LiF(Mg,Ti) chips for a high dose rate (192)Ir brachytherapy source. A novel method of energy response calculation for (192)Ir was developed in which dose was determined through Monte Carlo modelling in Geant4. The TLD response was then measured experimentally. Results showed that TLD700 has a depth-dependent response in water, ranging from 1.170 ± 0.125 at 20 mm to 0.976 ± 0.043 at 50 mm (normalised to a nominal 6 MV beam response). The method of calibration and the Monte Carlo data developed through this study could be easily applied by other Medical Physics departments seeking to use TLDs for (192)Ir patient dosimetry or treatment planning system experimental verification. PMID:25663432

  4. Evaluation of Reaction Rate Theory and Monte Carlo Methods for Application to Radiation-Induced Microstructural Characterization

    SciTech Connect

    Stoller, Roger E; Golubov, Stanislav I; Becquart, C. S.; Domain, C.

    2007-08-01

    The multiscale modeling scheme encompasses models from the atomistic to the continuum scale. Phenomena at the mesoscale are typically simulated using reaction rate theory, Monte Carlo, or phase field models. These mesoscale models are appropriate for application to problems that involve intermediate length scales, and timescales from those characteristic of diffusion to long-term microstructural evolution (~s to years). Although the rate theory and Monte Carlo models can be used to simulate the same phenomena, some of the details are handled quite differently in the two approaches. Models employing the rate theory have been extensively used to describe radiation-induced phenomena such as void swelling and irradiation creep. The primary approximations in such models are time- and spatial averaging of the radiation damage source term, and spatial averaging of the microstructure into an effective medium. Kinetic Monte Carlo models can account for these spatial and temporal correlations; their primary limitation is the computational burden, which is related to the size of the simulation cell. A direct comparison of RT and object kinetic MC simulations has been made in the domain of point defect cluster dynamics modeling, which is relevant to the evolution (both nucleation and growth) of radiation-induced defect structures. The primary limitations of the OKMC model are computational. Even with modern computers, the maximum simulation cell size and the maximum dose (typically much less than 1 dpa) that can be simulated are limited. In contrast, even very detailed RT models can simulate microstructural evolution for doses up to 100 dpa or greater in clock times that are relatively short. Within the context of the effective medium, essentially any defect density can be simulated.
Overall, the agreement between the two methods is best for irradiation conditions which produce a high density of defects (lower temperature and higher displacement rate), and for materials that have a relatively high density of fixed sinks such as dislocations.

  5. Final Technical Report - Large Deviation Methods for the Analysis and Design of Monte Carlo Schemes in Physics and Chemistry - DE-SC0002413

    SciTech Connect

    Dupuis, Paul

    2014-03-14

    This proposal is concerned with applications of Monte Carlo to problems in physics and chemistry where rare events degrade the performance of standard Monte Carlo. One class of problems is concerned with computation of various aspects of the equilibrium behavior of some Markov process via time averages. The problem to be overcome is that rare events interfere with the efficient sampling of all relevant parts of phase space. A second class concerns sampling transitions between two or more stable attractors. Here, rare events do not interfere with the sampling of all relevant parts of phase space, but make Monte Carlo inefficient because of the very large number of samples required to obtain variance comparable to the quantity estimated. The project uses large deviation methods for the mathematical analyses of various Monte Carlo techniques, and in particular for algorithmic analysis and design. This is done in the context of relevant application areas, mainly from chemistry and biology.

  6. Effectiveness of the Monte Carlo method in stereotactic radiation therapy applied to quasi-homogenous brain tumors.

    PubMed

    Kang, Ki Mun; Jeong, Bae Kwon; Choi, Hoon Sik; Song, Jin Ho; Park, Byung-Do; Lim, Young Kyung; Jeong, Hojin

    2016-03-15

    This study aimed to evaluate the effectiveness of the Monte Carlo (MC) method in stereotactic radiotherapy for brain tumors. The difference in doses predicted by the conventional Ray-tracing (Ray) and the advanced MC algorithms was comprehensively investigated through simulations of phantom and patient data, actual measurement of dose distributions, and a retrospective analysis of 77 brain tumor patients. These investigations consistently showed that the MC algorithm overestimated the dose relative to the Ray algorithm, and that the MC overestimation generally increased as the beam size decreased and the number of delivered beams increased. These results demonstrated that the advanced MC algorithm can be less accurate than the conventional Ray-tracing algorithm when applied to (quasi-)homogeneous brain tumors. Thus, caution may be needed when applying the MC method to brain radiosurgery or radiotherapy. PMID:26871473

  7. Method of Moments Modeling of Single Layer Microstrip Patch Antennas using GPU Acceleration and Quasi-Monte Carlo Integration

    NASA Astrophysics Data System (ADS)

    Cerjanic, Alexander M.

    The development of a spectral domain method of moments code for the modeling of single layer microstrip patch antennas is presented in this thesis. The mixed potential integral equation formulation of Maxwell's equations is used as the theoretical basis for the work, and is solved via the method of moments. General-purpose graphics processing units are used for the computation of the impedance matrix by incorporation of quasi-Monte Carlo integration. The development of the various components of the code, including Green's function, impedance matrix, and excitation vector modules are discussed with individual test cases for the major code modules. The integrated code was tested by modeling a suite of four coaxially probe fed circularly polarized single layer microstrip patch antennas and the computed results are compared to those obtained by measurement. Finally, a study examining the relationship between design parameters and S11 performance was undertaken using the code.

  8. Monte Carlo simulation methods in moment-based scale-bridging algorithms for thermal radiative-transfer problems

    SciTech Connect

    Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.

    2015-03-01

    We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.

  9. Application of Enhanced Sampling Monte Carlo Methods for High-Resolution Protein-Protein Docking in Rosetta

    PubMed Central

    Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin

    2015-01-01

    The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419
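
The temperature replica exchange ingredient evaluated above can be sketched on a toy double-well energy. The potential, temperatures, and move sizes below are hypothetical stand-ins; the actual study uses Rosetta's docking moves and Hamiltonians.

```python
import math
import random

random.seed(0)

def energy(x):
    # Toy double-well potential with minima near x = +1 and x = -1.
    return (x * x - 1.0) ** 2

def metropolis_step(x, beta, step=0.5):
    """One Metropolis update at inverse temperature beta."""
    x_new = x + random.uniform(-step, step)
    d_e = energy(x_new) - energy(x)
    if d_e <= 0.0 or random.random() < math.exp(-beta * d_e):
        return x_new
    return x

def replica_exchange(betas, n_sweeps=2000):
    """Parallel tempering: one chain per temperature, plus neighbour swaps
    accepted with probability min(1, exp((beta_i - beta_j) * (E_i - E_j)))."""
    xs = [1.0 for _ in betas]
    for _ in range(n_sweeps):
        xs = [metropolis_step(x, b) for x, b in zip(xs, betas)]
        i = random.randrange(len(betas) - 1)          # attempt one swap per sweep
        delta = (betas[i] - betas[i + 1]) * (energy(xs[i + 1]) - energy(xs[i]))
        if random.random() < math.exp(min(0.0, -delta)):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

states = replica_exchange([0.5, 1.0, 2.0, 4.0])   # hottest to coldest replicas
```

The swap criterion preserves detailed balance on the joint ensemble, letting cold replicas inherit barrier crossings made at high temperature.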

  10. Monte Carlo simulation methods in moment-based scale-bridging algorithms for thermal radiative-transfer problems

    NASA Astrophysics Data System (ADS)

    Densmore, J. D.; Park, H.; Wollaber, A. B.; Rauenzahn, R. M.; Knoll, D. A.

    2015-03-01

    We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption-emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck-Cummings algorithm.

  11. Non-destructive method of characterisation of radioactive waste containers using gamma spectroscopy and Monte Carlo techniques.

    PubMed

    Ridikas, D; Feray, S; Cometto, M; Damoy, F

    2005-01-01

    During the decommissioning of the SATURNE accelerator at CEA Saclay (France), a number of concrete containers with radioactive materials of low or very low activity had to be characterised before their final storage. In this paper, a non-destructive approach combining gamma-ray spectroscopy and Monte Carlo simulations is used to characterise massive concrete blocks containing radioactive waste. The limits and uncertainties of the proposed method are quantified for the source-term activity estimates using 137Cs as a tracer element. A series of activity measurements with a few representative waste containers was performed before and after destruction. It was found that the distribution of radioactive materials was not homogeneous, nor was the density uniform, and this became the major source of systematic error in this study. Nevertheless, we conclude that by combining gamma-ray spectroscopy and full-scale Monte Carlo simulations one can estimate the source-term activity for tracer elements such as 134Cs, 137Cs, 60Co, etc. The uncertainty of this estimate should be no larger than a factor of 2-3. PMID:16381694

  12. Assessment of radiation shield integrity of DD/DT fusion neutron generator facilities by Monte Carlo and experimental methods

    NASA Astrophysics Data System (ADS)

    Srinivasan, P.; Priya, S.; Patel, Tarun; Gopalakrishnan, R. K.; Sharma, D. N.

    2015-01-01

    DD/DT fusion neutron generators are used as sources of 2.5 MeV/14.1 MeV neutrons in experimental laboratories for various applications. Detailed knowledge of the radiation dose rates around the neutron generators is essential for ensuring radiological protection of the personnel involved in their operation. This work describes the experimental and Monte Carlo studies carried out in the Purnima Neutron Generator facility of the Bhabha Atomic Research Centre (BARC), Mumbai. Verification and validation of the shielding adequacy were carried out by measuring the neutron and gamma dose rates at various locations inside and outside the neutron generator hall under different operational conditions, for both 2.5-MeV and 14.1-MeV neutrons, and comparing them with theoretical simulations. The calculated and experimental dose rates were found to agree within a maximum deviation of 20% at certain locations. This study has served to benchmark the Monte Carlo simulation methods adopted for the shield design of such facilities. It has also helped in augmenting the existing shield thickness to reduce the neutron and associated gamma dose rates for radiological protection of personnel during operation of the generators at higher source neutron yields, up to 1 × 10^10 n/s.

  13. GPU-accelerated inverse identification of radiative properties of particle suspensions in liquid by the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Ma, C. Y.; Zhao, J. M.; Liu, L. H.; Zhang, L.; Li, X. C.; Jiang, B. C.

    2016-03-01

    Inverse identification of the radiative properties of participating media is usually time-consuming. In this paper, a GPU-accelerated inverse identification model is presented to obtain the radiative properties of particle suspensions. The sample medium is placed in a cuvette and a narrow light beam is irradiated normally from the side. The forward three-dimensional radiative transfer problem is solved using a massively parallel Monte Carlo method implemented on a graphics processing unit (GPU), and a particle swarm optimization algorithm is applied to inversely identify the radiative properties of particle suspensions based on the measured bidirectional scattering distribution function (BSDF). The GPU-accelerated Monte Carlo simulation significantly reduces the solution time of the radiative transfer simulation and hence greatly accelerates the inverse identification process. A speedup of several hundred times is achieved compared with the CPU implementation. It is demonstrated, using both simulated BSDFs and experimentally measured BSDFs of microalgae suspensions, that the radiative properties of particle suspensions can be effectively identified with the GPU-accelerated algorithm and three-dimensional radiative transfer modelling.
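
The particle swarm optimization step can be sketched generically. The quadratic misfit below is a hypothetical stand-in for the BSDF-matching objective, and the swarm parameters are conventional defaults, not values from the paper.

```python
import random

random.seed(1)

def pso_minimize(f, bounds, n_particles=20, n_iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimiser over a box-bounded search space."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical objective: recover (a, b) minimising a quadratic misfit.
best, best_val = pso_minimize(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2,
                              [(0.0, 1.0), (0.0, 1.0)])
```

In the inverse-identification setting, each call to `f` would trigger a forward (here GPU-accelerated Monte Carlo) radiative transfer solve, which is why accelerating the forward model dominates the total cost.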

  14. Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange

    PubMed Central

    Hula, Andreas; Montague, P. Read; Dayan, Peter

    2015-01-01

    Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent’s preference for equity with their partner, beliefs about the partner’s appetite for equity, beliefs about the partner’s model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference. PMID:26053429
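
The selection rule at the heart of Monte-Carlo tree search (UCB1) can be illustrated on a degenerate one-level "tree", i.e. a bandit whose arms return simulated rollout values. The arm means below are toy numbers; this is not the IPOMDP solver itself.

```python
import math
import random

random.seed(2)

def ucb1(node_visits, child_visits, child_value_sum, c=1.4):
    """UCB1 score used to select children during the MCTS tree policy."""
    if child_visits == 0:
        return float("inf")                 # force each child to be tried once
    return (child_value_sum / child_visits
            + c * math.sqrt(math.log(node_visits) / child_visits))

def bandit_mcts(arm_means, n_rounds=5000):
    """Repeated UCB1 selection over stochastic arms (a one-level MCTS)."""
    n = len(arm_means)
    visits, value_sum = [0] * n, [0.0] * n
    for t in range(1, n_rounds + 1):
        a = max(range(n), key=lambda i: ucb1(t, visits[i], value_sum[i]))
        reward = random.gauss(arm_means[a], 0.1)   # stand-in for a rollout return
        visits[a] += 1
        value_sum[a] += reward
    return visits

visits = bandit_mcts([0.2, 0.5, 0.8])
```

The exploration term shrinks as a child is visited, so sampling concentrates on the most promising branch while still revisiting the others logarithmically often.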

  15. Time series analysis and Monte Carlo methods for eigenvalue separation in neutron multiplication problems

    SciTech Connect

    Nease, Brian R.; Ueki, Taro

    2009-12-10

    A time-series approach has been applied to the nuclear fission source distribution generated by Monte Carlo (MC) particle transport in order to calculate the non-fundamental-mode eigenvalues of the system. The novel aspect is the combination of the general technical principle of projection pursuit for multivariate data with the neutron multiplication eigenvalue problem in the nuclear engineering discipline. Proof is provided that the stationary MC process is linear to a first-order approximation and that it transforms into one-dimensional autoregressive processes of order one (AR(1)) via the automated choice of projection vectors. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired-mode eigenvalue to the fundamental-mode eigenvalue. All modern MC codes for nuclear criticality calculate the fundamental-mode eigenvalue, so the desired-mode eigenvalue can be easily determined. This time-series approach was tested for a variety of problems, including multi-dimensional ones. Numerical results show that the time-series approach has strong potential for three-dimensional whole-reactor-core problems. The eigenvalue ratio can be updated on the fly, without storing the nuclear fission source distributions from all previous iteration cycles for the mean subtraction. Lastly, the effects of degenerate eigenvalues are investigated and solutions are provided.
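
The key observable, the lag-1 autocorrelation of an AR(1) process (which in the paper plays the role of the eigenvalue ratio), can be sketched on synthetic data. The coefficient 0.6 is a toy value; no transport calculation is involved.

```python
import random

random.seed(3)

def ar1_series(rho, n, sigma=1.0):
    """Generate an AR(1) process x_t = rho * x_{t-1} + Gaussian noise."""
    x, out = 0.0, []
    for _ in range(n):
        x = rho * x + random.gauss(0.0, sigma)
        out.append(x)
    return out

def lag1_autocorrelation(xs):
    """Sample lag-1 autocorrelation after mean subtraction."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[t] - mean) * (xs[t - 1] - mean) for t in range(1, n))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

series = ar1_series(0.6, 50000)
rho_hat = lag1_autocorrelation(series)   # should recover ~0.6
```

In the paper's setting the AR(1) coefficient is the ratio of the desired-mode to fundamental-mode eigenvalue, so multiplying by the fundamental-mode eigenvalue (which every criticality code computes) recovers the higher mode.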

  16. Estimation of Measurement Uncertainty of kinematic TLS Observation Process by means of Monte-Carlo Methods

    NASA Astrophysics Data System (ADS)

    Alkhatib, Hamza; Kutterer, Hansjörg

    2013-05-01

    In many cases, the uncertainty of output quantities is computed by assuming that the distribution characterized by the measurement result and its associated standard uncertainty is Gaussian. This assumption may be unjustified, and the uncertainty of the output quantities determined in this way may be incorrect. One tool for dealing with different distribution functions of the input parameters, and the resulting mixed distribution of the output quantities, is given by Monte Carlo techniques. The resulting empirical distribution can be used to approximate the theoretical distribution of the output quantities; all required moments of different orders can then be determined numerically. To evaluate the procedure for deriving and evaluating output parameter uncertainties outlined in this paper, a case study of kinematic terrestrial laser scanning (k-TLS) is discussed. This study deals with two main topics: the refined simulation of different configurations, taking into account input parameters with diverse probability functions in the uncertainty model, and the statistical analysis of real data in order to improve the physical observation models in the case of k-TLS. The solution of both problems is essential for the highly sensitive and physically meaningful application of k-TLS techniques for monitoring of, e.g., large structures such as bridges.
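
The Monte Carlo propagation procedure itself is simple to sketch: draw inputs from their own (possibly non-Gaussian) distributions, push them through the measurement model, and compute empirical moments of the output. The two-input model below is hypothetical.

```python
import random
import statistics

random.seed(4)

def propagate(f, samplers, n=100000):
    """Monte Carlo uncertainty propagation: sample each input from its own
    distribution, evaluate the measurement model f, and collect the outputs."""
    return [f(*(s() for s in samplers)) for _ in range(n)]

# Hypothetical model: area = length * width, with a Gaussian length and a
# uniformly distributed width (a deliberately non-Gaussian input).
outputs = propagate(lambda l, w: l * w,
                    [lambda: random.gauss(2.0, 0.05),
                     lambda: random.uniform(0.9, 1.1)])

mean = statistics.fmean(outputs)    # empirical first moment
std = statistics.stdev(outputs)     # empirical standard uncertainty
```

Higher moments (skewness, kurtosis) and coverage intervals can be read off the same empirical sample, which is exactly what the Gaussian assumption would otherwise have to supply.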

  17. Efficient 3D kinetic Monte Carlo method for modeling of molecular structure and dynamics.

    PubMed

    Panshenskov, Mikhail; Solov'yov, Ilia A; Solov'yov, Andrey V

    2014-06-30

    Self-assembly of molecular systems is an important and general problem that intertwines physics, chemistry, biology, and material sciences. Through understanding of the physical principles of self-organization, it often becomes feasible to control the process and to obtain complex structures with tailored properties, for example, bacteria colonies of cells or nanodevices with desired properties. Theoretical studies and simulations provide an important tool for unraveling the principles of self-organization and, therefore, have recently gained an increasing interest. The present article features an extension of a popular code MBN EXPLORER (MesoBioNano Explorer) aiming to provide a universal approach to study self-assembly phenomena in biology and nanoscience. In particular, this extension involves a highly parallelized module of MBN EXPLORER that allows simulating stochastic processes using the kinetic Monte Carlo approach in a three-dimensional space. We describe the computational side of the developed code, discuss its efficiency, and apply it for studying an exemplary system. PMID:24752427
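
The rejection-free kinetic Monte Carlo step described above can be sketched with two hypothetical process rates; real MBN EXPLORER runs involve 3D lattices and many event types.

```python
import math
import random

random.seed(5)

def kmc_simulate(rates, n_steps):
    """Rejection-free kinetic Monte Carlo: at each step choose a process with
    probability proportional to its rate, then advance time by an exponential
    waiting time drawn with total rate R = sum(rates)."""
    t, counts = 0.0, [0] * len(rates)
    total = sum(rates)
    for _ in range(n_steps):
        r = random.random() * total          # pick a process by rate roulette
        k, acc = 0, rates[0]
        while r > acc:
            k += 1
            acc += rates[k]
        counts[k] += 1
        t += -math.log(1.0 - random.random()) / total   # exponential time step
    return t, counts

t_final, counts = kmc_simulate([1.0, 3.0], 20000)
```

Process 1 (rate 3.0) should fire about three times as often as process 0 (rate 1.0), and the elapsed time should cluster around n_steps / R.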

  18. Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange.

    PubMed

    Hula, Andreas; Montague, P Read; Dayan, Peter

    2015-06-01

    Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent's preference for equity with their partner, beliefs about the partner's appetite for equity, beliefs about the partner's model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference. PMID:26053429

  19. Specific absorbed fractions of electrons and photons for Rad-HUMAN phantom using Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wang, Wen; Cheng, Meng-Yun; Long, Peng-Cheng; Hu, Li-Qin

    2015-07-01

    The specific absorbed fractions (SAF) for self- and cross-irradiation are effective tools for the internal dose estimation of inhalation and ingestion intakes of radionuclides. A set of SAFs of photons and electrons was calculated using the Rad-HUMAN phantom, a computational voxel phantom of a Chinese adult female created from the color photographic images of the Chinese Visible Human (CVH) data set by the FDS Team. The model represents most Chinese adult female anatomical characteristics and can be taken as an individual phantom to investigate differences in internal dose relative to Caucasians. In this study, the emission of mono-energetic photons and electrons with energies from 10 keV to 4 MeV was calculated using the Monte Carlo particle transport code MCNP. Results were compared with the values from the ICRP reference and ORNL models. The results showed that SAFs from the Rad-HUMAN phantom have similar trends to, but are larger than, those from the other two models. The differences are due to racial and anatomical differences in organ mass and inter-organ distance. The SAFs based on the Rad-HUMAN phantom provide accurate and reliable data for internal radiation dose calculations for Chinese females. Supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03040000), the National Natural Science Foundation of China (910266004, 11305205, 11305203) and the National Special Program for ITER (2014GB112001).

  20. A Monte Carlo boundary propagation method for the solution of Poisson's equation

    SciTech Connect

    Hiromoto, R.; Brickner, R.G.

    1990-01-01

    Too often, the parallelism of a computational algorithm is used (or advertised) as a desirable measure of its performance: that is, the higher the computational parallelism, the better the expected performance. With the current interest and emphasis on massively parallel computer systems, the notion of highly parallel algorithms is the subject of many conferences and funding proposals. Unfortunately, the "revolution" that this vision promises has served to further complicate the measurement of parallel performance by introducing such notions as scaled speedup and scalable systems. As a counterexample to the merits of highly parallel algorithms whose parallelism scales linearly with increasing problem size, we introduce a slight modification to a highly parallel Monte Carlo technique that is used to estimate the solution of Poisson's equation. This simple modification is shown to yield a much better estimate of the solution by incorporating a more efficient use of boundary data (Dirichlet boundary conditions). A by-product of this new algorithm is a much more efficient sequential algorithm, but at the expense of sacrificing parallelism. 3 refs.
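
For context, the baseline highly parallel technique, estimating the solution of Laplace's equation (source-free Poisson) at a point by averaging the Dirichlet boundary values hit by independent random walks, can be sketched on a grid. The boundary data are toy values; this is the unmodified baseline, not the paper's improved boundary-propagation variant.

```python
import random

random.seed(6)

def walk_estimate(x, y, nx, ny, boundary, n_walks=20000):
    """Estimate u(x, y) for Laplace's equation on an nx-by-ny grid by averaging
    the Dirichlet boundary values reached by simple random walks (each walk is
    independent, which is what makes the method trivially parallel)."""
    total = 0.0
    for _ in range(n_walks):
        i, j = x, y
        while 0 < i < nx and 0 < j < ny:                 # walk until a boundary hit
            di, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            i, j = i + di, j + dj
        total += boundary(i, j)
    return total / n_walks

# Hypothetical boundary condition: u = 1 on the right edge, 0 elsewhere.
# By symmetry the exact value at the centre of the square is 0.25.
u_centre = walk_estimate(5, 5, 10, 10, lambda i, j: 1.0 if i >= 10 else 0.0)
```

Each walk uses the boundary exactly once (at its exit point); the paper's modification improves the estimate by exploiting the boundary data more efficiently, at the cost of this embarrassing parallelism.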

  1. Modelling of white paints optical degradation using Mie's theory and Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Duvignacq, Carole; Hespel, Laurent; Roze, Claude; Girasole, Thierry

    2003-09-01

    During long-term missions, white paints, used as thermal control coatings on satellites, are severely damaged by the space environment. Reflectance spectra showing broad absorption bands are characteristic of the optical degradation of the coatings. In this paper, a numerical model simulating the optical degradation of white paints is presented. The model uses Mie theory coupled with a random-walk Monte Carlo procedure. With materials like white paints, several major difficulties arise: a high pigment loading, absorption by the binder, etc. The problem is even worse in the case of irradiated paints. In parallel with the description of the basis of the model, we give an overview of the problems encountered. Simulation results are presented and discussed for zinc oxide/PDMS-type white paints irradiated by 45 keV protons, in accordance with geostationary orbit environment conditions. The effects of the optical properties of the pigment, the pigment volume concentration, and the absorption by the binder on the hemispherical reflectance are examined. Comparisons are made with experimental results, and the interest of such a numerical code for the study of the degradation of highly loaded materials is discussed.

  2. DSMC calculations for the delta wing. [Direct Simulation Monte Carlo method

    NASA Technical Reports Server (NTRS)

    Celenligil, M. Cevdet; Moss, James N.

    1990-01-01

    Results are reported from three-dimensional direct simulation Monte Carlo (DSMC) computations, using a variable-hard-sphere molecular model, of hypersonic flow on a delta wing. The body-fitted grid is made up of deformed hexahedral cells divided into six tetrahedral subcells with well defined triangular faces; the simulation is carried out for 9000 time steps using 150,000 molecules. The uniform freestream conditions include M = 20.2, T = 13.32 K, rho = 0.00001729 kg/cu m, and T(wall) = 620 K, corresponding to lambda = 0.00153 m and Re = 14,000. The results are presented in graphs and briefly discussed. It is found that, as the flow expands supersonically around the leading edge, an attached leeside flow develops around the wing, and the near-surface density distribution has a maximum downstream from the stagnation point. Coefficients calculated include C(H) = 0.067, C(DP) = 0.178, C(DF) = 0.110, C(L) = 0.714, and C(D) = 1.089. The calculations required 56 h of CPU time on the NASA Langley Voyager CRAY-2 supercomputer.

  3. Analysis of Radiation Effects in Silicon using Kinetic Monte Carlo Methods

    DOE PAGES Beta

    Hehr, Brian Douglas

    2014-11-25

    The transient degradation of semiconductor device performance under irradiation has long been an issue of concern. Neutron irradiation can instigate the formation of quasi-stable defect structures, thereby introducing new energy levels into the bandgap that alter carrier lifetimes and give rise to such phenomena as gain degradation in bipolar junction transistors. Normally, the initial defect formation phase is followed by a recovery phase in which defect-defect or defect-dopant interactions modify the characteristics of the damaged structure. A kinetic Monte Carlo (KMC) code has been developed to model both thermal and carrier injection annealing of initial defect structures in semiconductor materials. The code is employed to investigate annealing in electron-irradiated, p-type silicon as well as the recovery of base current in silicon transistors bombarded with neutrons at the Los Alamos Neutron Science Center (LANSCE) “Blue Room” facility. Our results reveal that KMC calculations agree well with these experiments once adjustments are made, within the appropriate uncertainty bounds, to some of the sensitive defect parameters.

  4. Analysis of Radiation Effects in Silicon using Kinetic Monte Carlo Methods

    SciTech Connect

    Hehr, Brian Douglas

    2014-11-25

    The transient degradation of semiconductor device performance under irradiation has long been an issue of concern. Neutron irradiation can instigate the formation of quasi-stable defect structures, thereby introducing new energy levels into the bandgap that alter carrier lifetimes and give rise to such phenomena as gain degradation in bipolar junction transistors. Normally, the initial defect formation phase is followed by a recovery phase in which defect-defect or defect-dopant interactions modify the characteristics of the damaged structure. A kinetic Monte Carlo (KMC) code has been developed to model both thermal and carrier injection annealing of initial defect structures in semiconductor materials. The code is employed to investigate annealing in electron-irradiated, p-type silicon as well as the recovery of base current in silicon transistors bombarded with neutrons at the Los Alamos Neutron Science Center (LANSCE) “Blue Room” facility. Our results reveal that KMC calculations agree well with these experiments once adjustments are made, within the appropriate uncertainty bounds, to some of the sensitive defect parameters.

  5. Verification by Monte Carlo methods of a power law tissue-air ratio algorithm for inhomogeneity corrections in photon beam dose calculations.

    PubMed

    Webb, S; Fox, R A

    1980-03-01

    A Monte Carlo computer program has been used to calculate axial and off-axis depth dose distributions arising from the interaction of an external beam of 60Co radiation with a medium containing inhomogeneities. An approximation for applying the Monte Carlo data to the configuration where the lateral extent of the inhomogeneity is less than the beam area, is also presented. These new Monte Carlo techniques rely on integration over the dose distributions from constituent sub-beams of small area and the accuracy of the method is thus independent of beam size. The power law correction equation (Batho equation) describing the dose distribution in the presence of tissue inhomogeneities is derived in its most general form. By comparison with Monte Carlo reference data, the equation is validated for routine patient dosimetry. It is explained why the Monte Carlo data may be regarded as a fundamental reference point in performing these tests of the extension to the Batho equation. Other analytic correction techniques, e.g. the equivalent radiological path method, are shown to be less accurate. The application of the generalised power law equation in conjunction with CT scanner data is discussed. For ease of presentation, the details of the Monte Carlo techniques and the analytic formula have been separated into appendices. PMID:7384209
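
The power-law (Batho) correction discussed above can be sketched in its generalized product form. The exponent convention, the unit-density reference above the stack, and the purely exponential toy TAR function are assumptions of this sketch, not the paper's derivation; check the convention against a dosimetry reference before use.

```python
import math

def generalized_batho(tar, layers):
    """Generalized power-law (Batho) correction factor.

    `layers` lists (distance_from_point_to_top_of_layer, relative_density),
    ordered from the layer containing the calculation point up to the surface.
    One common convention uses exponents (rho_m - rho_{m+1}), with rho = 1
    taken as the reference above the stack; that convention is assumed here.
    """
    cf = 1.0
    densities = [rho for _, rho in layers] + [1.0]   # unit-density reference on top
    for m, (x_m, rho_m) in enumerate(layers):
        cf *= tar(x_m) ** (rho_m - densities[m + 1])
    return cf

# Toy TAR: pure exponential attenuation, for illustration only.
tar = lambda d: math.exp(-0.05 * d)

# Water / lung (rho = 0.3) / water geometry: the point sits 5 cm below a
# 10 cm thick lung slab, so the distances to the slab's lower and upper
# surfaces are 5 and 15 cm, and the total depth is 20 cm.
cf = generalized_batho(tar, [(5.0, 1.0), (15.0, 0.3), (20.0, 1.0)])
```

For a low-density (lung-like) slab the correction factor comes out greater than 1, matching the physical expectation that the dose below lung exceeds the homogeneous-water prediction.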

  6. Monte Carlo burnup code acceleration with the correlated sampling method. Preliminary test on a UOX cell with TRIPOLI-4®

    SciTech Connect

    Dieudonne, C.; Dumonteil, E.; Malvagi, F.; Diop, C. M.

    2013-07-01

    For several years, Monte Carlo burnup/depletion codes have been appearing, which couple a Monte Carlo code simulating the neutron transport to a deterministic method that computes the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way allows one to track fine three-dimensional effects and to get rid of the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the time-expensive Monte Carlo solver called at each time step. Therefore, great improvements in terms of calculation time could be expected if one could get rid of the Monte Carlo transport sequences. For example, it may seem interesting to run an initial Monte Carlo simulation only once, for the first time/burnup step, and then to use the concentration perturbation capability of the Monte Carlo code to replace the other time/burnup steps (the different burnup steps are seen as perturbations of the concentrations of the initial burnup step). This paper presents some advantages and limitations of this technique and preliminary results in terms of speed-up and figure of merit. Finally, we detail different possible calculation schemes based on this method. (authors)
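
The correlated sampling idea, reusing the same random stream for the reference and perturbed calculations so that statistical noise cancels in their difference, can be sketched on a toy integral. The 2% perturbation is hypothetical; this is not a TRIPOLI-4 calculation.

```python
import random

def mc_mean(f, n, seed):
    """Monte Carlo estimate of the mean of f(U), U uniform on [0,1),
    with a private, reproducible random stream."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

f_ref = lambda x: x ** 2
f_pert = lambda x: 1.02 * x ** 2      # hypothetical 2% "concentration" perturbation

n = 20000
# Independent runs: the small difference is swamped by statistical noise.
d_indep = mc_mean(f_pert, n, seed=1) - mc_mean(f_ref, n, seed=2)
# Correlated sampling: same random stream for both, so the noise cancels
# and the difference is estimated with far smaller variance.
d_corr = mc_mean(f_pert, n, seed=1) - mc_mean(f_ref, n, seed=1)
```

Because the perturbed integrand here is exactly proportional to the reference one, the correlated difference carries essentially no statistical noise at all, while the independent difference fluctuates at the level of the individual estimators.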

  7. BRIEF REPORT: A calculation of the position of the quasi-β and quasi-γ bands for the transitional nucleus 154Dy with Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Puddu, G.

    2005-07-01

    A calculation of the excitation energy of the 0+ states and of the 2+ states is performed using Monte Carlo methods for the nucleus 154Dy. The Hamiltonian is assumed to be a monopole+quadrupole pairing+quadrupole one, with the parameters fixed by the spectroscopic Monte Carlo method so as to reproduce the experimental excitation energies of the yrast states up to J = 8 within the 50-82 and 82-126 proton and neutron major shells. The resulting Hamiltonian has been diagonalized in the J = 0 and J = 2 subspaces using the quantum Monte Carlo method. The size of the basis is fixed by comparing the yrast energies obtained with the basis-independent spectroscopic Monte Carlo method with those obtained with the quantum Monte Carlo method. The excitation energy of the 0+2 state is much higher than the experimental value. The structure of the 0+2,3 and of the 2+2,3 eigenstates is discussed in terms of fluctuating intrinsic states and resolved in terms of the deformation variables.

  8. Adjoint Monte Carlo method for prostate external photon beam treatment planning: an application to 3D patient anatomy

    NASA Astrophysics Data System (ADS)

    Wang, Brian; Goldstein, Moshe; Xu, X. George; Sahoo, Narayan

    2005-03-01

    Recently, the theoretical framework of the adjoint Monte Carlo (AMC) method has been developed using a simplified patient geometry. In this study, we extended our previous work by applying the AMC framework to a 3D anatomical model called VIP-Man constructed from the Visible Human images. First, the adjoint fluxes for the prostate (PTV) and rectum and bladder (organs at risk (OARs)) were calculated on a spherical surface of 1 m radius, centred at the centre of gravity of PTV. An importance ratio, defined as the PTV dose divided by the weighted OAR doses, was calculated for each of the available beamlets to select the beam angles. Finally, the detailed doses in PTV and OAR were calculated using a forward Monte Carlo simulation to include the electron transport. The dose information was then used to generate dose volume histograms (DVHs). The Pinnacle treatment planning system was also used to generate DVHs for the 3D plans with beam angles obtained from the AMC (3D-AMC) and a standard six-field conformal radiation therapy plan (3D-CRT). Results show that the DVHs for prostate from 3D-AMC and the standard 3D-CRT are very similar, showing that both methods can deliver prescribed dose to the PTV. A substantial improvement in the DVHs for bladder and rectum was found for the 3D-AMC method in comparison to those obtained from 3D-CRT. However, the 3D-AMC plan is less conformal than the 3D-CRT plan because only bladder, rectum and PTV are considered for calculating the importance ratios. Nevertheless, this study clearly demonstrated the feasibility of the AMC in selecting the beam directions as a part of a treatment planning based on the anatomical information in a 3D and realistic patient anatomy.
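
The beam-angle selection step can be sketched directly from the definition of the importance ratio. The beamlet doses and organ weights below are hypothetical numbers for illustration.

```python
def importance_ratio(ptv_dose, oar_doses, weights):
    """Importance ratio for one beamlet: PTV dose over the weighted OAR dose."""
    weighted = sum(w * d for w, d in zip(weights, oar_doses))
    return ptv_dose / weighted if weighted > 0 else float("inf")

def select_beams(beamlets, weights, n_beams):
    """Rank candidate beamlets by importance ratio and keep the best n_beams."""
    ranked = sorted(beamlets,
                    key=lambda b: importance_ratio(b[1], b[2], weights),
                    reverse=True)
    return [angle for angle, _, _ in ranked[:n_beams]]

# Hypothetical beamlets: (gantry_angle, ptv_dose, [bladder_dose, rectum_dose]),
# as would be derived from the adjoint fluxes in the PTV and the OARs.
beamlets = [(0, 1.0, [0.5, 0.4]),
            (90, 1.0, [0.2, 0.1]),
            (180, 0.9, [0.6, 0.6])]
chosen = select_beams(beamlets, [1.0, 1.0], 2)
```

A forward Monte Carlo calculation along the selected angles then supplies the detailed dose distributions and DVHs, as described in the abstract.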

  9. Assessment of a fully 3D Monte Carlo reconstruction method for preclinical PET with iodine-124.

    PubMed

    Moreau, M; Buvat, I; Ammour, L; Chouin, N; Kraeber-Bodéré, F; Chérel, M; Carlier, T

    2015-03-21

    Iodine-124 is a radionuclide well suited to the labeling of intact monoclonal antibodies. Yet, accurate quantification in preclinical imaging with I-124 is challenging due to the large positron range and a complex decay scheme including high-energy gammas. The aim of this work was to assess the quantitative performance of a fully 3D Monte Carlo (MC) reconstruction for preclinical I-124 PET. The high-resolution small animal PET Inveon (Siemens) was simulated using GATE 6.1. Three system matrices (SM) of different complexity were calculated in addition to a Siddon-based ray tracing approach for comparison purpose. Each system matrix accounted for a more or less complete description of the physics processes both in the scanned object and in the PET scanner. One homogeneous water phantom and three heterogeneous phantoms including water, lungs and bones were simulated, where hot and cold regions were used to assess activity recovery as well as the trade-off between contrast recovery and noise in different regions. The benefit of accounting for scatter, attenuation, positron range and spurious coincidences occurring in the object when calculating the system matrix used to reconstruct I-124 PET images was highlighted. We found that the use of an MC SM including a thorough modelling of the detector response and physical effects in a uniform water-equivalent phantom was efficient to get reasonable quantitative accuracy in homogeneous and heterogeneous phantoms. Modelling the phantom heterogeneities in the SM did not necessarily yield the most accurate estimate of the activity distribution, due to the high variance affecting many SM elements in the most sophisticated SM. PMID:25739884
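
Once a system matrix is available, image estimates are typically obtained with MLEM-type iterations. Below is a minimal sketch on a hypothetical 3-detector-bin, 2-pixel system; the paper's Monte Carlo-computed matrices are vastly larger and model physics in both object and scanner.

```python
def mlem(sm, measured, n_iters=50):
    """MLEM update: x_j <- x_j / s_j * sum_i sm[i][j] * y_i / (A x)_i,
    where s_j is the sensitivity (column sum of the system matrix)."""
    n_pix = len(sm[0])
    sens = [sum(row[j] for row in sm) for j in range(n_pix)]
    x = [1.0] * n_pix                        # uniform non-negative start image
    for _ in range(n_iters):
        proj = [sum(a * v for a, v in zip(row, x)) for row in sm]   # forward project
        back = [sum(sm[i][j] * measured[i] / proj[i]
                    for i in range(len(sm)) if proj[i] > 0)
                for j in range(n_pix)]                              # back project ratios
        x = [x[j] * back[j] / sens[j] for j in range(n_pix)]
    return x

# Tiny hypothetical system: 3 detector bins viewing 2 pixels.
sm = [[0.8, 0.1],
      [0.1, 0.8],
      [0.1, 0.1]]
true = [4.0, 2.0]
measured = [sum(a * v for a, v in zip(row, true)) for row in sm]   # noiseless data
x_hat = mlem(sm, measured)
```

With noiseless, consistent data the iterations converge toward the true activities; the abstract's point is that the quantitative accuracy of the result hinges on how much physics the system matrix encodes.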

  10. Monte Carlo Method in optical diagnostics of skin and skin tissues

    NASA Astrophysics Data System (ADS)

    Meglinski, Igor V.

    2003-12-01

    A novel Monte Carlo (MC) technique for photon migration through 3D media with spatially varying optical properties is presented. The MC technique combines statistical weighting variance reduction with the tracing of real photon paths. An overview of applications of the developed MC technique in optical/near-infrared reflectance spectroscopy, confocal microscopy, fluorescence spectroscopy, OCT, Doppler flowmetry and Diffusing Wave Spectroscopy (DWS) is presented. Within the framework of the model, skin is represented as a complex inhomogeneous multi-layered medium in which the spatial distribution of blood and chromophores varies with depth. To account for the variability of cell structure, the interfaces between skin layers are represented as quasi-random periodic wavy surfaces. The rough boundaries between layers of different refractive indices play a significant role in the distribution of photons within the medium. The absorption properties of skin tissues in the visible and NIR spectral regions are estimated by taking into account the anatomical structure of skin as determined from histology, including the spatial distribution of blood vessels, water and melanin content. The model takes into account the spatial distribution of fluorophores following the packing of collagen fibers, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. Reasonable estimates of skin blood oxygen saturation and haematocrit are also included. The model is validated against the analytic solution of the photon diffusion equation for a semi-infinite homogeneous highly scattering medium. The results demonstrate that matching the refractive index of the medium significantly improves the contrast and spatial resolution of the spatial photon sensitivity profile. It is also demonstrated that, when the model is supplied with reasonable physical and structural parameters of biological tissues, the simulated skin reflectance spectra agree reasonably well with in vivo skin spectra measurements.
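The statistical-weighting scheme the abstract combines with real photon-path tracing can be illustrated with a minimal slab version (not the author's 3D multi-layer code): each photon carries a weight that is reduced by the single-scattering albedo at every interaction instead of being absorbed outright, with Russian roulette terminating low-weight paths. The optical properties, slab thickness and index-matched boundaries below are illustrative assumptions.

```python
import math, random

def simulate_photons(mu_a=1.0, mu_s=10.0, g=0.9, d=1.0, n_photons=5000, seed=1):
    """Weighted (implicit-capture) photon migration through a homogeneous
    slab of thickness d; returns diffuse reflectance and transmittance."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    refl = trans = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0          # depth, z-direction cosine, weight
        while True:
            z += uz * (-math.log(rng.random()) / mu_t)   # free path length
            if z <= 0.0:                  # escaped backwards: reflectance
                refl += w
                break
            if z >= d:                    # escaped forwards: transmittance
                trans += w
                break
            w *= albedo                   # deposit a weight fraction (absorption)
            if w < 1e-2:                  # Russian roulette termination
                if rng.random() < 0.1:
                    w /= 0.1
                else:
                    break
            # Henyey-Greenstein scattering angle, uniform azimuth
            f = (1 - g * g) / (1 - g + 2 * g * rng.random())
            cos_t = (1 + g * g - f * f) / (2 * g)
            sin_t = math.sqrt(max(0.0, 1 - cos_t * cos_t))
            uz = uz * cos_t + math.sqrt(max(0.0, 1 - uz * uz)) * sin_t \
                * math.cos(2 * math.pi * rng.random())
            uz = max(-1.0, min(1.0, uz))
    return refl / n_photons, trans / n_photons
```

Because only the z-coordinate matters for slab reflectance and transmittance, tracking the single direction cosine uz is sufficient here; a full 3D code tracks all three.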

  11. Assessment of a fully 3D Monte Carlo reconstruction method for preclinical PET with iodine-124

    NASA Astrophysics Data System (ADS)

    Moreau, M.; Buvat, I.; Ammour, L.; Chouin, N.; Kraeber-Bodéré, F.; Chérel, M.; Carlier, T.

    2015-03-01

    Iodine-124 is a radionuclide well suited to the labeling of intact monoclonal antibodies. Yet, accurate quantification in preclinical imaging with I-124 is challenging due to the large positron range and a complex decay scheme including high-energy gammas. The aim of this work was to assess the quantitative performance of a fully 3D Monte Carlo (MC) reconstruction for preclinical I-124 PET. The high-resolution small animal PET Inveon (Siemens) was simulated using GATE 6.1. Three system matrices (SM) of different complexity were calculated in addition to a Siddon-based ray tracing approach for comparison purposes. Each system matrix accounted for a more or less complete description of the physics processes both in the scanned object and in the PET scanner. One homogeneous water phantom and three heterogeneous phantoms including water, lungs and bones were simulated, where hot and cold regions were used to assess activity recovery as well as the trade-off between contrast recovery and noise in different regions. The benefit of accounting for scatter, attenuation, positron range and spurious coincidences occurring in the object when calculating the system matrix used to reconstruct I-124 PET images was highlighted. We found that the use of an MC SM including a thorough modelling of the detector response and physical effects in a uniform water-equivalent phantom was sufficient to achieve reasonable quantitative accuracy in homogeneous and heterogeneous phantoms. Modelling the phantom heterogeneities in the SM did not necessarily yield the most accurate estimate of the activity distribution, due to the high variance affecting many SM elements in the most sophisticated SM.
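Whatever the complexity of the Monte Carlo system matrix, the reconstruction it feeds is an iterative MLEM-type update. A toy pure-Python sketch (hypothetical two-detector, two-voxel dimensions) of how a system matrix couples measured counts to the image estimate:

```python
def mlem(system_matrix, sinogram, n_iter=50):
    """MLEM update with a (detector x voxel) system matrix A:
    x_{k+1} = x_k / (A^T 1) * A^T ( y / (A x_k) )."""
    n_det = len(system_matrix)
    n_vox = len(system_matrix[0])
    # sensitivity image: column sums of A
    sens = [sum(system_matrix[i][j] for i in range(n_det)) for j in range(n_vox)]
    x = [1.0] * n_vox                      # uniform initial estimate
    for _ in range(n_iter):
        # forward project the current estimate
        proj = [sum(system_matrix[i][j] * x[j] for j in range(n_vox))
                for i in range(n_det)]
        # measured-to-estimated ratio per detector bin
        ratio = [sinogram[i] / proj[i] if proj[i] > 0 else 0.0
                 for i in range(n_det)]
        # back project the ratios and apply the multiplicative update
        back = [sum(system_matrix[i][j] * ratio[i] for i in range(n_det))
                for j in range(n_vox)]
        x = [x[j] * back[j] / sens[j] if sens[j] > 0 else 0.0
             for j in range(n_vox)]
    return x
```

In the paper's setting, the entries of `system_matrix` are the Monte Carlo detection probabilities, which is where scatter, attenuation and positron range enter.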

  12. Estimate of the melanin content in human hairs by the inverse Monte-Carlo method using a system for digital image analysis

    SciTech Connect

    Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V

    2006-12-31

    Based on digital image analysis and the inverse Monte-Carlo method, a proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)
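The inverse step can be sketched independently of the forward solver: given a forward model that maps an absorption coefficient to a colour-channel reflectance monotonically, the measured value is inverted numerically. The paper inverts a full Monte Carlo forward model; the Beer-Lambert-like stand-in and its constant below are assumptions for illustration only.

```python
import math

def invert_absorption(r_measured, forward, lo=0.0, hi=100.0, tol=1e-10):
    """Find mu_a such that forward(mu_a) matches the measured reflectance,
    by bisection; `forward` must be monotonically decreasing in mu_a."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if forward(mid) > r_measured:
            lo = mid        # model still too bright: absorption too small
        else:
            hi = mid
    return 0.5 * (lo + hi)

# hypothetical stand-in forward model (effective path length 0.1 assumed)
forward = lambda mu_a: math.exp(-0.1 * mu_a)
```

Repeating the inversion per colour channel gives the three absorption coefficients from which the melanin content is then estimated.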

  13. Micro-computed tomography-guided, non-equal voxel Monte Carlo method for reconstruction of fluorescence molecular tomography

    NASA Astrophysics Data System (ADS)

    Quan, Guotao; Wang, Kan; Yang, Xiaoquan; Deng, Yong; Luo, Qingming; Gong, Hui

    2012-08-01

    The study of dual-modality technology combining micro-computed tomography (micro-CT) and fluorescence molecular tomography (FMT) has become one of the main focuses in FMT. However, because of the diversity of optical properties and the irregular geometries of small animals, a reconstruction method that can effectively utilize the high-resolution structural information of micro-CT for tissue with arbitrary optical properties remains one of the most challenging problems in FMT. We develop a micro-CT-guided, non-equal voxel Monte Carlo method for FMT reconstruction. With the guidance of micro-CT, precise voxel binning can be conducted on an irregular boundary or region of interest. A modified Laplacian regularization method is also proposed to accurately reconstruct the distribution of the fluorescent yield for non-equal-space voxels. Simulations and phantom experiments show that this method not only effectively reduces the loss of high-resolution structural information of micro-CT at irregular boundaries and increases the accuracy of the FMT algorithm in both the forward and inverse problems, but also yields a small Jacobian matrix and a short reconstruction time. Finally, we performed small-animal imaging to validate our method.

  14. Binding and Diffusion of Lithium in Graphite: Quantum Monte Carlo Benchmarks and Validation of van der Waals Density Functional Methods.

    PubMed

    Ganesh, P; Kim, Jeongnim; Park, Changwon; Yoon, Mina; Reboredo, Fernando A; Kent, Paul R C

    2014-12-01

    Highly accurate diffusion quantum Monte Carlo (QMC) studies of the adsorption and diffusion of atomic lithium in AA-stacked graphite are compared with van der Waals-including density functional theory (DFT) calculations. Predicted QMC lattice constants for pure AA graphite agree with experiment. Pure AA-stacked graphite is shown to challenge many van der Waals methods even when they are accurate for conventional AB graphite. Highest overall DFT accuracy, considering pure AA-stacked graphite as well as lithium binding and diffusion, is obtained by the self-consistent van der Waals functional vdW-DF2, although errors in binding energies remain. Empirical approaches based on point charges such as DFT-D are inaccurate unless the local charge transfer is assessed. The results demonstrate that the lithium-carbon system requires a simultaneous highly accurate description of both charge transfer and van der Waals interactions, favoring self-consistent approaches. PMID:26583215

  15. Simulation of Mach-Effect Illusion Using Three-Layered Retinal Cell Model and Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Ueno, Akinori; Arai, Ken; Miyashita, Osamu

    We proposed a novel retinal model capable of simulating the Mach effect, an optical illusion known to emphasize the edges of an object. The model is constructed from a rod cell layer, a bipolar cell layer, and a ganglion cell layer, with lateral-inhibition and perceptive-field networks introduced between the layers. Photoelectric conversion for a single-photon incidence at each rod cell is defined by an equation, and the input to the model is simulated as the distribution of photons transmitted through the input image over consecutive incidences, computed by the Monte Carlo method. Since this model successfully simulated not only the Mach-effect illusion but also a DOG-like (Difference-of-Gaussians-like) profile for a spot-light incidence, the model can be considered to functionally form the perceptive field of the retinal ganglion cell.
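The interplay between Monte Carlo photon input and lateral inhibition can be sketched in one dimension: photon counts at a step edge are passed through a centre-minus-surround stage, which overshoots on the bright side of the edge and undershoots on the dark side (the Mach bands). The normal approximation to the photon counting and the layer weights below are assumptions, not the paper's fitted parameters.

```python
import math, random

def mach_band_profile(width=60, photons_per_px=5000, seed=0):
    """Step-edge stimulus -> Monte Carlo photon counts per rod cell ->
    DOG-like lateral inhibition between layers; returns the response
    profile for pixels 1..width-2."""
    rng = random.Random(seed)
    # transmittance: dark left half, bright right half
    stim = [0.2 if i < width // 2 else 0.8 for i in range(width)]
    # photon counts per rod cell (normal approximation to Poisson noise)
    counts = [max(0.0, rng.gauss(p * photons_per_px,
                                 math.sqrt(p * photons_per_px)))
              for p in stim]
    # excitatory centre minus inhibitory surround (discrete DOG-like field)
    return [counts[i] - 0.25 * (counts[i - 1] + counts[i + 1])
            for i in range(1, width - 1)]
```

With the edge between pixels 29 and 30, the response just inside the bright region exceeds the bright interior, and the response just inside the dark region dips below the dark interior.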

  16. Simulation of energy absorption spectrum in NaI crystal detector for multiple gamma energy using Monte Carlo method

    SciTech Connect

    Wirawan, Rahadi; Waris, Abdul; Djamal, Mitra; Handayani, Gunawan

    2015-04-16

    The spectrum of gamma energy absorption in a NaI crystal (scintillation detector) results from the interaction of gamma photons with the NaI crystal and is associated with the energy of the gamma photons incoming to the detector. Through a simulation approach, we can obtain an early estimate of the gamma energy absorption spectrum in a scintillator crystal detector (NaI) before the experiment is conducted. In this paper, we present simulation results for the gamma energy absorption spectrum for energies of 100-700 keV (specifically 297 keV, 400 keV and 662 keV). The simulation was developed based on the concept of a point-source photon beam distribution and photon interaction cross sections, using the Monte Carlo method. Our computational code successfully predicts the absorption spectrum with multiple energy peaks derived from multiple photon energy sources.
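The essence of such a spectrum simulation is sampling, photon by photon, which interaction occurs and how much energy the crystal keeps, then smearing by the detector resolution. The fixed photoelectric probability and isotropic Compton angle below are simplifications for illustration; a real code samples energy-dependent cross sections and Klein-Nishina angular distributions.

```python
import math, random

def simulate_spectrum(energies_keV=(297.0, 400.0, 662.0), n=5000,
                      p_photo=0.4, resolution=0.07, seed=2):
    """Toy absorbed-energy spectrum in a scintillator: each photon either
    deposits its full energy (photopeak) or a single-Compton electron
    energy (continuum); Gaussian smearing models detector resolution
    (FWHM/E = `resolution`)."""
    rng = random.Random(seed)
    deposits = []
    for e0 in energies_keV:
        for _ in range(n):
            if rng.random() < p_photo:
                e_dep = e0                    # photoelectric: full energy
            else:
                cos_t = 2 * rng.random() - 1  # isotropic angle (simplified)
                e_scat = e0 / (1 + (e0 / 511.0) * (1 - cos_t))
                e_dep = e0 - e_scat           # Compton electron energy
            sigma = resolution * e_dep / 2.355    # FWHM -> sigma
            deposits.append(max(0.0, rng.gauss(e_dep, sigma)))
    return deposits
```

Histogramming the returned deposits shows photopeaks at each source energy sitting above Compton continua that end at the corresponding Compton edges (478 keV for 662 keV photons).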

  17. Spatio-temporal spike train analysis for large scale networks using the maximum entropy principle and Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Nasser, Hassan; Marre, Olivier; Cessac, Bruno

    2013-03-01

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a model of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review of recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited to the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential for fitting MaxEnt spatio-temporal models to large neural ensembles.
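The Monte Carlo machinery behind such fitting rests on being able to sample from a MaxEnt distribution. The synchronous (Ising-like) special case can be sampled with a few lines of Metropolis updates; this toy sketch uses binary units and no temporal terms, which is precisely the restriction the paper's spatio-temporal method lifts.

```python
import math, random

def sample_maxent(h, J, n_steps=20000, seed=3):
    """Metropolis sampling of a pairwise maximum-entropy (Ising-like) model
    P(s) proportional to exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j),
    with s_i in {0, 1}; returns the estimated firing rates."""
    rng = random.Random(seed)
    n = len(h)
    s = [rng.randint(0, 1) for _ in range(n)]
    rates = [0.0] * n
    for _ in range(n_steps):
        i = rng.randrange(n)
        ds = 1 - 2 * s[i]        # +1 if flipping 0->1, -1 if 1->0
        # change of the exponent if unit i is flipped
        dE = ds * (h[i] + sum(J[i][j] * s[j] for j in range(n) if j != i))
        if dE >= 0 or rng.random() < math.exp(dE):
            s[i] = 1 - s[i]
        for k in range(n):
            rates[k] += s[k]
    return [r / n_steps for r in rates]
```

Fitting then iterates: sample with current parameters, compare the sampled moments with the data moments, and adjust h and J accordingly.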

  18. Improved multi-variable variational Monte Carlo method examined by high-precision calculations of one-dimensional Hubbard model

    NASA Astrophysics Data System (ADS)

    Kaneko, Ryui; Morita, Satoshi; Imada, Masatoshi

    2013-08-01

    We revisit the accuracy of the variational Monte Carlo (VMC) method, taking as an example the ground-state properties of the one-dimensional Hubbard model. We start from the variational wave functions with the Gutzwiller and long-range Jastrow factors introduced by Capello et al. [Phys. Rev. B 72, 085121 (2005)] and further improve them by considering several quantum-number projections and a generalized one-body wave function. We find that the quantum spin projection and total momentum projection greatly improve the accuracy of the ground-state energy, to within 0.5% error, for both small and large systems at half filling. Moreover, the momentum distribution function n(k) at quarter filling, calculated for up to 196 sites, allows us to directly estimate the critical exponents of the charge correlations from the power-law behavior of n(k) near the Fermi wave vector. The estimated critical exponents reproduce well those predicted by the Tomonaga-Luttinger theory.
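The VMC machinery itself is independent of the Hubbard-model details: sample |psi|^2 with Metropolis moves and average the local energy E_L = (H psi)/psi. A one-particle toy (1D harmonic oscillator with a Gaussian trial function, not the paper's correlated many-body wave function) shows the structure:

```python
import math, random

def vmc_energy(alpha, n_samples=20000, step=1.0, seed=4):
    """Variational Monte Carlo for H = -1/2 d^2/dx^2 + 1/2 x^2 with trial
    psi(x) = exp(-alpha x^2); the local energy is
    E_L(x) = alpha + x^2 (1/2 - 2 alpha^2)."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_samples):
        x_new = x + step * (2 * rng.random() - 1)
        # Metropolis acceptance with ratio |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < math.exp(-2 * alpha * (x_new**2 - x**2)):
            x = x_new
        e_sum += alpha + x * x * (0.5 - 2 * alpha * alpha)
    return e_sum / n_samples
```

At alpha = 0.5 the trial function is exact, the local energy is constant (zero variance), and the estimate equals the true ground-state energy 0.5; any other alpha gives a higher variational energy. Quantum-number projections in the paper play the same role as better trial functions here: they lower the variational energy toward the exact value.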

  19. Quantum Monte Carlo method for pairing phenomena: Supercounterfluid of two-species Bose gases in optical lattices

    SciTech Connect

    Ohgoe, Takahiro; Kawashima, Naoki

    2011-02-15

    We study the supercounterfluid (SCF) states in the two-component hard-core Bose-Hubbard model on a square lattice, using the quantum Monte Carlo method based on the worm (directed-loop) algorithm. Since the SCF state is a state of pair condensation characterized by ⟨ab†⟩ ≠ 0, ⟨a⟩ = 0, and ⟨b⟩ = 0, where a and b are the annihilation operators of the two components, it is important to study the behavior of the pair-correlation function ⟨a_i b_i† a_j† b_j⟩. For this purpose, we propose a choice of the worm head for calculating the pair-correlation function. From this pair correlation, we confirm the Kosterlitz-Thouless character of the SCF phase. The simulation efficiency is also improved in the SCF phase.

  20. Uncertainty Determination for Aeroheating in Uranus and Saturn Probe Entries by the Monte Carlo Method

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Prabhu, Dinesh; Cruden, Brett A.

    2013-01-01

    The 2013-2022 Decadal Survey for planetary exploration has identified probe missions to Uranus and Saturn as high priorities. This work endeavors to examine the uncertainty in determining aeroheating in such entry environments. Representative entry trajectories are constructed using the TRAJ software. Flowfields at selected points on the trajectories are then computed using the Data Parallel Line Relaxation (DPLR) Computational Fluid Dynamics Code. A Monte Carlo study is performed on the DPLR input parameters to determine the uncertainty in the predicted aeroheating, and correlation coefficients are examined to identify which input parameters have the most influence on the uncertainty. A review of the present best practices for input parameters (e.g. transport coefficients and vibrational relaxation times) is also conducted. It is found that the 2(sigma) uncertainty for heating during Uranus entry is no more than 2.1%, assuming an equilibrium catalytic wall, with the uncertainty determined primarily by diffusion and the H(sub 2) recombination rate within the boundary layer. However, if the wall is assumed to be partially or non-catalytic, this uncertainty may increase to as much as 18%. The catalytic wall model can contribute a more than 3x change in heat flux and a 20% variation in film coefficient. Therefore, coupled material response/fluid dynamic models are recommended for this problem. It was also found that much of this variability is artificially suppressed when a constant Schmidt number approach is implemented. Because the boundary layer is reacting, it is necessary to employ self-consistent effective binary diffusion to obtain a correct thermal transport solution. For Saturn entries, the 2(sigma) uncertainty for convective heating was less than 3.7%. The major uncertainty driver depended on shock temperature/velocity, changing from boundary layer thermal conductivity to diffusivity and then to shock layer ionization rate as velocity increases. While radiative heating for Uranus entry was negligible, the nominal solution for Saturn predicted up to 20% radiative heating at the highest velocity examined. The radiative heating followed a non-normal distribution, with up to a 3x variation in magnitude. This uncertainty is driven by the H(sub 2) dissociation rate, as H(sub 2) that persists in the hot non-equilibrium zone contributes significantly to radiation.
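The study design (perturb the inputs, rerun the flow solution, correlate inputs with outputs) is generic Monte Carlo uncertainty propagation, sketched below with a hypothetical cheap surrogate standing in for the DPLR runs:

```python
import math, random

def mc_uncertainty(model, nominal, rel_sigma, n_runs=2000, seed=5):
    """Monte Carlo uncertainty study: perturb each input about its nominal
    value, collect the outputs, and report the 2-sigma spread plus a
    per-input Pearson correlation coefficient (the influence measure)."""
    rng = random.Random(seed)
    names = list(nominal)
    inputs = {k: [] for k in names}
    outputs = []
    for _ in range(n_runs):
        x = {k: rng.gauss(nominal[k], rel_sigma[k] * abs(nominal[k]))
             for k in names}
        for k in names:
            inputs[k].append(x[k])
        outputs.append(model(x))
    mean = sum(outputs) / n_runs
    var_y = sum((y - mean) ** 2 for y in outputs)
    sd = math.sqrt(var_y / (n_runs - 1))
    def pearson(xs):
        mx = sum(xs) / n_runs
        cov = sum((a - mx) * (y - mean) for a, y in zip(xs, outputs))
        vx = sum((a - mx) ** 2 for a in xs)
        return cov / math.sqrt(vx * var_y)
    return {"mean": mean, "two_sigma": 2 * sd,
            "corr": {k: pearson(inputs[k]) for k in names}}
```

Inputs whose correlation coefficient is near +/-1 dominate the output uncertainty, which is how the paper identifies diffusion and recombination rates as the drivers.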

  1. A method for photon beam Monte Carlo multileaf collimator particle transport

    NASA Astrophysics Data System (ADS)

    Siebers, Jeffrey V.; Keall, Paul J.; Kim, Jong Oh; Mohan, Radhe

    2002-09-01

    Monte Carlo (MC) algorithms are recognized as the most accurate methodology for patient dose assessment. For intensity-modulated radiation therapy (IMRT) delivered with dynamic multileaf collimators (DMLCs), accurate dose calculation, even with MC, is challenging. Accurate IMRT MC dose calculations require inclusion of the moving MLC in the MC simulation. Due to its complex geometry, full transport through the MLC can be time consuming. The aim of this work was to develop an MLC model for photon beam MC IMRT dose computations. The basis of the MC MLC model is that the complex MLC geometry can be separated into simple geometric regions, each of which readily lends itself to simplified radiation transport. For photons, only attenuation and first Compton scatter interactions are considered. The amount of attenuation material an individual particle encounters while traversing the entire MLC is determined by adding the individual amounts from each of the simplified geometric regions. Compton scatter is sampled based upon the total thickness traversed. Pair production and electron interactions (scattering and bremsstrahlung) within the MLC are ignored. The MLC model was tested for 6 MV and 18 MV photon beams by comparing it with measurements and MC simulations that incorporate the full physics and geometry for fields blocked by the MLC and with measurements for fields with the maximum possible tongue-and-groove and tongue-or-groove effects, for static test cases and for sliding windows of various widths. The MLC model predicts the field size dependence of the MLC leakage radiation within 0.1% of the open-field dose. The entrance dose and beam hardening behind a closed MLC are predicted within +/-1% or 1 mm. Dose undulations due to differences in inter- and intra-leaf leakage are also correctly predicted. 
The MC MLC model predicts leaf-edge tongue-and-groove dose effect within +/-1% or 1 mm for 95% of the points compared at 6 MV and 88% of the points compared at 18 MV. The dose through a static leaf tip is also predicted generally within +/-1% or 1 mm. Tests with sliding windows of various widths confirm the accuracy of the MLC model for dynamic delivery and indicate that accounting for a slight leaf position error (0.008 cm for our MLC) will improve the accuracy of the model. The MLC model developed is applicable to both dynamic MLC and segmental MLC IMRT beam delivery and will be useful for patient IMRT dose calculations, pre-treatment verification of IMRT delivery and IMRT portal dose transmission dosimetry.
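The core simplification of such a model is that primary attenuation depends only on the summed material thickness collected across the simple geometric regions a ray crosses, so the attenuation is applied once rather than region by region. A minimal sketch (the attenuation coefficient is an assumed illustrative value, not a commissioned beam parameter):

```python
import math

def mlc_transmission(path_segments_cm, mu=0.9):
    """Primary photon transmission through an MLC: sum the leaf-material
    thicknesses gathered from each simple geometric region along the ray,
    then attenuate once; mu (1/cm) is an assumed illustrative value."""
    return math.exp(-mu * sum(path_segments_cm))
```

A ray through a full leaf and a ray following an inter-leaf leakage path differ only in their summed thicknesses, e.g. `mlc_transmission([6.0])` versus `mlc_transmission([2.0, 2.5])`, which is what produces the inter- and intra-leaf leakage undulations the abstract describes. First-Compton scatter would be sampled from the same total thickness.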

  2. Monte Carlo comparison of preliminary methods for ordering multiple genetic loci.

    PubMed Central

    Olson, J M; Boehnke, M

    1990-01-01

    We carried out a simulation study to compare the power of eight methods for preliminary ordering of multiple genetic loci. Using linkage groups of six loci and a simple pedigree structure, we considered the effects on method performance of locus informativity, interlocus spacing, total distance along the chromosome, and sample size. Method performance was assessed using the mean rank of the true order, the proportion of replicates in which the true order was the best order, and the number of orders that needed to be considered for subsequent multipoint linkage analysis in order to include the true order with high probability. A new method which maximizes the sum of adjacent two-point maximum lod scores divided by the equivalent number of informative meioses and the previously described method which minimizes the sum of adjacent recombination fraction estimates were found to be the best overall locus-ordering methods for the situations considered, although several other methods also performed well. PMID:2393021
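The "minimize the sum of adjacent recombination-fraction estimates" criterion is easy to state as code, and exhaustive search is feasible for linkage groups of six loci (6! = 720 orders). The matrix below would come from two-point analyses; the values in the test are hypothetical.

```python
import itertools

def best_locus_order(theta):
    """Preliminary locus ordering by minimizing the sum of adjacent
    pairwise recombination-fraction estimates over all permutations;
    theta is a symmetric matrix of two-point estimates."""
    n = len(theta)
    def cost(order):
        return sum(theta[order[i]][order[i + 1]] for i in range(n - 1))
    best = min(itertools.permutations(range(n)), key=cost)
    # an order and its reverse are equivalent; return a canonical one
    return best if best[0] <= best[-1] else tuple(reversed(best))
```

The paper's other criterion (maximizing the sum of adjacent two-point lod scores per informative meiosis) has the same shape with a different `cost` function and `max` in place of `min`.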

  3. Radiative Transfer Modeling of a Large Pool Fire by Discrete Ordinates, Discrete Transfer, Ray Tracing, Monte Carlo and Moment Methods

    NASA Technical Reports Server (NTRS)

    Jensen, K. A.; Ripoll, J.-F.; Wray, A. A.; Joseph, D.; ElHafi, M.

    2004-01-01

    Five computational methods for the solution of the radiative transfer equation in an absorbing-emitting and non-scattering gray medium were compared on a 2 m JP-8 pool fire. The temperature and absorption coefficient fields were taken from a synthetic fire due to the lack of a complete set of experimental data for fires of this size. These quantities were generated by a code that has been shown to agree well with the limited quantity of relevant data in the literature. Reference solutions to the governing equation were determined using the Monte Carlo method and a ray tracing scheme with high angular resolution. Solutions using the discrete transfer method, the discrete ordinate method (DOM) with both S(sub 4) and LC(sub 11) quadratures, and a moment model using the M(sub 1) closure were compared to the reference solutions in both isotropic and anisotropic regions of the computational domain. DOM LC(sub 11) is shown to be more accurate than the commonly used S(sub 4) quadrature technique, especially in anisotropic regions of the fire domain. This represents the first study in which the M(sub 1) method was applied to a combustion problem occurring in a complex three-dimensional geometry. The M(sub 1) results agree well with the other solution techniques, which is encouraging for future applications to similar problems since it is computationally the least expensive solution technique. Moreover, M(sub 1) results are comparable to DOM S(sub 4).
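For an absorbing-emitting, non-scattering gray medium, the quantity the reference solvers compute along each line of sight is I = integral of kappa(s) I_b(s) exp(-tau(0,s)) ds. A Monte Carlo estimate of this integral (uniform sampling of emission points; the optical depth evaluated by simple quadrature) sketches how such a reference solution is built, one ray at a time:

```python
import math, random

def mc_intensity(kappa, i_black, length, n_samples=20000, n_tau=200, seed=9):
    """Monte Carlo estimate of the emitted-and-attenuated intensity along a
    line of sight: I = int_0^L kappa(s) I_b(s) exp(-tau(0,s)) ds, with tau
    evaluated by a Riemann sum and the outer integral by uniform sampling."""
    rng = random.Random(seed)
    def tau(s):
        ds = s / n_tau
        return sum(kappa(k * ds) * ds for k in range(n_tau))
    acc = 0.0
    for _ in range(n_samples):
        s = rng.random() * length
        acc += kappa(s) * i_black(s) * math.exp(-tau(s)) * length
    return acc / n_samples
```

For constant kappa = 1 and I_b = 1 over a path of length L, the exact answer is 1 - exp(-L), which makes a convenient check.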

  4. Development of synthetic velocity - depth damage curves using a Weighted Monte Carlo method and Logistic Regression analysis

    NASA Astrophysics Data System (ADS)

    Vozinaki, Anthi Eirini K.; Karatzas, George P.; Sibetheros, Ioannis A.; Varouchakis, Emmanouil A.

    2014-05-01

    Damage curves are the most significant component of the flood loss estimation models. Their development is quite complex. Two types of damage curves exist, historical and synthetic curves. Historical curves are developed from historical loss data from actual flood events. However, due to the scarcity of historical data, synthetic damage curves can be alternatively developed. Synthetic curves rely on the analysis of expected damage under certain hypothetical flooding conditions. A synthetic approach was developed and presented in this work for the development of damage curves, which are subsequently used as the basic input to a flood loss estimation model. A questionnaire-based survey took place among practicing and research agronomists, in order to generate rural loss data based on the responders' loss estimates, for several flood condition scenarios. In addition, a similar questionnaire-based survey took place among building experts, i.e. civil engineers and architects, in order to generate loss data for the urban sector. By answering the questionnaire, the experts were in essence expressing their opinion on how damage to various crop types or building types is related to a range of values of flood inundation parameters, such as floodwater depth and velocity. However, the loss data compiled from the completed questionnaires were not sufficient for the construction of workable damage curves; to overcome this problem, a Weighted Monte Carlo method was implemented, in order to generate extra synthetic datasets with statistical properties identical to those of the questionnaire-based data. The data generated by the Weighted Monte Carlo method were processed via Logistic Regression techniques in order to develop accurate logistic damage curves for the rural and the urban sectors. A Python-based code was developed, which combines the Weighted Monte Carlo method and the Logistic Regression analysis into a single code (WMCLR Python code). 
Each WMCLR code execution provided a flow velocity-depth damage curve for a specific land use. More specifically, each WMCLR code execution for the agricultural sector generated a damage curve for a specific crop and for every month of the year, thus relating the damage to any crop with floodwater depth, flow velocity and the growth phase of the crop at the time of flooding. Similarly, each WMCLR code execution for the urban sector developed a damage curve for a specific building type, relating structural damage with floodwater depth and velocity. Furthermore, two techno-economic models were developed in the Python programming language, in order to estimate monetary values of flood damages to the rural and the urban sector, respectively. A new Monte Carlo simulation was performed, consisting of multiple executions of the techno-economic code, which generated multiple damage cost estimates. Each execution used the appropriate WMCLR-simulated damage curve. The uncertainty analysis of the damage estimates established the accuracy and reliability of the proposed methodology for the development of the synthetic damage curves.
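The WMCLR pipeline pairs weighted Monte Carlo resampling of the questionnaire records with a logistic fit. A pure-Python sketch, with a hypothetical record format `(depth, velocity, damaged)` and a plain gradient-ascent fit standing in for a full Logistic Regression package:

```python
import math, random

def weighted_mc_logistic(records, weights, n_synth=2000, lr=0.5,
                         n_iter=400, seed=6):
    """Weighted Monte Carlo resampling of expert records followed by a
    logistic fit P(damage) = sigmoid(b0 + b1*depth + b2*velocity).
    `weights` are resampling probabilities (e.g. expert confidence)."""
    rng = random.Random(seed)
    total = sum(weights)
    cum, acc = [], 0.0
    for w in weights:                      # cumulative resampling table
        acc += w / total
        cum.append(acc)
    def draw():
        u = rng.random()
        for rec, c in zip(records, cum):
            if u <= c:
                return rec
        return records[-1]
    synth = [draw() for _ in range(n_synth)]   # synthetic dataset
    b = [0.0, 0.0, 0.0]
    for _ in range(n_iter):                # gradient ascent on log-likelihood
        g = [0.0, 0.0, 0.0]
        for depth, vel, dmg in synth:
            z = max(-30.0, min(30.0, b[0] + b[1] * depth + b[2] * vel))
            err = dmg - 1.0 / (1.0 + math.exp(-z))
            g[0] += err
            g[1] += err * depth
            g[2] += err * vel
        b = [b[k] + lr * g[k] / n_synth for k in range(3)]
    return b
```

Evaluating the fitted sigmoid over a grid of depths and velocities yields the damage curve for one land use; repeating per crop/month or building type reproduces the batch structure the abstract describes.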

  5. Multimodal nested sampling: an efficient and robust alternative to Markov Chain Monte Carlo methods for astronomical data analyses

    NASA Astrophysics Data System (ADS)

    Feroz, F.; Hobson, M. P.

    2008-02-01

    In performing a Bayesian analysis of astronomical data, two difficult problems often emerge. First, in estimating the parameters of some model for the data, the resulting posterior distribution may be multimodal or exhibit pronounced (curving) degeneracies, which can cause problems for traditional Markov Chain Monte Carlo (MCMC) sampling methods. Secondly, in selecting between a set of competing models, calculation of the Bayesian evidence for each model is computationally expensive using existing methods such as thermodynamic integration. The nested sampling method, introduced by Skilling, has greatly reduced the computational expense of calculating the evidence and also produces posterior inferences as a by-product. This method has been applied successfully in cosmological applications by Mukherjee, Parkinson & Liddle, but their implementation was efficient only for unimodal distributions without pronounced degeneracies. Shaw, Bridges & Hobson recently introduced a clustered nested sampling method which is significantly more efficient at sampling from multimodal posteriors and also determines the expectation and variance of the final evidence from a single run of the algorithm, hence providing a further increase in efficiency. In this paper, we build on the work of Shaw et al. and present three new methods for sampling and evidence evaluation from distributions that may contain multiple modes and significant degeneracies in very high dimensions; we also present an even more efficient technique for estimating the uncertainty in the evaluated evidence. These methods lead to a further substantial improvement in sampling efficiency and robustness, and are applied to two toy problems to demonstrate the accuracy and economy of the evidence calculation and parameter estimation. Finally, we discuss the use of these methods for performing Bayesian object detection in astronomical data sets, and show that they significantly outperform existing MCMC techniques. An implementation of our methods will be publicly released shortly.
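The basic nested-sampling loop behind all of these refinements is short: repeatedly discard the lowest-likelihood live point, shrink the prior volume by roughly 1/n_live, and accumulate the evidence Z = sum L_i w_i. The sketch below uses rejection sampling from the prior for the constrained draws (adequate only for a unimodal toy, which is exactly the step the clustered and ellipsoidal methods replace) and omits the final live-point correction.

```python
import math, random

def nested_sampling(loglike, prior_draw, n_live=100, n_iter=600, seed=7):
    """Minimal nested sampling returning log(Z); prior_draw(rng) samples
    the prior, loglike(x) returns the log-likelihood."""
    rng = random.Random(seed)
    live = [prior_draw(rng) for _ in range(n_live)]
    logls = [loglike(p) for p in live]
    log_z = -math.inf
    log_x = 0.0                                  # log prior volume remaining
    for i in range(n_iter):
        worst = min(range(n_live), key=lambda k: logls[k])
        log_x_new = -(i + 1) / n_live            # deterministic shrinkage
        log_w = log_x + math.log(1.0 - math.exp(log_x_new - log_x))
        contrib = logls[worst] + log_w
        hi, lo = max(log_z, contrib), min(log_z, contrib)
        log_z = hi + math.log1p(math.exp(lo - hi))   # log-sum-exp update
        log_x = log_x_new
        # replace the worst point: rejection-sample the prior above L_worst
        while True:
            cand = prior_draw(rng)
            if loglike(cand) > logls[worst]:
                live[worst] = cand
                logls[worst] = loglike(cand)
                break
    return log_z
```

For a uniform prior on [0, 1] and likelihood L(x) = 2x, the true evidence is Z = 1, so the estimate can be checked directly.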

  6. An MLE method for finding LKB NTCP model parameters using Monte Carlo uncertainty estimates

    NASA Astrophysics Data System (ADS)

    Carolan, Martin; Oborn, Brad; Foo, Kerwyn; Haworth, Annette; Gulliford, Sarah; Ebert, Martin

    2014-03-01

    The aims of this work were to establish a program to fit NTCP models to clinical data with multiple toxicity endpoints, to test the method using a realistic test dataset, to compare three methods for estimating confidence intervals for the fitted parameters and to characterise the speed and performance of the program.
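The LKB NTCP model and a maximum-likelihood fit can be sketched directly: NTCP = Phi((gEUD - TD50)/(m TD50)) is the standard LKB probit form, and a grid search below stands in for the paper's optimizer (real fits use gradient-based or profile-likelihood methods and, as here, Monte Carlo estimates for the confidence intervals).

```python
import math

def lkb_ntcp(geud, td50, m):
    """LKB model: NTCP = Phi((gEUD - TD50) / (m * TD50))."""
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def fit_lkb(data, td50_grid, m_grid):
    """Maximum-likelihood fit over a parameter grid;
    data = [(gEUD, toxicity 0/1), ...]."""
    def negll(td50, m):
        s = 0.0
        for g, tox in data:
            p = min(max(lkb_ntcp(g, td50, m), 1e-9), 1.0 - 1e-9)
            s -= math.log(p) if tox else math.log(1.0 - p)
        return s
    return min(((t, mm) for t in td50_grid for mm in m_grid),
               key=lambda tm: negll(*tm))
```

Refitting on parameter sets resampled from the dose-uncertainty distribution is one way to propagate Monte Carlo dose uncertainty into confidence intervals on TD50 and m.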

  7. Can You Repeat That Please?: Using Monte Carlo Simulation in Graduate Quantitative Research Methods Classes

    ERIC Educational Resources Information Center

    Carsey, Thomas M.; Harden, Jeffrey J.

    2015-01-01

    Graduate students in political science come to the discipline interested in exploring important political questions, such as "What causes war?" or "What policies promote economic growth?" However, they typically do not arrive prepared to address those questions using quantitative methods. Graduate methods instructors must…

  8. Hybrid Monte Carlo method for off-lattice simulation of processes involving steps with widely varying rates

    SciTech Connect

    Clark, M.M.; Raff, L.M.; Scott, H.L.

    1996-11-01

    A hybrid Monte Carlo simulation method consisting of kinetic and equilibrium moves is presented. The method is applicable to any system involving N different processes whose rates may vary by orders of magnitude. We show that the method easily permits the propagation of the system through macroscopic time scales by applying it to a study of chemical vapor deposition diamond-film growth. The application is an off-lattice simulation utilizing CH3 as the growth species on a diamond [111] surface, and incorporating a computationally expensive, many-body potential surface. We show that, by using our method, it is possible to simulate a continuous system of several thousand atoms, with no underlying grid and with a realistic potential, for times on the order of milliseconds. The growth rate of the simulated surface is consistent with experimental growth rates, and the simulation sheds light on possible morphologies during the early stages of diamond film formation. (c) 1996 American Institute of Physics.
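The kinetic half of such a scheme is the classic rejection-free (n-fold-way/Gillespie) selection, which handles rates differing by orders of magnitude without wasted moves: pick an event with probability proportional to its rate and advance the clock by an exponentially distributed waiting time. A minimal sketch with a fixed rate list (a real simulation updates the rates after every event):

```python
import math, random

def kmc_run(rates, t_end, seed=8):
    """Rejection-free kinetic Monte Carlo with a static rate list: returns
    how many times each event fired before the clock reached t_end."""
    rng = random.Random(seed)
    t = 0.0
    counts = [0] * len(rates)
    total = sum(rates)
    while True:
        t += -math.log(rng.random()) / total    # exponential waiting time
        if t >= t_end:
            break
        u = rng.random() * total                # rate-proportional selection
        acc = 0.0
        for i, r in enumerate(rates):
            acc += r
            if u <= acc:
                counts[i] += 1
                break
    return counts
```

With rates of 1000 and 1 per unit time, the fast event fires about a thousand times more often, yet the slow event is still reached in a single pass of the clock, which is what makes millisecond time scales accessible.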

  9. Post-DFT methods for Earth materials: Quantum Monte Carlo simulations of (Mg,Fe)O (Invited)

    NASA Astrophysics Data System (ADS)

    Driver, K. P.; Militzer, B.; Cohen, R. E.

    2013-12-01

    (Mg,Fe)O is a major mineral phase in Earth's lower mantle that plays a key role in determining the structural and dynamical properties of the deep Earth. A pressure-induced spin-pairing transition of Fe has been the subject of numerous theoretical and experimental studies due to its consequential effects on lower-mantle physics. The standard density functional theory (DFT) method does not treat strongly correlated electrons properly, and results can depend on the choice of exchange-correlation functional. DFT+U offers significant improvement over standard DFT for treating strongly correlated electrons. Indeed, DFT+U calculations and experiments have narrowed the ambient spin transition to between 40 and 60 GPa in (Mg,Fe)O. However, DFT+U is not an ideal method due to its dependence on the Hubbard U parameter, among other approximations. In order to further clarify details of the spin transition, it is necessary to use methods that explicitly treat the effects of electron exchange and correlation, such as quantum Monte Carlo (QMC). Here, we will discuss methods of going beyond standard DFT and present QMC results on the (Mg,Fe)O elastic properties and spin-transition pressure in order to benchmark the DFT+U results.

  10. A hybrid kinetic Monte Carlo method for simulating silicon films grown by plasma-enhanced chemical vapor deposition

    NASA Astrophysics Data System (ADS)

    Tsalikis, D. G.; Baig, C.; Mavrantzas, V. G.; Amanatides, E.; Mataras, D.

    2013-11-01

    We present a powerful kinetic Monte Carlo (KMC) algorithm that allows one to simulate the growth of nanocrystalline silicon by plasma enhanced chemical vapor deposition (PECVD) for film thicknesses as large as several hundreds of monolayers. Our method combines a standard n-fold KMC algorithm with an efficient Markovian random walk scheme accounting for the surface diffusive processes of the species involved in PECVD. These processes are extremely fast compared to chemical reactions, thus in a brute application of the KMC method more than 99% of the computational time is spent in monitoring them. Our method decouples the treatment of these events from the rest of the reactions in a systematic way, thereby dramatically increasing the efficiency of the corresponding KMC algorithm. It is also making use of a very rich kinetic model which includes 5 species (H, SiH3, SiH2, SiH, and Si2H5) that participate in 29 reactions. We have applied the new method in simulations of silicon growth under several conditions (in particular, silane fraction in the gas mixture), including those usually realized in actual PECVD technologies. This has allowed us to directly compare against available experimental data for the growth rate, the mesoscale morphology, and the chemical composition of the deposited film as a function of dilution ratio.

  11. A hybrid kinetic Monte Carlo method for simulating silicon films grown by plasma-enhanced chemical vapor deposition.

    PubMed

    Tsalikis, D G; Baig, C; Mavrantzas, V G; Amanatides, E; Mataras, D

    2013-11-28

    We present a powerful kinetic Monte Carlo (KMC) algorithm that allows one to simulate the growth of nanocrystalline silicon by plasma enhanced chemical vapor deposition (PECVD) for film thicknesses as large as several hundreds of monolayers. Our method combines a standard n-fold KMC algorithm with an efficient Markovian random walk scheme accounting for the surface diffusive processes of the species involved in PECVD. These processes are extremely fast compared to chemical reactions, thus in a brute application of the KMC method more than 99% of the computational time is spent in monitoring them. Our method decouples the treatment of these events from the rest of the reactions in a systematic way, thereby dramatically increasing the efficiency of the corresponding KMC algorithm. It is also making use of a very rich kinetic model which includes 5 species (H, SiH3, SiH2, SiH, and Si2H5) that participate in 29 reactions. We have applied the new method in simulations of silicon growth under several conditions (in particular, silane fraction in the gas mixture), including those usually realized in actual PECVD technologies. This has allowed us to directly compare against available experimental data for the growth rate, the mesoscale morphology, and the chemical composition of the deposited film as a function of dilution ratio. PMID:24289368

  12. Multi-determinant electron-nuclear quantum Monte Carlo method for ground state solution of molecular Hamiltonian

    NASA Astrophysics Data System (ADS)

    Sambasivam, Abhinanden; Elward, Jennifer; Chakraborty, Arindam

    2013-03-01

    The focus of this work is to obtain the ground state energy of the non-relativistic spin-independent molecular Hamiltonian without making the Born-Oppenheimer (BO) approximation. In addition to avoiding the BO approximation, this approach avoids imposing separable-rotor and harmonic-oscillator approximations. The ground state solution is obtained variationally using the multi-determinant variational Monte Carlo (VMC) method, in which all nuclei and electrons in the molecule are treated quantum mechanically. The multi-determinant VMC provides the right framework for including explicit correlation in a multi-determinant expansion. This talk will discuss the construction of the basis functions and the optimization of the variational coefficients. The electron-nuclear VMC method will be applied to H2, He2 and H2O, and a comparison of the VMC results with other methods will be presented. The results from these calculations will provide the benchmark values needed in the development of other multicomponent methods such as electron-nuclear DFT and electron-nuclear FCIQMC.
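
The electron-nuclear machinery of the abstract is far beyond a snippet, but the basic variational Monte Carlo idea it builds on, Metropolis sampling of |ψ|² and averaging the local energy, can be sketched for a toy 1D harmonic oscillator with trial function ψ(x) = exp(-αx²) (an illustrative stand-in, not the paper's molecular Hamiltonian):

```python
import math
import random

def vmc_energy(alpha, steps=20000, seed=0):
    """Variational Monte Carlo estimate of <H> for the 1D harmonic
    oscillator (hbar = m = omega = 1) with trial wavefunction
    psi(x) = exp(-alpha * x**2).  The local energy is
    E_L(x) = alpha + x**2 * (0.5 - 2*alpha**2); the exact variational
    energy alpha/2 + 1/(8*alpha) is minimal (0.5) at alpha = 0.5."""
    rng = random.Random(seed)
    x = 0.0
    e_sum = 0.0
    for _ in range(steps):
        x_new = x + (rng.random() - 0.5)          # symmetric trial move
        # Metropolis acceptance on |psi|^2 = exp(-2*alpha*x^2)
        if rng.random() < math.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        e_sum += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return e_sum / steps
```

At the optimal α the local energy is constant (zero-variance principle), which is the property variational optimizers exploit.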

  13. New method to perform dosimetric quality control of treatment planning system using PENELOPE Monte Carlo and anatomical digital test objects

    NASA Astrophysics Data System (ADS)

    Benhdech, Yassine; Beaumont, Stéphane; Guédon, Jean-Pierre; Torfeh, Tarraf

    2010-04-01

    In this paper, we extend the R&D program named DTO-DC (Digital Object Test and Dosimetric Console), whose goal is to develop an efficient, accurate and complete method for dosimetric quality control (QC) of radiotherapy treatment planning systems (TPS). This method is based mainly on Digital Test Objects (DTOs) and on Monte Carlo (MC) simulation using the PENELOPE code [1]. These benchmark simulations can advantageously replace the experimental measurements typically used as references for comparison with TPS-calculated dose. Indeed, using MC simulations rather than dosimetric measurements allows QC to be performed without tying up treatment devices, and offers in many situations (e.g., heterogeneous media, lack of scattering volume) better accuracy than dose measurements made with the standard dosimetry equipment of a radiation therapy department. Furthermore, using MC simulations and DTOs, i.e., entirely numerical QC tools, also simplifies QC implementation and enables process automation, allowing radiotherapy centers to carry out a more complete and thorough QC. The DTO-DC program was established primarily on an ELEKTA accelerator (photon mode) using non-anatomical DTOs [2]. Today our aim is to complete and apply this program on a VARIAN accelerator (photon and electron modes) using anatomical DTOs. First, we developed, modeled and created three anatomical DTOs in DICOM format: 'Head and Neck', 'Thorax' and 'Pelvis'. We parallelized the PENELOPE code using MPI libraries to accelerate the calculations, and we modeled in PENELOPE the geometry of the Varian Clinac 2100CD head (photon mode). Then, to implement the method, we calculated the dose distributions in the Pelvis DTO using PENELOPE and the ECLIPSE TPS. Finally, we compared simulated and calculated dose distributions using the relative difference proposed by Venselaar [3]. The results of this work demonstrate the feasibility of the method, which provides a more accurate and more easily achievable QC. Nonetheless, the method, implemented on ECLIPSE TPS version 8.6.15, revealed large discrepancies (up to 11%) between the Monte Carlo simulations and the AAA algorithm calculations, especially in air-equivalent and bone-equivalent regions. Our work will be completed by dose measurements (with film) in the presence of heterogeneous media to validate the MC simulations.

  14. SAF values for internal photon emitters calculated for the RPI-P pregnant-female models using Monte Carlo methods

    SciTech Connect

    Shi, C. Y.; Xu, X. George; Stabin, Michael G.

    2008-07-15

    Estimates of radiation absorbed doses from radionuclides internally deposited in a pregnant woman and her fetus are very important due to elevated fetal radiosensitivity. This paper reports a set of specific absorbed fractions (SAFs) for use with the dosimetry schema developed by the Society of Nuclear Medicine's Medical Internal Radiation Dose (MIRD) Committee. The calculations were based on three newly constructed pregnant-female anatomic models, called RPI-P3, RPI-P6, and RPI-P9, that represent adult females at 3-, 6-, and 9-month gestational periods, respectively. Advanced Boundary REPresentation (BREP) surface-geometry modeling methods were used to create anatomically realistic geometries and organ volumes that were carefully adjusted to agree with the latest ICRP reference values. A Monte Carlo user code, EGS4-VLSI, was used to simulate internal photon emitters ranging from 10 keV to 4 MeV. SAF values were calculated and compared with previous data derived from stylized models of simplified geometries and with a model of a 7.5-month pregnant female developed previously from partial-body CT images. The results show considerable differences between these models for low-energy photons, but generally good agreement at higher energies. These differences are caused mainly by different organ shapes and positions. Other factors, such as the organ mass, the source-to-target-organ centroid distance, and the Monte Carlo code used in each study, played lesser roles in the observed differences. Since the SAF values reported in this study are based on models that are anatomically more realistic than previous models, these data are recommended for future applications as standard reference values in internal dosimetry involving pregnant females.

  15. Novel phase-space Monte-Carlo method for quench dynamics in 1D and 2D spin models

    NASA Astrophysics Data System (ADS)

    Pikovski, Alexander; Schachenmayer, Johannes; Rey, Ana Maria

    2015-05-01

    An important outstanding problem is the efficient numerical computation of quench dynamics in large spin systems. We propose a semiclassical method to study many-body spin dynamics in generic spin lattice models. The method, named DTWA, is based on a novel type of discrete Monte-Carlo sampling in phase-space. We demonstrate the power of the technique by comparisons with analytical and numerically exact calculations. It is shown that DTWA captures the dynamics of one- and two-point correlations in 1D systems. We also use DTWA to study the dynamics of correlations in 2D systems with many spins and different types of long-range couplings, in regimes where other numerical methods are generally unreliable. Computing spatial and time-dependent correlations, we find a sharp change in the speed of propagation of correlations at a critical range of interactions determined by the system dimension. The investigations are relevant for a broad range of systems including solids, atom-photon systems and ultracold gases of polar molecules, trapped ions, Rydberg, and magnetic atoms. This work has been financially supported by JILA-NSF-PFC-1125844, NSF-PIF-1211914, ARO, AFOSR, AFOSR-MURI.

  16. Monte Carlo Simulation of Alloy Design Techniques: Fracture and Welding Studied Using the BFS Method for Alloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo H.; Good, Brian; Noebe, Ronald D.; Honecy, Frank; Abel, Phillip

    1999-01-01

    Large-scale simulations of dynamic processes at the atomic level have developed into one of the main areas of work in computational materials science. Until recently, severe computational restrictions, as well as the lack of accurate methods for calculating the energetics, resulted in slower growth in the area than that required by current alloy design programs. The Computational Materials Group at the NASA Lewis Research Center is devoted to the development of powerful, accurate, economical tools to aid in alloy design. These include the BFS (Bozzolo, Ferrante, and Smith) method for alloys (ref. 1) and the development of dedicated software for large-scale simulations based on Monte Carlo- Metropolis numerical techniques, as well as state-of-the-art visualization methods. Our previous effort linking theoretical and computational modeling resulted in the successful prediction of the microstructure of a five-element intermetallic alloy, in excellent agreement with experimental results (refs. 2 and 3). This effort also produced a complete description of the role of alloying additions in intermetallic binary, ternary, and higher order alloys (ref. 4).

  17. Assessment of quality control parameters for an X-ray tube using the Monte Carlo method and unfolding techniques.

    PubMed

    Gallardo, S; Ródenas, J; Verdú, G; Querol, A

    2009-01-01

    Quality Control (QC) parameters for an X-ray tube, such as the Half Value Layer (HVL), the homogeneity factor and the mean photon energy, can be obtained from the primary beam spectrum. A direct Monte Carlo (MC) simulation has been used to obtain this spectrum. Indirect spectrometry procedures, such as Compton scattering spectrometry, have also been used experimentally, since direct spectrometry causes a pile-up effect in detectors. The Compton spectrometry has likewise been simulated with the MC method. In both cases, unfolding techniques must be applied to obtain the primary spectrum. Two unfolding methods (TSVD and Spectro-X) have been analyzed. Results are compared with each other and with reference values taken from the IPEM Report 78 catalogue. Direct MC simulation is a good approximation for obtaining the primary spectrum and hence the QC parameters. TSVD is a better unfolding method for the scattered spectrum than the Spectro-X code. An improvement of the methodology for obtaining QC parameters is important in Biomedical Engineering (BME) applications due to the wide use of X-ray tubes. PMID:19964756
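
As an illustration of the TSVD idea mentioned above (not the authors' implementation), truncated singular value decomposition regularizes the ill-posed unfolding system A x ≈ y by inverting only the k largest singular values:

```python
import numpy as np

def tsvd_unfold(A, y, k):
    """Truncated-SVD solution of the ill-posed linear system A x ≈ y.

    Only the k largest singular values are inverted; the small,
    noise-dominated part of the spectrum is discarded (regularization)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Singular values are sorted in descending order, so keep the first k.
    inv = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)
    return Vt.T @ (inv * (U.T @ y))
```

With k equal to the full rank this reduces to the ordinary least-squares solution; smaller k trades resolution for noise suppression.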

  18. A New Monte Carlo Filtering Method for the Diagnosis of Mission-Critical Failures

    NASA Technical Reports Server (NTRS)

    Gay, Gregory; Menzies, Tim; Davies, Misty; Gundy-Burlet, Karen

    2009-01-01

    Testing large-scale systems is expensive in terms of both time and money. Running simulations early in the process is a proven method of finding the design faults likely to lead to critical system failures, but determining the exact cause of those errors is still time-consuming and requires access to a limited number of domain experts. It is desirable to find an automated method that explores the large number of combinations and is able to isolate likely fault points. Treatment learning is a subset of minimal contrast-set learning that, rather than classifying data into distinct categories, focuses on finding the unique factors that lead to a particular classification. That is, it finds the smallest change to the data that causes the largest change in the class distribution. These treatments, when imposed, identify the settings most likely to cause a mission-critical failure. This research benchmarks two treatment learning methods against standard optimization techniques across three complex systems, including two projects from the Robust Software Engineering (RSE) group within the National Aeronautics and Space Administration (NASA) Ames Research Center. It is shown that these treatment learners are both faster than traditional methods and show demonstrably better results.

  19. Probabilistic landslide run-out assessment with a 2-D dynamic numerical model using a Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Cepeda, Jose; Luna, Byron Quan; Nadim, Farrokh

    2013-04-01

    An essential component of a quantitative landslide hazard assessment is establishing the extent of the endangered area. This task requires accurate prediction of the run-out behaviour of a landslide, which includes the estimation of the run-out distance, run-out width, velocities, pressures, and depth of the moving mass, as well as the final configuration of the deposits. One approach to run-out modelling is to reproduce accurately the dynamics of the propagation processes. A number of dynamic numerical models are able to compute the movement of the flow over irregular topographic terrains (3-D), controlled by a complex interaction between mechanical properties that may vary in space and time. Given the number of unknown parameters and the fact that most of the rheological parameters cannot be measured in the laboratory or field, the parametrization of run-out models is very difficult in practice. For this reason, run-out models are mostly used for back-analysis of past events, and very few studies have attempted forward predictions. Consequently, all models are based on simplified descriptions that attempt to reproduce the general features of the failed mass motion through the use of parameters (mostly controlling shear stresses at the base of the moving mass) which account for aspects not explicitly described or oversimplified. The uncertainties involved in the run-out process have to be approached in a stochastic manner. It is of significant importance to develop methods for quantifying and properly handling the uncertainties in dynamic run-out models, in order to allow a more comprehensive approach to quantitative risk assessment. A method was developed to compute the variation in run-out intensities by using a dynamic run-out model (MassMov2D) and a probabilistic framework based on a Monte Carlo simulation, in order to analyze the effect of the uncertainty of the input parameters. The probability density functions of the rheological parameters were generated and sampled, leading to a large number of run-out scenarios. In the application of the Monte Carlo method, random samples were generated from the input probability distributions, which were fitted with a Gaussian copula distribution. Each set of samples was used as input to a model simulation, and the resulting outcome was a spatially displayed intensity map. These maps were created from the probability density functions at each point of the flow track and the deposition zone, yielding a confidence probability map for the various intensity measures. The goal of this methodology is that the results (in terms of intensity characteristics) can be linked directly to vulnerability curves associated with the elements at risk.
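
A minimal sketch of the Gaussian-copula sampling step described above, assuming illustrative marginal distributions rather than the study's actual rheological ones: correlated standard normals are generated, mapped to uniforms through the normal CDF, and then pushed through each marginal's inverse CDF.

```python
import math
import numpy as np

def gaussian_copula_samples(corr, inv_cdfs, n, seed=0):
    """Draw n joint samples whose marginal distributions are given by
    inv_cdfs (a list of inverse-CDF callables taking uniforms in (0,1))
    and whose dependence follows a Gaussian copula with correlation
    matrix corr."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(np.asarray(corr))
    z = rng.standard_normal((n, len(inv_cdfs))) @ L.T           # correlated normals
    u = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))  # Phi(z) -> uniforms
    u = np.clip(u, 1e-12, 1.0 - 1e-12)                          # guard the tails
    return np.column_stack([f(u[:, i]) for i, f in enumerate(inv_cdfs)])
```

The copula decouples the dependence structure from the marginals, which is why it is convenient when rheological parameters have different distribution shapes but are known to be correlated.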

  20. High-density dental implants and radiotherapy planning: evaluation of effects on dose distribution using pencil beam convolution algorithm and Monte Carlo method.

    PubMed

    Çatli, Serap

    2015-01-01

    The high atomic number and density of dental implants cause major problems in radiotherapy of head and neck tumors: implant-induced artifacts hinder accurate dose distribution calculation and the contouring of tumors and organs. The limits and deficiencies of the algorithms used in treatment planning systems can lead to large errors in dose calculation, which may adversely affect the patient's treatment. In the present study, four commercial dental implant materials were used: pure titanium, titanium alloy (Ti-6Al-4V), amalgam, and crown. The effects of dental implants on dose distribution were determined with two methods for a 6 MV photon beam: the pencil beam convolution (PBC) algorithm and a Monte Carlo code. The central axis depth doses were calculated on the phantom for a source-skin distance (SSD) of 100 cm and a 10 × 10 cm2 field using both algorithms. The results of the Monte Carlo method and the Eclipse TPS were compared to each other and to those previously reported. In the present study, dose increases in tissue at a distance of 2 mm in front of the dental implants were seen at 6 MV with the Monte Carlo method, due to the backscatter of electrons. The Eclipse treatment planning system (TPS) could not precisely account for the backscatter radiation caused by the dental prostheses: it underestimated the backscatter dose and overestimated the dose beyond the dental implants. The large errors found for the TPS in this study are due to the limits and deficiencies of its algorithms. The accuracy of the PBC algorithm of the Eclipse TPS was evaluated against Monte Carlo calculations, following the recommendations of the American Association of Physicists in Medicine Radiation Therapy Committee Task Group 65. From the comparisons of the TPS and Monte Carlo calculations, it is verified that Monte Carlo simulation is a good approach for deriving the dose distribution in heterogeneous media. PMID:26699323

  1. 3D Continuum Radiative Transfer. An adaptive grid construction algorithm based on the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Niccolini, G.; Alcolea, J.

    Solving the radiative transfer problem is a common task in many fields of astrophysics. With the increasing angular resolution of space- or ground-based telescopes (VLTI, HST), and with the next decade's instruments (NGST, ALMA, ...), astrophysical objects reveal, and will certainly continue to reveal, complex spatial structures. Consequently, it is necessary to develop numerical tools able to solve the radiative transfer equation in three dimensions in order to model and interpret these observations. I present a 3D radiative transfer program using a new method for the construction of an adaptive spatial grid, based on the Monte Carlo method. With the help of these tools, one can solve the continuum radiative transfer problem (e.g. in a dusty medium), compute the temperature structure of the considered medium, and obtain the flux of the object (SED and images).

  2. Editor's Note to ``Proof of Validity of Monte Carlo Method for Canonical Averaging'' by Marshall Rosenbluth

    NASA Astrophysics Data System (ADS)

    Gubernatis, J. E.

    2003-11-01

    In a previous article [J. Phys. Chem. 21: 1087 (1953)] a prescription was given for moving from point to point in the configuration space of a system in such a way that averaging over many moves is equivalent to a canonical averaging over configuration space. The prescription is suitable for electronic machine calculations and provides the basis for calculations described elsewhere. The purpose of this paper is to provide a more rigorous proof of the method.
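
The prescription the abstract refers to is the Metropolis algorithm. A minimal sketch for a single degree of freedom with U(x) = x²/2, where canonical averaging gives ⟨x²⟩ = 1/β, is:

```python
import math
import random

def metropolis_average(beta=1.0, steps=50000, seed=0):
    """Metropolis sampling of the canonical distribution ~ exp(-beta*U)
    for U(x) = x**2 / 2, returning the canonical average <x^2>
    (exactly 1/beta for this potential)."""
    rng = random.Random(seed)
    x = 0.0
    acc = 0.0
    for _ in range(steps):
        x_new = x + (rng.random() - 0.5) * 2.0    # symmetric trial move
        d_u = 0.5 * (x_new * x_new - x * x)
        # Accept with probability min(1, exp(-beta * dU)).
        if d_u <= 0.0 or rng.random() < math.exp(-beta * d_u):
            x = x_new
        acc += x * x
    return acc / steps
```

The key point of the 1953 prescription, and of the proof discussed here, is that averaging over the chain of moves converges to the canonical ensemble average without ever computing the partition function.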

  3. Monte Carlo-based diffusion tensor tractography with a geometrically corrected voxel-centre connecting method

    NASA Astrophysics Data System (ADS)

    Bodammer, N. C.; Kaufmann, J.; Kanowski, M.; Tempelmann, C.

    2009-02-01

    Diffusion tensor tractography (DTT) allows one to explore axonal connectivity patterns in neuronal tissue by linking local predominant diffusion directions determined by diffusion tensor imaging (DTI). The majority of existing tractography approaches use continuous coordinates for calculating single trajectories through the diffusion tensor field. The tractography algorithm we propose is characterized by (1) a trajectory propagation rule that uses voxel centres as vertices and (2) orientation probabilities for the calculated steps in a trajectory that are obtained from the diffusion tensors of either two or three voxels. These voxels include the last voxel of each previous step and one or two candidate successor voxels. The precision and the accuracy of the suggested method are explored with synthetic data. Results clearly favour probabilities based on two consecutive successor voxels. Evidence is also provided that in any voxel-centre-based tractography approach, there is a need for a probability correction that takes into account the geometry of the acquisition grid. Finally, we provide examples in which the proposed fibre-tracking method is applied to the human optical radiation, the cortico-spinal tracts and to connections between Broca's and Wernicke's area to demonstrate the performance of the proposed method on measured data.

  4. Comparing Two Different Methods of Preferential Flow Simulation, Using Calibration Constrained Monte Carlo Uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Schirmer, M.; Ghasemizade, M.; Radny, D.

    2014-12-01

    Many different methods and approaches have been suggested for the simulation of preferential flows. However, most of these methods have been tested at lab scales, where boundary conditions and material properties are known and under control. The focus of this study is to compare two different approaches for simulating preferential flows in a weighing lysimeter, where the scale of simulation is closer to field scales than in lab simulations. To do so, we applied dual permeability and spatially distributed heterogeneity as two competing approaches for simulating slow and rapid flow out of a lysimeter. While the dual permeability approach assumes that there is structure among soil aggregates that can be captured as a fraction of the porosity, the other method attributes the existence of preferential flows to heterogeneity distributed within the domain. The two aforementioned approaches were used to simulate daily recharge values of a lysimeter. The analysis included a calibration phase, which ran from March 2012 until March 2013, and a validation phase which lasted a year following the calibration period. The simulations were performed with the 3-D, physically based numerical model HydroGeoSphere. The nonlinear uncertainty analysis of the results indicates that the two approaches are comparable.

  5. Pre-processing method to improve optical parameters estimation in Monte Carlo-based inverse problem solving

    NASA Astrophysics Data System (ADS)

    Kholodtsova, Maria N.; Loschenov, Victor B.; Daul, Christian; Blondel, Walter

    2014-05-01

    Determining the optical properties of biological tissues in vivo from spectral intensity measurements performed at their surface is still a challenge. Based on the acquired spectroscopic data, the aim is to solve an inverse problem, where the optical parameter values of a forward model are estimated through an optimization procedure on some cost function. In many cases it is an ill-posed problem because of the small number of measurements, errors on the experimental data, and the nature of the forward model output data, which may be affected by statistical noise in the case of Monte Carlo (MC) simulation, or by approximated values at short inter-fibre distances in the case of the Diffusion Equation Approximation (DEA). In the case of optical biopsy, spatially resolved diffuse reflectance spectroscopy is a simple technique that uses various excitation-to-emission fibre distances to probe tissue in depth. The aim of the present contribution is to study the characteristics of some classically used cost functions and optimization methods (Levenberg-Marquardt algorithm), and how the latter reach the global minimum when using MC and/or DEA approaches. Several smoothing filters and fitting methods were tested on the reflectance curves, I(r), gathered from MC simulations. It was found that smoothing the initial data with a locally weighted second-degree polynomial regression and then fitting the data with a double exponential decay function decreases the probability of the inverse algorithm converging to local minima close to the initial first guess.

  6. Accurate determination of the Gibbs energy of Cu-Zr melts using the thermodynamic integration method in Monte Carlo simulations.

    PubMed

    Harvey, J-P; Gheribi, A E; Chartrand, P

    2011-08-28

    The design of multicomponent alloys used in different applications based on specific thermo-physical properties determined experimentally or predicted from theoretical calculations is of major importance in many engineering applications. A procedure based on Monte Carlo simulations (MCS) and the thermodynamic integration (TI) method to improve the quality of the predicted thermodynamic properties calculated from classical thermodynamic calculations is presented in this study. The Gibbs energy function of the liquid phase of the Cu-Zr system at 1800 K has been determined based on this approach. The internal structure of Cu-Zr melts and amorphous alloys at different temperatures, as well as other physical properties were also obtained from MCS in which the phase trajectory was modeled by the modified embedded atom model formalism. A rigorous comparison between available experimental data and simulated thermo-physical properties obtained from our MCS is presented in this work. The modified quasichemical model in the pair approximation was parameterized using the internal structure data obtained from our MCS and the precise Gibbs energy function calculated at 1800 K from the TI method. The predicted activity of copper in Cu-Zr melts at 1499 K obtained from our thermodynamic optimization was corroborated by experimental data found in the literature. The validity of the amplitude of the entropy of mixing obtained from the in silico procedure presented in this work was analyzed based on the thermodynamic description of hard sphere mixtures. PMID:21895194
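
A schematic of the thermodynamic integration step, assuming the ensemble averages ⟨∂U/∂λ⟩ have already been obtained from simulations at a set of coupling values (the MCS part is not reproduced here): the free energy difference is ΔF = ∫₀¹ ⟨∂U/∂λ⟩ dλ, evaluated by quadrature over the simulated λ points.

```python
import numpy as np

def ti_free_energy(lambdas, dU_dlambda_means):
    """Thermodynamic integration: Delta F = integral over the coupling
    parameter lambda of the ensemble average <dU/dlambda>, computed by
    the trapezoidal rule over the simulated lambda points."""
    lam = np.asarray(lambdas, dtype=float)
    m = np.asarray(dU_dlambda_means, dtype=float)
    return float(np.sum(0.5 * (m[1:] + m[:-1]) * (lam[1:] - lam[:-1])))
```

For a linear integrand the trapezoidal rule is exact; in practice the λ grid is refined where ⟨∂U/∂λ⟩ varies strongly.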

  7. Reliability of the compensation comparison method for measuring retinal stray light studied using Monte-Carlo simulations.

    PubMed

    Coppens, Joris E; Franssen, Luuk; van den Berg, Thomas J T P

    2006-01-01

    Recently the psychophysical compensation comparison method was developed for routine measurement of retinal stray light. The subject's responses to a series of two-alternative forced-choice trials are analyzed using a maximum-likelihood (ML) approach assuming some fixed shape for the psychometric function (PF). This study evaluates the reliability of the method using Monte-Carlo simulations. Various sampling strategies were investigated, including the two-phase sampling strategy that is used in a commercially available instrument. Results are given for the effective dynamic range and measurement accuracy. The effect of a mismatch between the shape of an observer's PF and the fixed shape used in the ML analysis was analyzed. The main outcomes are that the two-phase sampling scheme gives good precision (standard deviation = 0.07 logarithmic units on average) for estimation of the stray light value. Bias is virtually zero. Furthermore, a reliability index was derived from the responses and found to be effective. PMID:17092159
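
A toy version of the maximum-likelihood analysis described above, assuming a fixed logistic shape for the psychometric function (the instrument's actual PF shape and two-phase sampling scheme are not reproduced): the threshold parameter is estimated by maximizing the likelihood of the observed forced-choice responses over a grid.

```python
import math
import random

def ml_threshold(stimuli, responses, grid, slope=2.0):
    """Maximum-likelihood estimate of the threshold parameter theta of a
    fixed-shape psychometric function
        P(response = 1 | s) = 1 / (1 + exp(-slope * (s - theta))),
    from a series of two-alternative forced-choice trials
    (simple grid search over candidate theta values)."""
    def log_likelihood(theta):
        ll = 0.0
        for s, r in zip(stimuli, responses):
            p = 1.0 / (1.0 + math.exp(-slope * (s - theta)))
            p = min(max(p, 1e-12), 1.0 - 1e-12)   # numerical guard
            ll += math.log(p if r else 1.0 - p)
        return ll
    return max(grid, key=log_likelihood)
```

Fixing the PF shape and estimating only its position is what makes the analysis robust with few trials; the study's point is to quantify what happens when that fixed shape mismatches the observer's true PF.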

  8. Mesh-based Monte Carlo method for fibre-optic optogenetic neural stimulation with direct photon flux recording strategy.

    PubMed

    Shin, Younghoon; Kwon, Hyuk-Sang

    2016-03-21

    We propose a Monte Carlo (MC) method based on a direct photon flux recording strategy using an inhomogeneous, meshed rodent brain atlas. This MC method was inspired by and dedicated to fibre-optics-based optogenetic neural stimulation, thus providing an accurate and direct solution for light intensity distributions in brain regions with different optical properties. Our model was used to estimate the 3D light intensity attenuation in the close proximity between an implanted optical fibre source and the neural target area, for typical optogenetics applications. Interestingly, there are discrepancies with studies using a diffusion-based light intensity prediction model, perhaps due to the use of improper light scattering models developed for far-field problems. Our solution was validated by comparison with the gold-standard MC model, and it enabled accurate calculations of internal intensity distributions in an inhomogeneous near-light-source domain. Thus our strategy can be applied to studying how illuminated light spreads through an inhomogeneous brain area, or to determining the amount of light required for optogenetic manipulation of a specific neural target area. PMID:26914289

  9. Assessment of physician and patient (child and adult) equivalent doses during renal angiography by Monte Carlo method.

    PubMed

    Karimian, A; Nikparvar, B; Jabbari, I

    2014-11-01

    Renal angiography is one of the medical imaging methods in which patient and physician receive high equivalent doses due to the long duration of fluoroscopy. In this research, equivalent doses of some radiosensitive tissues of the patient (adult and child) and the physician during renal angiography have been calculated using adult and child Oak Ridge National Laboratory phantoms and the Monte Carlo method (MCNPX). The results showed that, in angiography of the right kidney in a child and an adult patient, the gall bladder, with 2.32 and 0.35 mSv respectively, received the highest equivalent dose. For the physician, the left hand, left eye and thymus absorbed the highest doses, namely 0.020 mSv. In addition, the equivalent doses of the physician's eye lens, thyroid and knees were 0.023, 0.007 and 7.9E-4 mSv, respectively. Although these values are less than the thresholds reported in ICRP 103, it should be noted that these amounts relate to a single examination. PMID:25063788

  10. Accelerated kinetics of amorphous silicon using an on-the-fly off-lattice kinetic Monte-Carlo method

    NASA Astrophysics Data System (ADS)

    Joly, Jean-Francois; El-Mellouhi, Fedwa; Beland, Laurent Karim; Mousseau, Normand

    2011-03-01

    The time evolution of a series of well-relaxed amorphous silicon models was simulated using the kinetic Activation-Relaxation Technique (kART), an on-the-fly off-lattice kinetic Monte Carlo method. This novel algorithm uses the ART nouveau algorithm to generate activated events and links them with local topologies. It was shown to work well for crystals with few defects, but this is the first time it has been used to study an amorphous material. A parallel implementation allows us to increase the speed of the event generation phase. After each KMC step, new searches are initiated for each new topology encountered. Well-relaxed amorphous silicon models of 1000 atoms, described by a modified version of the empirical Stillinger-Weber potential, were used as a starting point for the simulations. Initial results show that the method is faster by orders of magnitude compared to conventional MD simulations up to temperatures of 500 K. Vacancy-type defects were also introduced in this system, and their stability and lifetimes are calculated.

  11. Comparison of Residence Time Estimation Methods for Radioimmunotherapy Dosimetry and Treatment Planning—Monte Carlo Simulation Studies

    PubMed Central

    He, Bin; Wahl, Richard L.; Du, Yong; Sgouros, George; Jacene, Heather; Flinn, Ian; Frey, Eric C.

    2008-01-01

    Estimating the residence times in tumor and normal organs is an essential part of treatment planning for radioimmunotherapy (RIT). This estimation is usually done using a conjugate-view whole-body scan time series and planar processing. This method has logistical and cost advantages compared to 3-D imaging methods such as single photon emission computed tomography (SPECT), but, because it does not provide information about the 3-D distribution of activity, it is difficult to fully compensate for effects such as attenuation and background and overlapping activity. Incomplete compensation for these effects reduces the accuracy of the residence time estimates. In this work we compare residence time estimates obtained using planar methods to those from methods based on quantitative SPECT (QSPECT) reconstructions. We have previously developed QSPECT methods that provide compensation for attenuation, scatter, collimator-detector response, and partial volume effects. In this study we compared residence time estimation using QSPECT to planar methods. The evaluation was done using the realistic NCAT phantom with organ time activities that model 111In ibritumomab tiuxetan. Projection data were obtained using Monte Carlo simulations (MCS) that realistically model the image formation process, including penetration and scatter in the collimator-detector system. These projection data were used to evaluate the accuracy of residence time estimation using a time series of QSPECT studies, a single QSPECT study combined with planar scans, and the planar scans alone. The errors in the residence time estimates were <3.8%, <15%, and 2%–107% for the QSPECT, hybrid planar/QSPECT, and planar methods, respectively. The quantitative accuracy was worst for pure planar processing and best for pure QSPECT processing.
Hybrid planar/QSPECT methods, where a single QSPECT study was combined with a series of planar scans, provided a large and statistically significant improvement in quantitative accuracy for most organs compared to the planar scans alone, even without sophisticated attention to background subtraction or thickness corrections in planar processing. These results indicate that hybrid planar/QSPECT methods are generally superior to pure planar methods and may be an acceptable alternative to performing a time series of QSPECT studies. PMID:18390348
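
    Whichever imaging route supplies the time-activity samples (planar, hybrid, or QSPECT), the final step of residence time estimation is integrating the organ's time-activity curve and dividing by the administered activity. A minimal sketch, assuming a trapezoidal rule plus an exponential tail fitted to the last two samples (the function name and the sample values below are hypothetical, not from the paper):

```python
import math

def residence_time_hours(times_h, activities, administered_activity):
    """Residence time (h): time-integrated activity / administered activity.

    Trapezoids over the sampled points, plus an analytic exponential tail
    fitted to the last two samples (a common convention; the paper's exact
    tail handling may differ).
    """
    area = 0.0
    for i in range(1, len(times_h)):
        area += 0.5 * (activities[i] + activities[i - 1]) * (times_h[i] - times_h[i - 1])
    # Exponential tail: A(t) = A_last * exp(-lam * (t - t_last))
    a0, a1 = activities[-2], activities[-1]
    t0, t1 = times_h[-2], times_h[-1]
    lam = math.log(a0 / a1) / (t1 - t0)
    area += a1 / lam
    return area / administered_activity

# Mono-exponential organ curve with a 60 h effective half-life (made up)
lam_true = math.log(2) / 60.0
times = [1.0, 24.0, 72.0, 144.0]
acts = [100.0 * math.exp(-lam_true * t) for t in times]
tau = residence_time_hours(times, acts, 100.0)
```

    For a mono-exponential curve the exact answer is 1/lambda, so the sketch can be checked against 60/ln 2 ≈ 86.6 h.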

  12. Group membership prediction when known groups consist of unknown subgroups: a Monte Carlo comparison of methods.

    PubMed

    Finch, W Holmes; Bolin, Jocelyn H; Kelley, Ken

    2014-01-01

    Classification using standard statistical methods such as linear discriminant analysis (LDA) or logistic regression (LR) presumes knowledge of group membership prior to the development of a prediction algorithm. However, in many real-world applications, members of the same nominal group may in fact come from different subpopulations on the underlying construct. For example, individuals diagnosed with depression will not all have the same level of the disorder, though for the purposes of LDA or LR they are treated identically. The goal of this simulation study was to examine the performance of several classification methods when group membership was not homogeneous; for example, three known groups each of which comprises two unknown subclasses. Several approaches were compared, including LDA, LR, classification and regression trees (CART), generalized additive models (GAM), and mixture discriminant analysis (MIXDA). Results of the study indicated that CART and MIXDA were the most effective tools when known groups were not homogeneous, whereas LDA, LR, and GAM had the highest rates of misclassification. Implications of these results for theory and practice are discussed. PMID:24904445
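
    The core finding — one prototype per known group fails when groups hide subclasses — can be illustrated with a toy one-dimensional experiment. The sketch below compares an LDA-like single-mean rule with a MIXDA-like rule that first discovers subclasses via a tiny k-means; all data and helper names are hypothetical and far simpler than the study's design:

```python
import random

random.seed(7)

def kmeans2(xs, iters=20):
    """Tiny 1-D 2-means clustering; centers start at the data extremes."""
    centers = [min(xs), max(xs)]
    for _ in range(iters):
        buckets = [[], []]
        for x in xs:
            buckets[0 if abs(x - centers[0]) <= abs(x - centers[1]) else 1].append(x)
        centers = [sum(b) / len(b) if b else c for b, c in zip(buckets, centers)]
    return centers

# Two known groups, each secretly a mixture of two subclasses.
group_a = [random.gauss(-3, 0.4) for _ in range(100)] + [random.gauss(3, 0.4) for _ in range(100)]
group_b = [random.gauss(0, 0.4) for _ in range(100)] + [random.gauss(6, 0.4) for _ in range(100)]

# LDA-like: one prototype (the overall mean) per known group.
single = [("A", sum(group_a) / len(group_a)), ("B", sum(group_b) / len(group_b))]
# MIXDA-like: one prototype per discovered subclass.
mixture = [("A", c) for c in kmeans2(group_a)] + [("B", c) for c in kmeans2(group_b)]

def accuracy(prototypes, labeled):
    ok = total = 0
    for label, xs in labeled:
        for x in xs:
            pred = min(prototypes, key=lambda p: abs(x - p[1]))[0]
            ok += pred == label
            total += 1
    return ok / total

labeled = [("A", group_a), ("B", group_b)]
acc_single = accuracy(single, labeled)
acc_mixture = accuracy(mixture, labeled)
```

    With these bimodal groups the single-prototype rule misassigns entire subclasses (accuracy near 0.5), while the subclass-aware rule separates them almost perfectly, mirroring the advantage the study reports for CART and MIXDA.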

  13. Monte Carlo Simulation Methods for Computing Liquid-Vapor Saturation Properties of Model Systems.

    PubMed

    Rane, Kaustubh S; Murali, Sabharish; Errington, Jeffrey R

    2013-06-11

    We discuss molecular simulation methods for computing the phase coexistence properties of complex molecules. The strategies that we pursue are histogram-based approaches in which thermodynamic properties are related to relevant probability distributions. We first outline grand canonical and isothermal-isobaric methods for directly locating a saturation point at a given temperature. In the former case, we show how reservoir and growth expanded ensemble techniques can be used to facilitate the creation and insertion of complex molecules within a grand canonical simulation. We next focus on grand canonical and isothermal-isobaric temperature expanded ensemble techniques that provide a means to trace saturation lines over a wide range of temperatures. To demonstrate the utility of the strategies introduced here, we present phase coexistence data for a series of molecules, including n-octane, cyclohexane, water, 1-propanol, squalane, and pyrene. Overall, we find the direct grand canonical approach to be the most effective means to directly locate a coexistence point at a given temperature and the isothermal-isobaric temperature expanded ensemble scheme to provide the most effective means to follow a saturation curve to low temperature. PMID:26583852
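
    The grand canonical machinery the authors build on can be demonstrated on the simplest possible system, an ideal gas, where the insertion and deletion acceptance probabilities reduce to zV/(N+1) and N/(zV) and the average particle number must converge to zV. A minimal sketch (the complex molecules in the paper additionally need the reservoir and growth expanded ensemble moves, which are omitted here):

```python
import random

random.seed(1)

def gcmc_ideal_gas(z_times_v, sweeps=200000):
    """Grand canonical MC for an ideal gas (no interactions, so only N matters).

    Insertion is accepted with min(1, zV/(N+1)), deletion with min(1, N/zV);
    <N> must converge to zV, the exact ideal-gas result.
    """
    n = 0
    total = 0
    for _ in range(sweeps):
        if random.random() < 0.5:                      # attempt an insertion
            if random.random() < z_times_v / (n + 1):
                n += 1
        elif n > 0:                                    # attempt a deletion
            if random.random() < n / z_times_v:
                n -= 1
        total += n
    return total / sweeps

avg_n = gcmc_ideal_gas(5.0)
```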

  14. Group membership prediction when known groups consist of unknown subgroups: a Monte Carlo comparison of methods

    PubMed Central

    Finch, W. Holmes; Bolin, Jocelyn H.; Kelley, Ken

    2014-01-01

    Classification using standard statistical methods such as linear discriminant analysis (LDA) or logistic regression (LR) presume knowledge of group membership prior to the development of an algorithm for prediction. However, in many real world applications members of the same nominal group, might in fact come from different subpopulations on the underlying construct. For example, individuals diagnosed with depression will not all have the same levels of this disorder, though for the purposes of LDA or LR they will be treated in the same manner. The goal of this simulation study was to examine the performance of several methods for group classification in the case where within group membership was not homogeneous. For example, suppose there are 3 known groups but within each group two unknown classes. Several approaches were compared, including LDA, LR, classification and regression trees (CART), generalized additive models (GAM), and mixture discriminant analysis (MIXDA). Results of the study indicated that CART and mixture discriminant analysis were the most effective tools for situations in which known groups were not homogeneous, whereas LDA, LR, and GAM had the highest rates of misclassification. Implications of these results for theory and practice are discussed. PMID:24904445

  15. A practical cone-beam CT scatter correction method with optimized Monte Carlo simulations for image-guided radiation therapy.

    PubMed

    Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B; Jia, Xun

    2015-05-01

    Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 to 3 HU and from 78 to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved computation time of less than 30 s including the time for both the scatter estimation and CBCT reconstruction steps. The efficacy of our method and its high computational efficiency make our method attractive for clinical use. PMID:25860299
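
    One of the speed-ups mentioned, interpolation along the angular direction, works because scatter varies smoothly with gantry angle: MC scatter maps are computed at sparse angles only and interpolated to all other projections before subtraction. A schematic sketch, assuming a smooth stand-in scatter signal rather than actual MC output (all values are hypothetical):

```python
import math

def interp_angular(sparse_angles, sparse_vals, query_angles):
    """Linear interpolation along the gantry angle (periodic, degrees)."""
    out = []
    for q in query_angles:
        q %= 360.0
        for i in range(len(sparse_angles)):
            a0 = sparse_angles[i]
            a1 = sparse_angles[(i + 1) % len(sparse_angles)]
            span = (a1 - a0) % 360.0       # arc length of this bracket
            off = (q - a0) % 360.0         # offset of q inside the bracket
            if off <= span:
                w = off / span
                out.append((1 - w) * sparse_vals[i] + w * sparse_vals[(i + 1) % len(sparse_vals)])
                break
    return out

# Stand-in for MC scatter estimates at every 45 degrees (hypothetical values).
angles = [0.0, 45.0, 90.0, 135.0, 180.0, 225.0, 270.0, 315.0]
scatter = [10.0 + 2.0 * math.cos(math.radians(a)) for a in angles]
dense = interp_angular(angles, scatter, [float(a) for a in range(360)])
# Scatter removal: subtract the interpolated estimate from each raw projection.
corrected = [raw - s for raw, s in zip([50.0] * 360, dense)]
```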

  16. Monte Carlo Methods to Establish Confidence in Planets Discovered by Transit Photometry

    NASA Astrophysics Data System (ADS)

    Jenkins, J. M.; Caldwell, D. A.

    2000-12-01

    With the astonishing discovery of about a dozen super giant short-period (<7 days) planets in the last five years, astronomers are turning to transit photometry to discover new planets and to confirm radial velocity detections. Indeed, ground-based transit photometry provided the first direct confirmation of a planetary detection, HD209458b. Transits of HD209458b were also detected in the Hipparcos photometry archive collected several years earlier. Several space-borne photometers have been proposed to detect extrasolar planets. The focus of NASA Ames' Kepler Mission is the detection of Earth-size planets. In this paper we focus attention on the problem of assessing the significance of a candidate transit signature. There are two fundamental quantities of interest required to establish the confidence in a planetary candidate. These are: 1) the equivalent number of independent statistical tests conducted in searching the light curve of one star for transiting planets over a given range of periods, and 2) the characteristics of the observation noise for the light curve in question. The latter quantity determines the false alarm rate for a single test for that particular star as a function of the detection threshold. The former quantity, together with the total number of target stars in the observing program, dictates the requisite single-test false alarm rate based on the acceptable total number of false alarms. The methods described do not make any presumptions about the distribution of the observational noise. In addition, they either provide conservative results for non-white noise or take the correlation structure of the noise into account. The results of this paper show that transit photometry is a promising method for detecting planets even in the presence of colored, non-Gaussian noise and with the required large number of target stars (>100,000 stars in the case of the Kepler Mission) for the small geometric probability of transit alignment. Support for this work was received from NASA's Discovery Program.
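
    For intuition, if the noise were white and Gaussian (exactly the assumption the paper's methods are designed to avoid), the requisite single-test threshold would follow directly from the effective number of tests and the survey-wide false-alarm budget. A sketch with hypothetical test counts:

```python
from statistics import NormalDist

def single_test_threshold(n_eff_tests, n_stars, total_false_alarms=1.0):
    """Detection threshold (in sigma) keeping the expected number of false
    alarms over the whole survey at `total_false_alarms`, assuming (for
    illustration only) white Gaussian noise."""
    per_test_rate = total_false_alarms / (n_eff_tests * n_stars)
    return NormalDist().inv_cdf(1.0 - per_test_rate)

# e.g. ~1.7e5 effective independent tests per star, 100,000 target stars
thr = single_test_threshold(1.7e5, 1e5)
```

    At roughly 10^10 total effective tests this lands in the mid-6-sigma range; colored or non-Gaussian noise shifts both quantities, which is why the paper estimates them empirically per star.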

  17. Parameter Estimation Using Markov Chain Monte Carlo Methods for Gravitational Waves from Spinning Inspirals of Compact Objects

    NASA Astrophysics Data System (ADS)

    Raymond, Vivien

    2012-05-01

    Gravitational waves are on the verge of opening a brand new window on the Universe. However, gravitational wave astronomy comes with unique challenges in data analysis and signal processing on the way to new discoveries in astrophysics. Among the sources of gravitational waves, inspiraling binary systems of compact objects, neutron stars and/or black holes in the mass range 1-100 Msun, stand out as likely to be detected and relatively easy to model. The detection of a gravitational wave event is challenging and will be a rewarding achievement by itself. After such a detection, measurement of source properties holds major promise for improving our astrophysical understanding, and requires reliable methods for parameter estimation and model selection. This is a complicated problem because of the large number of parameters (15 for spinning compact objects in a quasi-circular orbit) and the degeneracies between them, the significant amount of structure in the parameter space, and the particularities of the detector noise. This work presents the development of a parameter-estimation and model-selection algorithm, based on Bayesian statistical theory and using Markov chain Monte Carlo methods, for ground-based gravitational-wave detectors (LIGO and Virgo). Starting from existing non-spinning and single-spin stand-alone analysis codes, the method was developed to tackle the complexity of fully spinning systems and infer all spin parameters of a compact binary. Not only are spin parameters believed to be astrophysically significant, but this work has shown that omitting them from the analysis can bias parameter recovery. This work made it possible to answer several scientific questions involving parameter estimation of inspiraling spinning compact objects, which are addressed in the chapters of this dissertation.
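
    The Markov chain Monte Carlo core of such an analysis is compact. Below is a toy random-walk Metropolis sketch over two "mass" parameters with a Gaussian stand-in for the likelihood; real LIGO/Virgo analyses sample 15 correlated parameters with parallel tempering and a templated waveform likelihood, so nothing here is the actual pipeline:

```python
import math
import random

random.seed(3)

def log_post(m1, m2):
    """Toy log-posterior: independent Gaussians around 'true' masses
    (a stand-in for a real gravitational-wave likelihood)."""
    return -0.5 * ((m1 - 10.0) ** 2 / 0.5 ** 2 + (m2 - 8.0) ** 2 / 0.5 ** 2)

def metropolis(n_steps=20000, step=0.3):
    x = (5.0, 5.0)  # deliberately poor starting point
    lp = log_post(*x)
    chain = []
    for _ in range(n_steps):
        prop = (x[0] + random.gauss(0, step), x[1] + random.gauss(0, step))
        lp_prop = log_post(*prop)
        # Metropolis acceptance: always accept uphill, sometimes downhill
        if random.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        chain.append(x)
    return chain[n_steps // 2:]  # discard burn-in

chain = metropolis()
m1_mean = sum(s[0] for s in chain) / len(chain)
m2_mean = sum(s[1] for s in chain) / len(chain)
```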

  18. A novel method for online health prognosis of equipment based on hidden semi-Markov model using sequential Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Liu, Qinming; Dong, Ming; Peng, Ying

    2012-10-01

    Health prognosis of equipment is considered a key process of the condition-based maintenance strategy. It helps reduce the risks and maintenance costs of equipment and improve its availability, reliability, and security. However, equipment often operates under dynamic operational and environmental conditions, and its lifetime is generally described by monitored nonlinear time-series data. Because equipment is subject to high levels of uncertainty and unpredictability, effective methods for its online health prognosis are still needed. This paper addresses prognostic methods based on a hidden semi-Markov model (HSMM) using the sequential Monte Carlo (SMC) method. The HSMM is applied to obtain the transition probabilities among health states and the state durations, while the SMC method describes the probability relationships between health states and the monitored observations of equipment. The paper proposes a novel multi-step-ahead health recognition algorithm based on the joint probability distribution to recognize the health states of equipment and their change points, and develops a new online prognostic method to estimate the residual useful lifetime (RUL) of equipment. A real case study demonstrates the performance and potential applications of the proposed methods for online health prognosis of equipment.
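
    The SMC ingredient is a particle filter: propagate particles through the state dynamics, weight them by the observation likelihood, and resample. A minimal bootstrap filter on a linear-Gaussian toy model, standing in for the paper's HSMM health states (all model constants are illustrative):

```python
import math
import random

random.seed(5)

def particle_filter(observations, n_particles=500):
    """Bootstrap SMC filter for x_t = 0.9 x_{t-1} + v, y_t = x_t + w."""
    particles = [random.gauss(0, 1) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # propagate through the state dynamics
        particles = [0.9 * p + random.gauss(0, 0.5) for p in particles]
        # weight by the observation likelihood (Gaussian, sigma = 0.5)
        weights = [math.exp(-0.5 * ((y - p) / 0.5) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # multinomial resampling
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates

# Simulate a short trajectory and filter it
truth, obs = [], []
x = 2.0
for _ in range(50):
    x = 0.9 * x + random.gauss(0, 0.5)
    truth.append(x)
    obs.append(x + random.gauss(0, 0.5))
est = particle_filter(obs)
rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth))
```

    The filtered estimate should beat the raw observation noise (sigma = 0.5), which the RMSE check below verifies.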

  19. Calculating dosimetry parameters in brachytherapy using the continuous beta spectrum of Sm-153 in the Monte Carlo simulation approach

    NASA Astrophysics Data System (ADS)

    Shahrabi, Mohammad; Tavakoli-Anbaran, Hossien

    2015-02-01

    Calculation of dosimetry parameters by the TG-60 approach for beta sources, and the TG-43 approach for gamma sources, can help in the design of brachytherapy sources. In this work, TG-60 dosimetry parameters are calculated for the Sm-153 brachytherapy seed using the Monte Carlo simulation approach. The continuous beta spectrum of Sm-153 and its probability density are applied to simulate the Sm-153 source. Sm-153 is produced by neutron capture via the 152Sm(n,γ)153Sm reaction in reactors; the radionuclide decays by beta emission followed by gamma rays, with a half-life of 1.928 days. The Sm-153 source is simulated in a spherical water phantom to calculate the deposited energy and the geometry function at the points of interest. The seed consists of 20% samarium, 30% calcium and 50% silicon, in cylindrical shape with density 1.76 g/cm^3. The anisotropy function and radial dose function were calculated at radial distances of 0-4 mm from the seed center and polar angles of 0-90 degrees. The results of this research are compared with those of Taghdiri et al. (Iran. J. Radiat. Res. 9, 103 (2011)), in which the final beta spectrum of Sm-153 was not considered. Results show significant relative differences, up to a factor of 5, for the anisotropy functions at 0.6, 1 and 2 mm distances and some angles. The MCNP4C Monte Carlo code is applied both in the present paper and in the above-mentioned one.
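
    Simulating the source from its continuous beta spectrum amounts to drawing energies from the spectral shape, e.g. by rejection sampling. The sketch below uses the generic allowed-decay shape p·E_tot·(Q−E)² with an approximate Sm-153 endpoint and no Fermi function or branch weighting, so it is schematic rather than the paper's actual input spectrum:

```python
import math
import random

random.seed(11)

ME = 0.511  # electron rest energy, MeV
Q = 0.81    # approximate Sm-153 beta endpoint, MeV (schematic; no Fermi
            # function or branching-ratio weighting)

def beta_shape(e):
    """Unnormalized allowed beta spectrum dN/dE = p * E_tot * (Q - E)^2."""
    p = math.sqrt(e * (e + 2 * ME))  # electron momentum (MeV/c)
    return p * (e + ME) * (Q - e) ** 2

def sample_beta(n):
    """Rejection sampling against a constant envelope at the spectral peak."""
    peak = max(beta_shape(Q * i / 1000) for i in range(1, 1000))
    out = []
    while len(out) < n:
        e = random.uniform(0, Q)
        if random.uniform(0, peak) < beta_shape(e):
            out.append(e)
    return out

energies = sample_beta(20000)
mean_e = sum(energies) / len(energies)
```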

  20. Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes

    SciTech Connect

    Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.

    2002-09-11

    The calculations presented compare the performance of three Monte Carlo codes, PENELOPE-1999, MCNP-4C and PITS, for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall model of a cell is a water cylinder equivalent for the three codes, but with different internal scoring geometries: hollow cylinders for PENELOPE and MCNP, and spheres for PITS. A cylindrical cell geometry with hollow-cylinder scoring volumes was initially selected for PENELOPE and MCNP because it better represents the actual shape and dimensions of a cell and improves computer-time efficiency compared to spherical internal volumes. Some of the transfer points and energy transfers that constitute a radiation track may actually fall in the space between spheres, outside the spherical scoring volumes. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time for this code compared with event-by-event Monte Carlo codes like PITS. This preliminary work has been important for addressing dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation, even with such small geometries and energies involved, which are far below the normal use for which the code was created. Further work (initiated in summer 2002) is still needed, however, to create a user-code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes and also microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.

  1. Percolation of the site random-cluster model by Monte Carlo method.

    PubMed

    Wang, Songsong; Zhang, Wanzhou; Ding, Chengxiang

    2015-08-01

    We propose a site random-cluster model by introducing an additional cluster weight in the partition function of the traditional site percolation. To simulate the model on a square lattice, we combine the color-assignation and the Swendsen-Wang methods to design a highly efficient cluster algorithm with a small critical slowing-down phenomenon. To verify whether or not it is consistent with the bond random-cluster model, we measure several quantities, such as the wrapping probability Re, the percolating cluster density P∞, and the magnetic susceptibility per site χp, as well as two exponents, such as the thermal exponent yt and the fractal dimension yh of the percolating cluster. We find that for different exponents of cluster weight q = 1.5, 2, 2.5, 3, 3.5, and 4, the numerical estimation of the exponents yt and yh are consistent with the theoretical values. The universalities of the site random-cluster model and the bond random-cluster model are completely identical. For larger values of q, we find obvious signatures of the first-order percolation transition by the histograms and the hysteresis loops of percolating cluster density and the energy per site. Our results are helpful for the understanding of the percolation of traditional statistical models. PMID:26382364

  2. Evaluation of inter- and intramolecular primary structure homologies of interferons by a Monte Carlo method.

    PubMed

    Wagner, F; Hart, R; Fink, R; Classen, M

    1990-02-01

    Using the Sellers TT algorithm, primary structure repeats have been described for interferon (IFN)-alpha, -beta 1, and -gamma. To reevaluate these results and to extend them to IFN-beta 2 (interleukin-6), a modified algorithm was developed that uses a metric to define the "best" partial homology of two peptide sequences and to compare it to those detected in random permutations of the peptide. Using this approach, the known structural homologies of IFN-alpha with IFN-beta 1 and of human (Hu) IFN-gamma with murine (Mu) IFN-gamma were identified correctly. However, the primary structure repeats in the amino acid sequences of IFN-alpha, -beta 1, and -gamma turned out to be no better than those detectable in random permutations of these sequences. These results were confirmed using a different, nonlinear metric. A previously used approach to demonstrate significance was shown to produce false-positive results. No significant primary structure homologies were detected among IFN-beta 1, -beta 2, and -gamma. In contrast to the amino acid sequence analysis, the DNA sequence of HuIFN-beta 1 contained a significant repeat that had no significant counterpart in MuIFN-beta or in IFN-alpha. In conclusion, some previously reported results obtained with the Sellers TT algorithm on amino acid sequences are easily explained as random similarities, and it is therefore strongly recommended that a method like ours be used to control significance. PMID:1691767

  3. Percolation of the site random-cluster model by Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wang, Songsong; Zhang, Wanzhou; Ding, Chengxiang

    2015-08-01

    We propose a site random-cluster model by introducing an additional cluster weight in the partition function of the traditional site percolation. To simulate the model on a square lattice, we combine the color-assignation and the Swendsen-Wang methods to design a highly efficient cluster algorithm with a small critical slowing-down phenomenon. To verify whether or not it is consistent with the bond random-cluster model, we measure several quantities, such as the wrapping probability Re, the percolating cluster density P∞, and the magnetic susceptibility per site χp, as well as two exponents, such as the thermal exponent yt and the fractal dimension yh of the percolating cluster. We find that for different exponents of cluster weight q = 1.5, 2, 2.5, 3, 3.5, and 4, the numerical estimation of the exponents yt and yh are consistent with the theoretical values. The universalities of the site random-cluster model and the bond random-cluster model are completely identical. For larger values of q, we find obvious signatures of the first-order percolation transition by the histograms and the hysteresis loops of percolating cluster density and the energy per site. Our results are helpful for the understanding of the percolation of traditional statistical models.
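
    For orientation, the q = 1 limit of this model is ordinary site percolation, whose cluster geometry can be probed with a single union-find pass over the occupied sites. The sketch below measures a spanning (top-to-bottom) probability rather than the paper's torus wrapping probability Re, and makes no use of the cluster weight; it shows only the percolation bookkeeping:

```python
import random

random.seed(2)

def percolates(grid):
    """Union-find check: do occupied sites connect the top row to the bottom?"""
    n = len(grid)
    parent = list(range(n * n + 2))  # plus two virtual nodes: top, bottom
    TOP, BOT = n * n, n * n + 1

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for r in range(n):
        for c in range(n):
            if not grid[r][c]:
                continue
            i = r * n + c
            if r == 0:
                union(i, TOP)
            if r == n - 1:
                union(i, BOT)
            if r > 0 and grid[r - 1][c]:
                union(i, (r - 1) * n + c)
            if c > 0 and grid[r][c - 1]:
                union(i, r * n + c - 1)
    return find(TOP) == find(BOT)

def spanning_probability(n, p, trials=200):
    hits = 0
    for _ in range(trials):
        grid = [[random.random() < p for _ in range(n)] for _ in range(n)]
        hits += percolates(grid)
    return hits / trials

# Below and above the site-percolation threshold (~0.593 on the square lattice)
low, high = spanning_probability(32, 0.45), spanning_probability(32, 0.75)
```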

  4. Assessment of high-fidelity collision models in the direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Weaver, Andrew B.

    Advances in computer technology over the decades have allowed more complex physics to be modeled in the DSMC method. When the first paper on DSMC appeared in 1963, 30,000 collision events per hour were simulated using a simple hard-sphere model; today, more than 10 billion collision events can be simulated per hour for the same problem. Many new and more physically realistic collision models, such as the Lennard-Jones potential and the forced harmonic oscillator model, have since been introduced into DSMC. However, the fact that computer resources are more readily available and higher-fidelity models have been developed does not necessitate their usage. It is important to understand how such high-fidelity models affect the output quantities of interest in engineering applications. The effects of elastic and inelastic collision models on compressible Couette flow, ground-state atomic oxygen transport properties, and normal shock waves have therefore been investigated. Recommendations for variable soft sphere and Lennard-Jones model parameters are made based on a critical review of recent ab-initio calculations and experimental measurements of transport properties.
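
    The baseline against which these higher-fidelity models are judged is the hard-sphere collision: equal-mass partners scatter isotropically in the center-of-mass frame, preserving momentum and kinetic energy exactly. A minimal sketch of that elastic collision step (the velocity values are arbitrary examples):

```python
import math
import random

random.seed(4)

def hs_collide(v1, v2):
    """Hard-sphere elastic collision for equal masses: rotate the relative
    velocity to a uniformly random direction in the center-of-mass frame,
    which conserves momentum and kinetic energy exactly."""
    g = math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))  # |relative velocity|
    # isotropic post-collision direction
    cos_t = 2 * random.random() - 1
    sin_t = math.sqrt(1 - cos_t ** 2)
    phi = 2 * math.pi * random.random()
    gr = (g * sin_t * math.cos(phi), g * sin_t * math.sin(phi), g * cos_t)
    cm = tuple(0.5 * (a + b) for a, b in zip(v1, v2))  # center-of-mass velocity
    v1p = tuple(c + 0.5 * w for c, w in zip(cm, gr))
    v2p = tuple(c - 0.5 * w for c, w in zip(cm, gr))
    return v1p, v2p

a, b = (300.0, 0.0, 0.0), (-100.0, 50.0, 0.0)
ap, bp = hs_collide(a, b)
```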

  5. Study on method to simulate light propagation on tissue with characteristics of radial-beam LED based on Monte-Carlo method.

    PubMed

    Song, Sangha; Elgezua, Inko; Kobayashi, Yo; Fujie, Masakatsu G

    2013-01-01

    In biomedical optics, Monte Carlo (MC) simulation is commonly used to model light diffusion in tissue, but most previous studies have not considered a radial-beam LED as the light source. We therefore characterized a radial-beam LED and applied those characteristics to the light source of an MC simulation. Three characteristics are considered: first, the initial launch area of photons; second, the incident angle of a photon at the initial launch area; and third, the refraction effect according to the contact area between the LED and the turbid medium. To verify the MC simulation, we compared simulated and experimental results; the average correlation coefficient between them is 0.9954. This study presents an effective method to simulate light diffusion in tissue with the characteristics of a radial-beam LED in an MC framework. PMID:24109615
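
    The three launch characteristics can be written as a single sampling routine: a uniform position on the emitting disc, a polar angle from an emission profile, and Snell refraction into the medium. A sketch assuming a Lambertian profile as a stand-in for the measured LED radiance (all numbers are illustrative, not the paper's measured data):

```python
import math
import random

random.seed(9)

def launch_photon(radius_mm=2.5, n_air=1.0, n_tissue=1.4):
    """Sample one photon launch for a radial-beam LED (illustrative numbers).

    1) position: uniform over the emitting disc (r = R * sqrt(u));
    2) direction: cosine-weighted (Lambertian) polar angle, standing in
       for a measured LED radiance profile;
    3) the polar angle is refracted into the tissue with Snell's law.
    """
    r = radius_mm * math.sqrt(random.random())
    phi = 2 * math.pi * random.random()
    x, y = r * math.cos(phi), r * math.sin(phi)
    theta_air = math.asin(math.sqrt(random.random()))  # cosine-weighted
    theta_tissue = math.asin(math.sin(theta_air) * n_air / n_tissue)
    return x, y, theta_tissue

samples = [launch_photon() for _ in range(20000)]
mean_r = sum(math.hypot(x, y) for x, y, _ in samples) / len(samples)
max_theta = math.asin(1.0 / 1.4)  # refraction caps the angle inside tissue
```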

  6. MO-E-18C-02: Hands-On Monte Carlo Project Assignment as a Method to Teach Radiation Physics

    SciTech Connect

    Pater, P; Vallieres, M; Seuntjens, J

    2014-06-15

    Purpose: To present a hands-on project on Monte Carlo methods (MC) recently added to the curriculum and to discuss the students' appreciation. Methods: Since 2012, a 1.5 hour lecture dedicated to MC fundamentals follows the detailed presentation of photon and electron interactions. Students also program all sampling steps (interaction length and type, scattering angle, energy deposit) of an MC photon transport code. A handout structured in a step-by-step fashion guides students in conducting consistency checks. For extra points, students can code a fully working MC simulation that computes a dose distribution for 50 keV photons; a kerma approximation to dose deposition is assumed. A survey was conducted, to which 10 of the 14 attending students responded, comparing MC knowledge before and after the project, questioning the usefulness of teaching radiation physics through MC, and soliciting possible project improvements. Results: According to the survey, 76% of students had no or only basic knowledge of MC methods before the class, and 65% estimate they have a good to very good understanding of MC methods after attending it. 80% of students feel the MC project helped them significantly in understanding simulations of dose distributions. On average, students dedicated 12.5 hours to the project and appreciated the balance between hand-holding and open questions. Conclusion: A lecture on MC methods with a hands-on MC programming project requiring about 14 hours has been part of the graduate curriculum since 2012. MC methods produce “gold standard” dose distributions and are slowly entering routine clinical work, so a fundamental understanding of MC methods should be a requirement for future students. Overall, the lecture and project helped students relate cross-sections to dose deposition and presented the numerical sampling methods behind the simulation of dose distributions. Research funding from the governments of Canada and Quebec. PP acknowledges partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290).
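
    The sampling steps the handout walks through (interaction length, then interaction type) condense into a few lines. A sketch for a 1-D slab with made-up coefficients, checked against the analytic exp(−μx) attenuation of the direct beam:

```python
import math
import random

random.seed(6)

MU_TOTAL = 0.2         # total attenuation coefficient, 1/cm (made up)
P_PHOTOELECTRIC = 0.3  # photoelectric share of interactions (made up)

def transmitted_fraction(thickness_cm, n_photons=100000):
    """Sample the interaction length for each photon; any interaction inside
    the slab removes it from the direct beam, and the interaction type is
    then sampled from the cross-section ratio."""
    through = 0
    for _ in range(n_photons):
        s = -math.log(1.0 - random.random()) / MU_TOTAL  # free path length
        if s > thickness_cm:
            through += 1
        elif random.random() < P_PHOTOELECTRIC:
            pass  # photoelectric: all energy deposited locally (kerma-style)
        else:
            pass  # scatter: a full code would sample a new direction here
    return through / n_photons

f = transmitted_fraction(5.0)
analytic = math.exp(-MU_TOTAL * 5.0)
```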

  7. Clinical implementation of a GPU-based simplified Monte Carlo method for a treatment planning system of proton beam therapy

    NASA Astrophysics Data System (ADS)

    Kohno, R.; Hotta, K.; Nishioka, S.; Matsubara, K.; Tansho, R.; Suzuki, T.

    2011-11-01

    We implemented the simplified Monte Carlo (SMC) method on graphics processing unit (GPU) architecture under the compute unified device architecture (CUDA) platform developed by NVIDIA. The GPU-based SMC was clinically applied to four patients with head and neck, lung, or prostate cancer, and the results were compared to those obtained by a traditional CPU-based SMC with respect to computation time and discrepancy. In both the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms, and the dose distributions calculated by the two implementations were similar within statistical errors. The GPU-based SMC performed 12.30-16.00 times faster than the CPU-based SMC, with a computation time per beam arrangement of 9-67 s for the clinical cases. The results demonstrate the successful application of the GPU-based SMC to clinical proton treatment planning.

  8. Simulation of germanium detector calibration using the Monte Carlo method: comparison between point and surface source models.

    PubMed

    Ródenas, J; Burgos, M C; Zarza, I; Gallardo, S

    2005-01-01

    Simulation of detector calibration using the Monte Carlo method is very convenient. The computational calibration procedure using the MCNP code was validated by comparing simulation results with laboratory measurements. The standard source used for this validation was a disc-shaped filter on which fission and activation products were deposited. Some discrepancies between the MCNP results and laboratory measurements were attributed to the point-source model adopted. In this paper, the standard source has been simulated using both point and surface source models. Results from both models are compared with each other as well as with experimental measurements, with two variables considered in the analysis: the collimator diameter and the detector-source distance. As expected, the disc model proves the better model. However, the point-source model remains adequate for large collimator diameters and large detector-source distances, whereas for smaller collimators and shorter distances a surface-source model is necessary. PMID:16604596
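
    The point-versus-surface distinction is geometric: each source point sees the collimator aperture under its own solid angle. The sketch below Monte Carlo-estimates the purely geometric efficiency for a point and a disc source toward a coaxial circular aperture (dimensions are illustrative, not the paper's setup; attenuation and detector response are ignored):

```python
import math
import random

random.seed(8)

def geometric_efficiency(source_radius, det_radius, distance, n=200000):
    """Fraction of isotropic emissions passing through a coaxial circular
    aperture of radius det_radius at the given distance (geometry only)."""
    hits = 0
    for _ in range(n):
        r = source_radius * math.sqrt(random.random())  # uniform on the disc
        phi = 2 * math.pi * random.random()
        x, y = r * math.cos(phi), r * math.sin(phi)
        cos_t = random.uniform(-1, 1)                   # isotropic direction
        if cos_t <= 0:
            continue  # emitted away from the detector
        psi = 2 * math.pi * random.random()
        sin_t = math.sqrt(1 - cos_t ** 2)
        t = distance / cos_t                            # travel to aperture plane
        xe = x + t * sin_t * math.cos(psi)
        ye = y + t * sin_t * math.sin(psi)
        hits += xe * xe + ye * ye <= det_radius ** 2
    return hits / n

eff_point = geometric_efficiency(0.0, 3.0, 10.0)  # point source
eff_disc = geometric_efficiency(2.5, 3.0, 10.0)   # disc source
```

    The point-source case has the closed form 0.5·(1 − d/√(d² + R²)), which the test uses as a cross-check; the disc efficiency lies slightly below it, and the two converge as the distance grows.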

  9. Microdosimetric calculation of penumbra for biological dose in wobbled carbon-ion beams with Monte Carlo Method.

    PubMed

    Tamura, Mikoto; Komori, Masataka; Oguchi, Hiroshi; Iwamoto, Yasushi; Rachi, Toshiya; Ota, Kenji; Hemmi, Atsushi; Shimozato, Tomohiro; Obata, Yasunori

    2013-07-01

    In carbon-ion radiotherapy, it is important to evaluate the biological dose because the relative biological effectiveness values vary greatly in a patient's body. The microdosimetric kinetic model (MKM) is a method of estimating the biological effect of radiation by use of microdosimetry. The lateral biological dose distributions were estimated with a modified MKM, in which we considered the overkilling effect in the high linear-energy-transfer region. In this study, we used the Monte Carlo calculation of the Geant4 code to simulate a horizontal port at the Heavy Ion Medical Accelerator in Chiba of the National Institute of Radiological Sciences. The lateral biological dose distributions calculated by Geant4 were almost flat as the lateral absorbed dose in the flattened area. However, in the penumbra region, the lateral biological dose distributions were sharper than the lateral absorbed dose distributions. Furthermore, the differences between the lateral absorbed dose and biological dose distributions were dependent on the depth for each multi-leaf collimator opening size. We expect that the lateral biological dose distribution presented here will enable high-precision calculations for a treatment-planning system. PMID:23616248

  10. Study on formation of step bunching on 6H-SiC (0001) surface by kinetic Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Li, Yuan; Chen, Xuejiang; Su, Juan

    2016-05-01

    The formation and evolution of step bunching during step-flow growth of 6H-SiC (0001) surfaces were studied by a three-dimensional kinetic Monte Carlo (KMC) method and compared with an analytic model based on Burton-Cabrera-Frank (BCF) theory. In the KMC model the crystal lattice was represented by a structured mesh which fixed the positions of atoms and the interatomic bonding. The events considered in the model were adatom adsorption and diffusion on the terrace, and adatom attachment, detachment and interlayer transport at the step edges. In addition, the effects of Ehrlich-Schwoebel (ES) barriers at downward step edges and incorporation barriers at upward step edges were also considered. To obtain more detailed information on the behavior of atoms at the crystal surface, silicon and carbon atoms were treated as the minimal diffusing species. KMC simulation showed that multiple-height steps formed on the vicinal surface oriented toward the [11̄00] or [112̄0] directions, and the formation mechanism of the step bunching was then analyzed. Finally, to further analyze the formation processes of step bunching, a one-dimensional BCF analytic model with ES and incorporation barriers was used and solved numerically. In the BCF model, periodic boundary conditions (PBC) were applied, and the parameters corresponded to those used in the KMC model. The evolution character of the step bunching was consistent with the KMC simulation results.
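
    At the heart of any rejection-free KMC of this kind is the event-selection loop: pick an event with probability proportional to its rate and advance the clock by an exponential deviate with the total rate. A sketch with hypothetical rates standing in for the adsorption, diffusion, attachment and detachment channels:

```python
import math
import random

random.seed(12)

def kmc_step(rates):
    """One kinetic Monte Carlo step: choose event i with probability
    rate_i / R_total, then advance time by dt = -ln(u) / R_total."""
    total = sum(rates)
    x = random.random() * total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if x < acc:
            break
    dt = -math.log(1.0 - random.random()) / total
    return i, dt

# Hypothetical event rates (1/s): terrace diffusion, detachment over an
# ES barrier, attachment -- stand-ins for the paper's event catalogue.
rates = [100.0, 1.0, 20.0]
counts = [0, 0, 0]
t = 0.0
for _ in range(50000):
    i, dt = kmc_step(rates)
    counts[i] += 1
    t += dt
```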

  11. Error propagation in the computation of volumes in 3D city models with the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Biljecki, F.; Ledoux, H.; Stoter, J.

    2014-11-01

    This paper describes the analysis of the propagation of positional uncertainty in 3D city models to the uncertainty in the computation of their volumes. Current work related to error propagation in GIS is limited to 2D data and 2D GIS operations, especially of rasters. In this research we have (1) developed two engines, one that generates random 3D buildings in CityGML in multiple LODs, and one that applies simulated acquisition errors to the geometry; (2) performed an error propagation analysis on volume computation based on the Monte Carlo method; and (3) worked towards establishing a framework for investigating error propagation in 3D GIS. The results of the experiments show that a comparatively small error in the geometry of a 3D city model may cause significant discrepancies in the computation of its volume. This has consequences for several applications, such as the estimation of energy demand and property taxes. The contribution of this work is twofold: this is the first error propagation analysis in 3D city modelling, and the novel approach and the engines that we have created can be used for analysing most 3D GIS operations, supporting related research efforts in the future.
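
    The core of such an error propagation analysis (perturb the geometry, recompute the volume, collect statistics) can be sketched for a single box-shaped building. The dimensions and the per-dimension error sigma below are invented for illustration, not taken from the paper.

```python
import random
import statistics

# Monte Carlo propagation of acquisition error into a building volume,
# sketched for one box-shaped building with invented dimensions.
random.seed(42)

LENGTH, WIDTH, HEIGHT = 10.0, 8.0, 6.0   # metres (assumed true geometry)
SIGMA = 0.2                              # positional error per dimension, metres
N_RUNS = 20000

true_volume = LENGTH * WIDTH * HEIGHT

volumes = []
for _ in range(N_RUNS):
    # Simulate acquisition error on each dimension independently.
    l = random.gauss(LENGTH, SIGMA)
    w = random.gauss(WIDTH, SIGMA)
    h = random.gauss(HEIGHT, SIGMA)
    volumes.append(l * w * h)

mean_v = statistics.fmean(volumes)
sd_v = statistics.stdev(volumes)

# First-order propagation predicts
# sd(V) ~= V * SIGMA * sqrt(1/L**2 + 1/W**2 + 1/H**2),
# about 22 m^3 here, i.e. a ~4.6% volume error from a 0.2 m positional error.
print(f"true {true_volume:.1f} m^3, mean {mean_v:.1f} m^3, sd {sd_v:.1f} m^3")
```

This illustrates the paper's point: a 0.2 m positional error, small relative to the building, already produces a volume spread of several percent.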

  12. Application of Distribution Transformer Thermal Life Models to Electrified Vehicle Charging Loads Using Monte-Carlo Method: Preprint

    SciTech Connect

    Kuss, M.; Markel, T.; Kramer, W.

    2011-01-01

    Concentrated purchasing patterns of plug-in vehicles may result in localized distribution transformer overload scenarios. Prolonged periods of transformer overloading cause service-life decrements and, in worst-case scenarios, tripped thermal relays and residential service outages. This analysis will review distribution transformer load models developed in the IEC 60076 standard and apply the model to a neighborhood with plug-in hybrids. Residential distribution transformers are sized such that night-time cooling provides thermal recovery from heavy load conditions during the daytime utility peak. It is expected that PHEVs will primarily be charged at night in a residential setting. If not managed properly, some distribution transformers could become overloaded, leading to a reduction in transformer life expectancy and thus increasing costs to utilities and consumers. A Monte Carlo scheme simulated each day of the year, evaluating 100 load scenarios as it swept through the following variables: number of vehicles per transformer, transformer size, and charging rate. A general method for determining the expected transformer aging rate will be developed, based on the energy needs of plug-in vehicles loading a residential transformer.
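
    The scenario-sweep idea can be sketched as follows. The relative aging rate `2**((theta_h - 98) / 6)` follows the IEC loading-guide form for non-thermally-upgraded insulation; the hot-spot model, load distribution, and all numbers below are invented for illustration and are not the paper's models.

```python
import random

# Monte Carlo sweep over EV-charging scenarios with a transformer
# aging-rate estimate. All parameters are assumptions, not measured data.
random.seed(7)

TRANSFORMER_KVA = 25.0      # nameplate rating (assumed)
EV_CHARGE_KW = 3.3          # per-vehicle charging rate (assumed)
BASE_LOAD_KW = (8.0, 3.0)   # mean, sd of non-EV load on the transformer

def hotspot_temp(load_fraction, ambient=25.0):
    """Crude hot-spot model (assumption): ambient plus a rise that grows
    with the square of the per-unit load."""
    return ambient + 65.0 * load_fraction ** 2

def aging_rate(theta_h):
    """Relative insulation aging rate; 1.0 at a 98 degC hot spot."""
    return 2.0 ** ((theta_h - 98.0) / 6.0)

def scenario(n_evs):
    """One Monte Carlo scenario: sample the base load, add charging EVs."""
    load = max(0.0, random.gauss(*BASE_LOAD_KW)) + n_evs * EV_CHARGE_KW
    return aging_rate(hotspot_temp(load / TRANSFORMER_KVA))

mean_rate = {}
for n_evs in (0, 2, 4):
    rates = [scenario(n_evs) for _ in range(5000)]
    mean_rate[n_evs] = sum(rates) / len(rates)
    print(n_evs, "EVs -> mean relative aging rate", round(mean_rate[n_evs], 5))
```

The full study sweeps transformer size and charging rate as well; each added variable is just another loop around the same scenario sampler.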

  13. A Novel Method for the Image Quality assessment of PET Scanners by Monte Carlo simulations: Effect of the scintillator

    NASA Astrophysics Data System (ADS)

    Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Kalyvas, N. I.; Valais, I. G.; Kandarakis, I. S.; Panayiotakis, G. S.

    2014-03-01

    The aim of the present study was to propose a comprehensive method for PET scanner image quality assessment through the simulation of a thin layer chromatography (TLC) flood source with a previously validated Monte Carlo (MC) model. The model was developed using the GATE MC package, and reconstructed images were obtained using the STIR software with cluster computing. The PET scanner simulated was the GE Discovery-ST. The TLC source was immersed in an 18F-FDG bath solution (1 MBq) in order to assess image quality. The influence of different scintillating crystals on the PET scanner's image quality was investigated in terms of the MTF, the NNPS, and the DQE. Images were reconstructed by the commonly used FBP2D and FBP3DRP algorithms and the OSMAPOSL (15 subsets, 3 iterations) algorithm. The PET scanner configuration incorporating LuAP crystals provided the optimum MTF values with both 2D and 3D FBP, whereas the corresponding configuration with BGO crystals yielded the highest MTF values after OSMAPOSL. The scanner incorporating BGO crystals was also found to have the lowest noise levels and the highest DQE values after all image reconstruction algorithms. The plane source can also be useful for the experimental image quality assessment of PET and SPECT scanners in clinical practice.

  14. Impurity in a Bose-Einstein condensate: Study of the attractive and repulsive branch using quantum Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Ardila, L. A. Peña; Giorgini, S.

    2015-09-01

    We investigate the properties of an impurity immersed in a dilute Bose gas at zero temperature using quantum Monte Carlo methods. The interactions between bosons are modeled by a hard-sphere potential with scattering length a, whereas the interactions between the impurity and the bosons are modeled by a short-range, square-well potential in which both the sign and the strength of the scattering length b can be varied by adjusting the well depth. We characterize the attractive and the repulsive polaron branch by calculating the binding energy and the effective mass of the impurity. Furthermore, we investigate the structural properties of the bath, such as the impurity-boson contact parameter and the change of the density profile around the impurity. At the unitary limit of the impurity-boson interaction, we find that the effective mass of the impurity remains smaller than twice its bare mass, while the binding energy scales as ħ²n^(2/3)/m, where n is the density of the bath and m is the common mass of the impurity and the bosons in the bath. The implications for the phase diagram of binary Bose-Bose mixtures at low concentrations are also discussed.

  15. Calculation of Nonlinear Thermoelectric Coefficients of InAs1-xSbx Using Monte Carlo Method

    SciTech Connect

    Sadeghian, RB; Bahk, JH; Bian, ZX; Shakouri, A

    2011-12-28

    It was found that the nonlinear Peltier effect can take place and increase the cooling power density when a lightly doped thermoelectric material is under a large electric field. This effect is due to the Seebeck coefficient enhancement arising from an electron distribution far from equilibrium. In the nonequilibrium transport regime, the solution of the Boltzmann transport equation in the relaxation-time approximation ceases to apply. The Monte Carlo method, on the other hand, proves to be a capable tool for simulating semiconductor devices at small scales as well as thermoelectric effects with a locally nonequilibrium charge distribution. InAs1-xSbx is a favorable thermoelectric material for nonlinear operation owing to its high mobility inherited from the binary compounds InSb and InAs. In this work we report simulation results on the nonlinear Peltier power of InAs1-xSbx at low doping levels, at room temperature and at low temperatures. The thermoelectric power factor in nonlinear operation is compared with the maximum value that can be achieved with optimal doping in the linear transport regime.

  16. Quantitative study of the separation of intrinsic and scattering attenuation in South Korea, using the direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Chung, T.; Rachman, A.; Yoshimoto, K.

    2013-12-01

    For the separation of intrinsic (Qi-1) and scattering (Qs-1) attenuation in South Korea, multiple-lapse-time-window analysis using the direct simulation Monte Carlo (DSMC) method (Yoshimoto, 2000) showed that a depth-dependent velocity model divided into crust and mantle fits better than a uniform velocity model (Chung et al., 2010). Among the several S-wave velocity models, the smallest residuals were observed for a discontinuous Moho model at 32 km depth with crustal velocity increasing from 3.5 to 3.8 km/s. Chung and Yoshimoto (2013), however, reported that DSMC modeling with a 10 km source depth, corresponding to the average focal depth of the data set, gave the smallest residuals, and showed that the effect of source depth is greater than that of the Moho model. This study therefore collected 330 ray paths originating from 39 events with source depths of around 10 km in South Korea (Fig. 1) and analyzed them using the DSMC method in the same way as Chung et al. (2010). The substantial residual reduction obtained by changing the source depth indicates an advantage of the DSMC model over the analytic model. As in the previous study, we confirmed that the residual difference between Moho models is very small compared to that from the source-depth change. Based on these data, we will examine the focal mechanism effect, which we previously failed to observe (Chung and Yoshimoto, 2012). References: Chung, T.W., K. Yoshimoto, and S. Yun, 2010, BSSA, 3183-3193. Chung, T.W., and K. Yoshimoto, 2012, J.M.M.T, 85-91 (in Korean). Chung, T.W., and K. Yoshimoto, 2013, Geosciences J., submitted. Yoshimoto, K., 2000, JGR, 6153-6161. Fig. 1. Ray paths of this study

  17. Bayesian Inversion of Soil-Plant-Atmosphere Interactions for an Oak-Savanna Ecosystem Using Markov Chain Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Chen, X.; Rubin, Y.; Baldocchi, D. D.

    2005-12-01

    Understanding the interactions between soil, plant, and the atmosphere under water-stressed conditions is important for ecosystems where water availability is limited. In such ecosystems, the amount of water transferred from the soil to the atmosphere is controlled not only by weather conditions and vegetation type but also by soil water availability. Although researchers have proposed different approaches to model the impact of soil moisture on plant activities, the parameters involved are difficult to measure. However, using measurements of observed latent heat and carbon fluxes, as well as soil moisture data, Bayesian inversion methods can be employed to estimate the various model parameters. In our study, the actual evapotranspiration (ET) of an ecosystem is approximated by the Priestley-Taylor relationship, with the Priestley-Taylor coefficient modeled as a function of soil moisture content. Soil moisture limitation on root uptake is characterized in a manner similar to the Feddes model. The Bayesian inference is formulated within the framework of graphical models. Because exact inference is difficult to obtain, the Markov chain Monte Carlo (MCMC) method is implemented using a free software package, BUGS (Bayesian inference Using Gibbs Sampling). The proposed methodology is applied to a Mediterranean Oak-Savanna FLUXNET site in California, where continuous measurements of actual ET are obtained with the eddy-covariance technique and soil moisture contents are monitored by several time domain reflectometry probes located within the footprint of the flux tower. After the implementation of Bayesian inversion, the posterior distributions of all the parameters exhibit enhanced information content compared to the prior distributions. The generated samples based on data in year 2003 are used to predict the actual ET in year 2004, and the prediction uncertainties are assessed in terms of confidence intervals.
Our tests also reveal the usefulness of various types of soil moisture data in parameter estimation, which could be used to guide analyses of available data and planning of field data collection activities.
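
    The MCMC machinery behind such an inversion can be illustrated in miniature with a Metropolis sampler (the study itself used Gibbs sampling via the BUGS package). The linear "model" y = alpha * x with Gaussian noise, the data, and the flat prior below are all invented for illustration.

```python
import math
import random

# Minimal random-walk Metropolis sampler for a single model parameter.
random.seed(0)

ALPHA_TRUE, NOISE_SD = 1.26, 0.1   # e.g. a Priestley-Taylor-like coefficient
xs = [0.5 + 0.1 * i for i in range(20)]
ys = [ALPHA_TRUE * x + random.gauss(0.0, NOISE_SD) for x in xs]

def log_post(alpha):
    """Log posterior: flat prior on [0, 3] plus a Gaussian log likelihood."""
    if not 0.0 <= alpha <= 3.0:
        return -math.inf
    return -sum((y - alpha * x) ** 2 for x, y in zip(xs, ys)) / (2.0 * NOISE_SD ** 2)

samples, alpha = [], 1.0
lp = log_post(alpha)
for step in range(20000):
    prop = alpha + random.gauss(0.0, 0.05)        # random-walk proposal
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:  # Metropolis acceptance
        alpha, lp = prop, lp_prop
    if step >= 5000:                              # discard burn-in
        samples.append(alpha)

post_mean = sum(samples) / len(samples)
print("posterior mean:", round(post_mean, 3))
```

The post-burn-in samples play the same role as the BUGS output in the study: they characterize the posterior, and can be pushed through the model to produce predictions with confidence intervals.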

  18. Monte Carlo Simulations Of The Dose Distributions From Carbon Microbeams Used In An Experimental Radiation Therapy Method

    NASA Astrophysics Data System (ADS)

    Dioszegi, I.; Rusek, A.; Dane, B. R.; Chiang, I. H.; Meek, A. G.; Dilmanian, F. A.

    2011-06-01

    Recent upgrades of the MCNPX Monte Carlo code include transport of heavy ions. We employed the new code to simulate the energy and dose distributions produced by carbon beams in a rabbit's head, in and around a brain tumor. The work was within our experimental technique of interlaced carbon microbeams, which uses two 90° arrays of parallel, thin planes of carbon beams (microbeams) interlacing to produce a solid beam at the target. A similar version of the method was earlier developed with synchrotron-generated x-ray microbeams. We first simulated the Bragg peak in high-density polyethylene and other materials, where we could compare the calculated carbon energy deposition to the measured data produced at the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL). The results showed that the new MCNPX code gives a reasonable account of the carbon beam's dose up to a beam energy of 200 MeV/nucleon. At higher energies, which were not relevant to our project, the model failed to reproduce the growing nuclear-breakup tail beyond the Bragg peak. In our model calculations we determined the dose distribution along the beam path, including the angular straggling of the microbeams, and used the data to determine the optimal beam spacing in the array for producing adequate beam interlacing at the target. We also determined, for the purpose of Bragg-peak spreading at the target, the relative beam intensities of the consecutive exposures with stepwise lower beam energies, and simulated the resulting dose distribution in the spread-out Bragg peak. The details of the simulation methods used and the results obtained are presented.

  19. A novel method combining Monte Carlo-FEM simulations and experiments for simultaneous evaluation of the ultrathin film mass density and Young's modulus

    NASA Astrophysics Data System (ADS)

    Zapoměl, J.; Stachiv, I.; Ferfecki, P.

    2016-01-01

    In this paper, a novel procedure is proposed and analyzed for the simultaneous measurement of the ultrathin-film volumetric density and Young's modulus. It combines the Monte Carlo probabilistic method with the finite-element method (FEM) and experiments carried out on a suspended micro-/nanomechanical resonator with a deposited thin film under different but controllable axial prestresses. Since the procedure requires detection of only two fundamental bending resonant frequencies of the beam under different axial prestress forces, the impact of noise and damping on the accuracy of the results is minimized, which essentially improves its reliability. The volumetric mass density and Young's modulus of the thin film are then evaluated by means of FEM-based computational simulations, and the accuracies of the determined values are estimated using the Monte Carlo probabilistic method, which has been incorporated into the computational procedure.

  20. Selection of voxel size and photon number in voxel-based Monte Carlo method: criteria and applications

    NASA Astrophysics Data System (ADS)

    Li, Dong; Chen, Bin; Ran, Wei Yu; Wang, Guo Xiang; Wu, Wen Juan

    2015-09-01

    The voxel-based Monte Carlo method (VMC) is now a gold standard in the simulation of light propagation in turbid media. For complex tissue structures, however, the computational cost will be higher when small voxels are used to improve the smoothness of tissue interfaces and a large number of photons is used to obtain accurate results. To reduce computational cost, criteria were proposed to determine the voxel size and photon number in 3-dimensional VMC simulations with acceptable accuracy and computation time. The selection of the voxel size can be expressed as a function of tissue geometry and optical properties. The photon number should be at least 5 times the total voxel number. These criteria are further applied in developing a photon ray splitting scheme with a local grid refinement technique to reduce the computational cost for a nonuniform tissue structure with significantly varying optical properties. In the proposed technique, a nonuniform refined grid system is used, where fine grids are used for tissue with high absorption and complex geometry, and coarse grids are used for the other parts. In this technique, the total photon number is selected based on the voxel size of the coarse grid. Furthermore, the photon-splitting scheme is developed to satisfy the statistical accuracy requirement for the dense grid area. Results show that the local grid refinement technique with the photon ray splitting scheme can accelerate the computation by a factor of 7.6 (reducing the time consumption from 17.5 h to 2.3 h) in the simulation of laser light energy deposition in skin tissue that contains port wine stain lesions.
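
    The minimum-photon-number rule quoted above is simple enough to encode directly; the grid shape below is an example, and the factor of 5 is the minimum the abstract reports.

```python
# Helper encoding the abstract's criterion: launch at least 5 photons
# per voxel of the simulation grid (the grid shape here is an example).
def min_photon_number(grid_shape, factor=5):
    """Minimum photon count: `factor` times the total number of voxels."""
    total_voxels = 1
    for n in grid_shape:
        total_voxels *= n
    return factor * total_voxels

# e.g. a 100 x 100 x 50 voxel grid
n_photons = min_photon_number((100, 100, 50))
print(n_photons)  # 2500000
```

In the refined-grid variant, `grid_shape` would describe the coarse grid only, since the paper selects the total photon number from the coarse-grid voxel size and relies on photon splitting to maintain statistics in the fine-grid region.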

  1. Norm-conserving diffusion Monte Carlo method and diagrammatic expansion of interacting drude oscillators: Application to solid xenon

    NASA Astrophysics Data System (ADS)

    Jones, Andrew; Thompson, Andrew; Crain, Jason; Müser, Martin H.; Martyna, Glenn J.

    2009-04-01

    The quantum Drude oscillator (QDO) model, which allows many-body polarization and dispersion to be treated both on an equal footing and beyond the dipole limit, is investigated using two approaches to the linear scaling diffusion Monte Carlo (DMC) technique. The first is a general purpose norm-conserving DMC (NC-DMC) method wherein the number of walkers, N, remains strictly constant, thereby avoiding the sudden death or explosive growth of walker populations, with an error that vanishes as O(1/N) in the absence of weights. As NC-DMC satisfies detailed balance, a phase space can be defined that permits both an exact trajectory weighting and a fast mean-field trajectory weighting technique to be constructed, which can eliminate or reduce the population bias, respectively. The second is a many-body diagrammatic expansion for trial wave functions in systems dominated by strong on-site harmonic coupling and a dense matrix of bilinear coupling constants, such as the QDO in the dipole limit; an approximate trial function is introduced to treat two-body interactions outside the dipole limit. Using these approaches, high accuracy is achieved in studies of the fcc solid phase of the highly polarizable atom, xenon, within the QDO model. It is found that 200 walkers suffice to generate converged results for systems as large as 500 atoms. The quality of QDO predictions compared to experiment and the ability to generate these predictions efficiently demonstrate the feasibility of employing the QDO approach to model long-range forces.
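
    A toy diffusion Monte Carlo run with a strictly constant walker population conveys the flavor of a fixed-N scheme. This sketch solves the 1D harmonic oscillator, not the QDO model, and uses naive weight-proportional resampling rather than the paper's norm-conserving algorithm and trajectory weighting; the exact ground-state energy is 0.5 in units hbar = m = omega = 1.

```python
import math
import random

# Toy diffusion Monte Carlo for a 1D harmonic oscillator (V = x^2/2),
# keeping the walker count strictly constant by resampling at every step.
random.seed(3)

N_WALKERS, DT, N_STEPS, BURN_IN = 400, 0.01, 3000, 1000

def potential(x):
    return 0.5 * x * x

walkers = [random.gauss(0.0, 1.0) for _ in range(N_WALKERS)]
e_ref = 0.0      # reference (trial) energy, updated on the fly
e_trace = []
for step in range(N_STEPS):
    # Diffusion step, then branching weights relative to E_ref.
    walkers = [x + random.gauss(0.0, math.sqrt(DT)) for x in walkers]
    weights = [math.exp(-DT * (potential(x) - e_ref)) for x in walkers]
    # Fixed-N population: resample exactly N_WALKERS walkers by weight.
    walkers = random.choices(walkers, weights=weights, k=N_WALKERS)
    # Nudge E_ref toward the population-growth estimate of the energy.
    mean_w = sum(weights) / N_WALKERS
    e_ref -= 0.1 * math.log(mean_w) / DT
    if step >= BURN_IN:
        e_trace.append(e_ref)

e0 = sum(e_trace) / len(e_trace)
print("ground-state energy estimate:", round(e0, 3))
```

The resampling step introduces exactly the population bias the paper's weighting techniques are designed to remove; for this one-dimensional toy problem it is small enough that the estimate still lands near 0.5.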

  2. Selection of voxel size and photon number in voxel-based Monte Carlo method: criteria and applications.

    PubMed

    Li, Dong; Chen, Bin; Ran, Wei Yu; Wang, Guo Xiang; Wu, Wen Juan

    2015-09-01

    The voxel-based Monte Carlo method (VMC) is now a gold standard in the simulation of light propagation in turbid media. For complex tissue structures, however, the computational cost will be higher when small voxels are used to improve the smoothness of tissue interfaces and a large number of photons is used to obtain accurate results. To reduce computational cost, criteria were proposed to determine the voxel size and photon number in 3-dimensional VMC simulations with acceptable accuracy and computation time. The selection of the voxel size can be expressed as a function of tissue geometry and optical properties. The photon number should be at least 5 times the total voxel number. These criteria are further applied in developing a photon ray splitting scheme with a local grid refinement technique to reduce the computational cost for a nonuniform tissue structure with significantly varying optical properties. In the proposed technique, a nonuniform refined grid system is used, where fine grids are used for tissue with high absorption and complex geometry, and coarse grids are used for the other parts. In this technique, the total photon number is selected based on the voxel size of the coarse grid. Furthermore, the photon-splitting scheme is developed to satisfy the statistical accuracy requirement for the dense grid area. Results show that the local grid refinement technique with the photon ray splitting scheme can accelerate the computation by a factor of 7.6 (reducing the time consumption from 17.5 h to 2.3 h) in the simulation of laser light energy deposition in skin tissue that contains port wine stain lesions. PMID:26417866

  3. GPU-Accelerated Monte Carlo Electron Transport Methods: Development and Application for Radiation Dose Calculations Using Six GPU cards

    NASA Astrophysics Data System (ADS)

    Su, Lin; Du, Xining; Liu, Tianyu; Xu, X. George

    2014-06-01

    An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous EnviRonments - is being developed at Rensselaer Polytechnic Institute as a software testbed for emerging heterogeneous high-performance computers that utilize accelerators such as GPUs. This paper presents the preliminary code development and testing involving radiation dose related problems. In particular, the paper discusses electron transport simulations using the class-II condensed history method. The electron energies considered range from a few hundred keV to 30 MeV. For the photon part, the photoelectric effect, Compton scattering, and pair production were modeled. Voxelized geometry was supported. A serial CPU code was first written in C++. The code was then ported to the GPU using the CUDA C 5.0 standards. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla™ M2090 GPUs. The code was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well-tested MC codes. Using six GPU cards, 6×10⁶ electron histories were simulated within 2 seconds. In comparison, the same case running the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively. Ongoing work continues to test the code for different medical applications such as radiotherapy and brachytherapy.

  4. Genomewide Multipoint Linkage Analysis of Seven Extended Palauan Pedigrees with Schizophrenia, by a Markov-Chain Monte Carlo Method

    PubMed Central

    Camp, Nicola J.; Neuhausen, Susan L.; Tiobech, Josepha; Polloi, Anthony; Coon, Hilary; Myles-Worsley, Marina

    2001-01-01

    Palauans are an isolated population in Micronesia with lifetime prevalence of schizophrenia (SCZD) of 2%, compared to the world rate of ∼1%. The possible enrichment for SCZD genes, in conjunction with the potential for reduced etiological heterogeneity and the opportunity to ascertain statistically powerful extended pedigrees, makes Palauans a population of choice for the mapping of SCZD genes. We have used a Markov-chain Monte Carlo method to perform a genomewide multipoint analysis in seven extended pedigrees from Palau. Robust multipoint parametric and nonparametric linkage (NPL) analyses were performed under three nested diagnostic classifications—core, spectrum, and broad. We observed four regions of interest across the genome. Two of these regions—on chromosomes 2p13-14 (for which, under core diagnostic classification, NPL=6.5 and parametric LOD=4.8) and 13q12-22 (for which, under broad diagnostic classification, parametric LOD=3.6, and, under spectrum diagnostic classification, parametric LOD=3.5)—had evidence for linkage with genomewide significance, after correction for multiple testing; with the current pedigree resource and genotyping, these regions are estimated to be 4.3 cM and 19.75 cM in size, respectively. A third region, with intermediate evidence for linkage, was identified on chromosome 5q22-qter (for which, under broad diagnostic classification, parametric LOD=2.5). The fourth region of interest had only borderline suggestive evidence for linkage (on 3q24-28; for this region, under broad diagnostic classification, parametric LOD=2.0). All regions exhibited evidence for genetic heterogeneity. Our findings provide significant evidence for susceptibility loci on chromosomes 2p13-14 and 13q12-22 and support both a model of genetic heterogeneity and the utility of a broader set of diagnostic classifications in the population from Palau. PMID:11668428

  5. Spectrum simulation of rough and nanostructured targets from their 2D and 3D image by Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Schiettekatte, François; Chicoine, Martin

    2016-03-01

    Corteo is a program that implements the Monte Carlo (MC) method to simulate ion beam analysis (IBA) spectra of several techniques by following the ion trajectories until a sufficiently large fraction of them reach the detector to generate a spectrum. Hence, it fully accounts for effects such as multiple scattering (MS). Here, a version of Corteo is presented in which the target can be a 2D or 3D image. This image can be derived from micrographs in which the different compounds are identified, therefore bringing extra information into the solution of an IBA spectrum, and potentially constraining the solution significantly. The image intrinsically includes many details such as the actual surface or interfacial roughness, or the actual shape and distribution of nanostructures. This can, for example, lead to the unambiguous identification of the stoichiometry of structures in a layer, or at least to better constraints on their composition. Because MC computes the ion trajectories in detail, it accurately simulates many aspects of the process, such as ions coming back into the target after leaving it (re-entry), as well as going through a variety of nanostructure shapes and orientations. We show how, for example, as the ions' angle of incidence becomes shallower than the inclination distribution of a rough surface, this process tends to make the effective roughness smaller than in a comparable 1D simulation (i.e., a narrower thickness distribution than in a comparable slab simulation). Also, in ordered nanostructures, target re-entry can lead to replications of a peak in a spectrum. In addition, the bitmap description of the target can be used to simulate depth profiles such as those resulting from ion implantation, diffusion, and intermixing. Other improvements to Corteo include the possibility of interpolating the cross-section in angle-energy tables, and the generation of energy-depth maps.

  6. Phase-coexistence simulations of fluid mixtures by the Markov Chain Monte Carlo method using single-particle models

    SciTech Connect

    Li, Jun; Calo, Victor M.

    2013-09-15

    We present a single-particle Lennard–Jones (L-J) model for CO₂ and N₂. Simplified L-J models for other small polyatomic molecules can be obtained following the methodology described herein. The phase-coexistence diagrams of single-component systems computed using the proposed single-particle models for CO₂ and N₂ agree well with experimental data over a wide range of temperatures. These diagrams are computed using the Markov Chain Monte Carlo method based on the Gibbs-NVT ensemble. This good agreement validates the proposed simplified models. That is, with properly selected parameters, the single-particle models have accuracy similar to more complex, state-of-the-art molecular models in predicting gas-phase properties. To further test these single-particle models, three binary mixtures of CH₄, CO₂ and N₂ are studied using a Gibbs-NPT ensemble. These results are compared against experimental data over a wide range of pressures. The single-particle model has similar accuracy in the gas phase as traditional models, although its deviation in the liquid phase is greater. Since the single-particle model reduces the particle number and avoids the time-consuming Ewald summation used to evaluate Coulomb interactions, the proposed model improves the computational efficiency significantly, particularly in the case of high liquid density, where the acceptance rate of the particle-swap trial move increases. We compare, at constant temperature and pressure, the Gibbs-NPT and Gibbs-NVT ensembles to analyze their performance differences and the consistency of their results. As theoretically predicted, the agreement between the simulations implies that Gibbs-NVT can be used to validate Gibbs-NPT predictions when experimental data are not available.
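
    The single-particle displacement move underlying Gibbs-ensemble Monte Carlo can be sketched as a bare-bones Metropolis NVT step for a small Lennard-Jones system; all parameters below are invented reduced-unit values, not the paper's fitted CO2/N2 parameters, and the swap and volume moves of a full Gibbs simulation are omitted.

```python
import math
import random

# Bare-bones Metropolis NVT displacement move for a Lennard-Jones fluid
# in a cubic periodic box, in reduced units (epsilon = sigma = kB = 1).
random.seed(11)

N, L, T = 27, 4.0, 1.5             # particles, box edge, temperature
BETA, MAX_DISP = 1.0 / T, 0.2

def lj(r2):
    """Lennard-Jones pair energy from the squared distance."""
    inv6 = 1.0 / r2 ** 3
    return 4.0 * (inv6 * inv6 - inv6)

def pair_energy(i, pos):
    """Energy of particle i with all others, minimum-image convention."""
    e = 0.0
    xi, yi, zi = pos[i]
    for j, (xj, yj, zj) in enumerate(pos):
        if j == i:
            continue
        dx = (xi - xj) - L * round((xi - xj) / L)
        dy = (yi - yj) - L * round((yi - yj) / L)
        dz = (zi - zj) - L * round((zi - zj) / L)
        e += lj(dx * dx + dy * dy + dz * dz)
    return e

# Start from a simple cubic lattice to avoid overlaps.
pos = [[(0.5 + a) * L / 3, (0.5 + b) * L / 3, (0.5 + c) * L / 3]
       for a in range(3) for b in range(3) for c in range(3)]

accepted = 0
for _ in range(2000):
    i = random.randrange(N)
    old = pos[i][:]
    e_old = pair_energy(i, pos)
    pos[i] = [(x + random.uniform(-MAX_DISP, MAX_DISP)) % L for x in old]
    de = pair_energy(i, pos) - e_old
    if de < 0 or random.random() < math.exp(-BETA * de):
        accepted += 1    # keep the move
    else:
        pos[i] = old     # reject: restore the old position
print(f"acceptance rate {accepted / 2000:.2f}")
```

A Gibbs-NVT simulation wraps this same acceptance rule around two coupled boxes and adds particle-swap and volume-exchange trial moves; the swap move is where the paper's single-particle model pays off at high liquid density.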

  7. Refinement of risk analysis procedures for trichloroethylene through the use of Monte Carlo method in conjunction with physiologically based pharmacokinetic modeling. Master's thesis

    SciTech Connect

    Cronin, W.J.; Oswald, E.J.

    1993-09-01

    This study refines risk analysis procedures for trichloroethylene (TCE) using a physiologically based pharmacokinetic (PBPK) model in conjunction with the Monte Carlo method. The Monte Carlo method is used to generate random sets of model parameters based on their means, variances, and distribution types. The procedure generates a range of exposure values corresponding to a human excess lifetime cancer risk of 1×10⁻⁶, based on the upper and lower bounds and the mean of a 95% confidence interval. Risk ranges were produced for both ingestion and inhalation exposures. Results are presented in a graphical format to reduce reliance on qualitative discussions of uncertainty. A sensitivity analysis of the model was also performed. Based on the total amount of TCE metabolized, this method produced acceptable TCE exposures greater than the Environmental Protection Agency's (EPA) values by a factor of 23 for inhalation and a factor of 1.6 for ingestion. Sensitive parameters identified were the elimination rate constant, alveolar ventilation rate, and cardiac output. This procedure quantifies the uncertainty related to natural variations in parameter values. Its incorporation into risk assessment could be used to promulgate, and better present, more realistic standards. Keywords: risk analysis, physiologically based pharmacokinetics (PBPK), trichloroethylene, Monte Carlo method.
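
    The parameter-sampling idea can be sketched with a deliberately toy dose model (risk = slope × daily dose), not the TCE PBPK model itself; every distribution and constant below is invented for illustration.

```python
import math
import random
import statistics

# Monte Carlo propagation of parameter variability into a risk estimate:
# sample each parameter from its distribution, compute risk, and report a
# central estimate with a 95% interval. All values are illustrative.
random.seed(5)

N = 50000
risks = []
for _ in range(N):
    intake = random.lognormvariate(math.log(2.0), 0.3)   # L/day drinking water
    conc = 0.005                                         # mg/L (fixed)
    body_weight = random.gauss(70.0, 10.0)               # kg
    slope = random.lognormvariate(math.log(0.01), 0.4)   # risk per mg/kg-day
    dose = intake * conc / max(body_weight, 30.0)        # mg/kg-day
    risks.append(slope * dose)

risks.sort()
median_risk = statistics.median(risks)
lower, upper = risks[int(0.025 * N)], risks[int(0.975 * N)]
print(f"median {median_risk:.2e}, 95% interval [{lower:.2e}, {upper:.2e}]")
```

The thesis runs the same loop in the other direction: fixing the target risk at 1×10⁻⁶ and solving each sampled parameter set for the exposure concentration, which yields the reported exposure range instead of a risk range.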

  8. Estimating true human and animal host source contribution in quantitative microbial source tracking using the Monte Carlo method.

    PubMed

    Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan

    2010-09-01

    Cultivation- and library-independent, quantitative PCR-based methods have become the methods of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genomes. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. The Monte Carlo method is then applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which the expected value, confidence interval, and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment, and other environmental models. This model was validated by both statistical simulations and real-world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It performed reliably and precisely when the standard deviation of the precision error was small (≤ 0.1).
Further improvement on the precision of sample processing and qPCR reaction would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator where a universal assay for fecal sources of that indicator exists. PMID:20822794
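
    The correction idea can be sketched with a simplified observation model, observed ≈ sensitivity × true + cross-reaction × background, which is a reduced stand-in for the paper's full Law-of-Total-Probability equations; every number and distribution below is invented for illustration.

```python
import random
import statistics

# Monte Carlo correction of an observed qPCR marker concentration for
# imperfect assay sensitivity and cross-reactivity. Sampling the assay
# parameters propagates their uncertainty into the corrected estimate.
random.seed(9)

OBSERVED = 1000.0     # marker copies per 100 mL measured in the sample
BACKGROUND = 2000.0   # non-target fecal signal that could cross-react

true_concs = []
for _ in range(20000):
    # Assay performance sampled from reference-sample distributions (assumed).
    sens = random.betavariate(80, 20)       # sensitivity, mean ~0.8
    false_pos = random.betavariate(2, 98)   # cross-reaction rate, mean ~0.02
    noise = random.gauss(1.0, 0.05)         # ~5% qPCR measurement error
    est = (OBSERVED * noise - false_pos * BACKGROUND) / sens
    true_concs.append(max(est, 0.0))

mean_true = statistics.fmean(true_concs)
q = statistics.quantiles(true_concs, n=40)   # 2.5%...97.5% cut points
ci_low, ci_high = q[0], q[-1]
print(round(mean_true, 1), round(ci_low, 1), round(ci_high, 1))
```

The resulting distribution of corrected concentrations, rather than a single point estimate, is what feeds downstream models such as TMDL determinations or quantitative microbial risk assessment.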

  9. System Performance and Monte Carlo Analysis of Light Water Reactor Spent Fuel Assay Using Neutron Slowing Down Time Method

    NASA Astrophysics Data System (ADS)

    Abdurrahman, Naeem Mohamed

    There is a compelling safeguards need to nondestructively assay fissile plutonium separately from fissile uranium in spent light water reactor fuel. Present methods suffer from a number of limitations and are incapable of providing accurate and independent safeguards assay information. The only feasible method capable of performing the required assay of spent fuel is the slowing down time (SDT) method. The objectives of the present work include the demonstration of the lead slowing down time spectrometer (LSDTS) performance as a viable assay system and the investigation of its design parameters and characteristics. A fuel assembly replica was fabricated using 64 fuel pins of high-density UO₂ at 4.8% enrichment. The assembly was designed to permit the insertion of small probe chambers. Assay measurements of the fuel assembly replica were carried out at the Rensselaer LSDTS facility with ²³⁸U and ²³²Th threshold fission detectors and two ²³⁵U and ²³⁹Pu probe chambers. Data were collected simultaneously for the assay detectors and probe chambers and were corrected for dead-time counting losses. An assay model relating the assay signals and the signals of the probe chambers to the unknown masses of the fissile isotopes in the fuel assembly was developed. The probe chamber data were used to provide individual spectra of ²³⁵U and ²³⁹Pu inside the fuel assembly and to simulate spent fuel assay signals. Regression analyses were performed on the actual and the simulated spent fuel assay data. The fissile isotopic contents of the fuel were determined to better than 1% in both cases. Monte Carlo analyses were performed to simulate the experimental measurements, determine certain parameters of the LSDTS system and investigate the effect of the fuel assembly and hydrogen impurities on the performance of the system. The system was found to be very sensitive to hydrogen. 
The resolution broadening caused by 2 ppm of hydrogen uniformly distributed inside the lead pile was comparable to that produced by the presence of the fuel assembly. The broadened resolution of the system caused by the presence of the fuel was found to remain sufficient for the accurate and separate assay of spent fuel.

  10. On the limit theorem for life time distribution connected with some reliability systems and their validation by means of the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Gheorghe, Munteanu Bogdan; Alexei, Leahu; Sergiu, Cataranciuc

    2013-09-01

    We prove the limit theorem for the life time distribution connected with reliability systems when their life time is a Pascal convolution of independent and identically distributed random variables. We show that, under some conditions, such distributions may be approximated by means of Erlang distributions. As a consequence, survival functions for such systems may be, respectively, approximated by Erlang survival functions. Using the Monte Carlo method, we experimentally confirm the theoretical results of our theorem.
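The baseline case of this kind of result — a sum of k iid exponential lifetimes is exactly Erlang(k, λ), so its Monte Carlo survival estimate should match the closed-form Erlang survival function — can be checked in a few lines. The parameter values here are arbitrary illustrations:

```python
import math
import random

def erlang_survival(t, k, lam):
    """Closed-form P(T > t) for an Erlang(k, lam) random variable."""
    return math.exp(-lam * t) * sum((lam * t) ** i / math.factorial(i)
                                    for i in range(k))

def mc_survival(t, k, lam, n=200000, seed=42):
    """Monte Carlo estimate of P(sum of k iid Exp(lam) lifetimes > t)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if sum(rng.expovariate(lam) for _ in range(k)) > t)
    return hits / n

k, lam, t = 3, 0.5, 8.0
print(erlang_survival(t, k, lam), mc_survival(t, k, lam))
```

The two numbers agree to within Monte Carlo noise, which is the kind of experimental confirmation the abstract describes (the paper's Pascal-convolution setting is more general than this exact special case).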

  11. Substantiation of parameters of the geometric model of the research reactor core for the calculation using the Monte Carlo method

    SciTech Connect

    Radaev, A. I. Schurovskaya, M. V.

    2015-12-15

    The choice of the spatial nodalization for the calculation of the power density and burnup distribution in a research reactor core with fuel assemblies of the IRT-3M and VVR-KN types using a program based on a Monte Carlo code is described. The influence of the spatial nodalization on the results of calculating the basic neutronic characteristics and on the calculation time is investigated.

  12. SU-E-T-224: Is Monte Carlo Dose Calculation Method Necessary for Cyberknife Brain Treatment Planning?

    SciTech Connect

    Wang, L; Fourkal, E; Hayes, S; Jin, L; Ma, C

    2014-06-01

    Purpose: To study the dosimetric differences resulting from using the pencil beam algorithm instead of Monte Carlo (MC) methods for tumors adjacent to the skull. Methods: We retrospectively calculated the dosimetric differences between the ray-tracing (RT) and MC algorithms for brain tumors located adjacent to the skull treated with CyberKnife in 18 patients (a total of 27 tumors). The median tumor size was 0.53 cc (range 0.018 cc to 26.2 cc). The mean distance from the tumor to the skull was 2.11 mm (range -17.0 mm to 9.2 mm). The dosimetric variables examined include the mean, maximum, and minimum doses to the target, the target coverage (TC) and the conformality index. The MC calculation used the same MUs as the RT dose calculation without further normalization, and a 1% statistical uncertainty. The differences were analyzed by tumor size and distance from the skull. Results: The TC was generally reduced with the MC calculation (24 out of 27 cases). The average difference in TC between RT and MC was 3.3% (range 0.0% to 23.5%). When the TC was deemed unacceptable, the plans were re-normalized in order to increase the TC to 99%. This resulted in a 6.9% maximum change in the prescription isodose line. The maximum changes in the mean, maximum, and minimum doses were 5.4%, 7.7%, and 8.4%, respectively, before re-normalization. When the TC was analyzed with regard to target size, it was found that the worst coverage occurred with the smallest targets (0.018 cc). When the TC was analyzed with regard to the distance to the skull, there was no correlation between proximity to the skull and the TC difference between the RT and MC plans. Conclusions: For smaller targets (< 4.0 cc), MC should be used to re-evaluate the dose coverage after RT is used for the initial dose calculation in order to ensure target coverage.

  13. Radiation risk assessment in neonatal radiographic examinations of the chest and abdomen: a clinical and Monte Carlo dosimetry study

    NASA Astrophysics Data System (ADS)

    Makri, T.; Yakoumakis, E.; Papadopoulou, D.; Gialousis, G.; Theodoropoulos, V.; Sandilos, P.; Georgiou, E.

    2006-10-01

    Seeking to assess the radiation risk associated with radiological examinations in neonatal intensive care units, thermoluminescence dosimetry was used for the measurement of entrance surface dose (ESD) in 44 AP chest and 28 AP combined chest-abdominal exposures of a sample of 60 neonates. The mean values of ESD were found to be equal to 44 ± 16 μGy and 43 ± 19 μGy, respectively. The MCNP-4C2 code with a mathematical phantom simulating a neonate and appropriate x-ray energy spectra were employed for the simulation of the AP chest and AP combined chest-abdominal exposures. Equivalent organ dose per unit ESD and energy imparted per unit ESD calculations are presented in tabular form. Combined with ESD measurements, these calculations yield an effective dose of 10.2 ± 3.7 μSv, regardless of sex, and an imparted energy of 18.5 ± 6.7 μJ for the chest radiograph. The corresponding results for the combined chest-abdominal examination are 14.7 ± 7.6 μSv (males)/17.2 ± 7.6 μSv (females) and 29.7 ± 13.2 μJ. The calculated total risk per radiograph was low, ranging between 1.7 and 2.9 per million neonates per film, and being slightly higher for females. Results of this study are in good agreement with previous studies, especially in view of the diversity of the calculation methods used.

  14. Monte Carlo validation of the self-attenuation correction determination with the Cutshall transmission method in ²¹⁰Pb measurements by gamma-spectrometry.

    PubMed

    Jodłowski, Paweł; Wachniew, Przemysław; Dinh, Chau Nguyen

    2014-05-01

    The accuracy of estimation of the self-attenuation correction C_s with the Cutshall transmission method in ²¹⁰Pb measurements by gamma-spectrometry was assessed using the Monte Carlo method. The Cutshall method overestimates the correction for samples whose linear attenuation coefficient at 46.5 keV is higher than that of the standard and underestimates it in the opposite case. The highest bias was found for thick samples. The C_s,Cuts/C_s ratio grows linearly with the sample linear attenuation coefficient. PMID:24387906

  15. Balancing Particle Diversity in Markov Chain Monte Carlo Methods for Dual Calibration-Data Assimilation Problems in Hydrologic Modeling

    NASA Astrophysics Data System (ADS)

    Hernandez, F.; Liang, X.

    2014-12-01

    Given the inherent uncertainty in almost all of the variables involved, recent research is re-addressing the problem of calibrating hydrologic models from a stochastic perspective: the focus is shifting from finding a single parameter configuration that minimizes the model error, to approximating the maximum likelihood multivariate probability distribution of the parameters. To this end, Markov chain Monte Carlo (MCMC) formulations are widely used, where the distribution is defined as a smoothed ensemble of particles or members, each of which represents a feasible parameterization. However, the updating of these ensembles needs to strike a careful balance so that the particles adequately resemble the real distribution without either clustering or drifting excessively. In this study, we explore the implementation of two techniques that attempt to improve the quality of the resulting ensembles, both for the approximation of the model parameters and of the unknown states, in a dual calibration-data assimilation framework. The first feature of our proposed algorithm, in an effort to keep members from clustering on areas of high likelihood in light of the observations, is the introduction of diversity-inducing operators after each resampling. This approach has been successfully used before, and here we aim at testing additional operators which are also borrowed from the Evolutionary Computation literature. The second feature is a novel arrangement of the particles into two alternative data structures. The first one is a non-sorted Pareto population which favors 1) particles with high likelihood, and 2) particles that introduce a certain level of heterogeneity. The second structure is a partitioned array, in which each partition requires its members to have increasing levels of divergence from the particles in the areas of larger likelihood. 
Our newly proposed algorithm will be evaluated and compared to traditional MCMC methods in terms of convergence speed, and the ability of adequately representing the target probability distribution while making an efficient use of the available members. Two calibration scenarios will be carried out, one with invariant model parameter settings, and another one allowing the parameters to be modified through time along with the estimates of the model states.
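The resample-then-diversify idea described above can be sketched with a toy one-dimensional likelihood standing in for a hydrologic model. Everything here is an illustrative assumption — the Gaussian target, the multinomial resampling, and the Gaussian jitter used as the diversity-inducing mutation operator — not the authors' algorithm:

```python
import math
import random

def target_logpdf(x):
    """Toy unnormalized log-likelihood standing in for the model error
    surface of a calibrated hydrologic model (assumed Gaussian here)."""
    return -0.5 * (x - 2.0) ** 2

def resample_with_diversity(particles, rng, jitter=0.2):
    """Weighted resampling followed by a diversity-inducing Gaussian
    mutation, so that resampled duplicates do not collapse onto one point."""
    weights = [math.exp(target_logpdf(p)) for p in particles]
    total = sum(weights)
    probs = [w / total for w in weights]
    # multinomial resampling according to likelihood weights
    new = rng.choices(particles, weights=probs, k=len(particles))
    # mutation operator borrowed from evolutionary computation
    return [p + rng.gauss(0.0, jitter) for p in new]

rng = random.Random(0)
particles = [rng.uniform(-10, 10) for _ in range(2000)]
for _ in range(30):
    particles = resample_with_diversity(particles, rng)
mean = sum(particles) / len(particles)
print(mean, len(set(particles)))
```

After a few iterations the ensemble concentrates around the high-likelihood region while the jitter keeps every member distinct — the balance between resemblance and diversity the abstract is concerned with.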

  16. Assessment of the neutron activation of a stainless steel sample in a Research Nuclear Reactor using the Monte Carlo method and CINDER'90

    NASA Astrophysics Data System (ADS)

    Lázaro, Ignacio; Ródenas, José; Marques, José G.; Gallardo, Sergio

    2014-06-01

    Materials in a nuclear reactor are activated by neutron irradiation. When they are withdrawn from the reactor and placed in storage, the potential dose received by workers in the surrounding area must be taken into account. In previous papers, the activation of control rods in an NPP with a BWR and the dose rates around the storage pool were estimated using the MCNP5 code, based on the Monte Carlo method. Models were validated by comparing simulation results with experimental measurements. As the activation is mostly produced in the stainless steel components of control rods, the activation model can also be validated by means of experimental measurements on a stainless steel sample after it has been irradiated in a reactor. This has been done in the Portuguese Research Reactor at Instituto Tecnológico e Nuclear. The neutron activation has been calculated by two different methods, Monte Carlo and CINDER'90, and the results have been compared. After irradiation, dose rates at the water surface of the reactor pool were measured, with the irradiated stainless steel sample submerged at different positions under water. The experimental measurements have been compared with simulation results using Monte Carlo. The comparison shows a good agreement, confirming the validation of the models.

  17. Improved dose-calculation accuracy in proton treatment planning using a simplified Monte Carlo method verified with three-dimensional measurements in an anthropomorphic phantom

    NASA Astrophysics Data System (ADS)

    Hotta, Kenji; Kohno, Ryosuke; Takada, Yoshihisa; Hara, Yousuke; Tansho, Ryohei; Himukai, Takeshi; Kameoka, Satoru; Matsuura, Taeko; Nishio, Teiji; Ogino, Takashi

    2010-06-01

    Treatment planning for proton tumor therapy requires a fast and accurate dose-calculation method. We have implemented a simplified Monte Carlo (SMC) method in the treatment planning system of the National Cancer Center Hospital East for the double-scattering beam delivery scheme. The SMC method takes into account the scattering effect in materials more accurately than the pencil beam algorithm by tracking individual proton paths. We confirmed that the SMC method reproduced measured dose distributions in a heterogeneous slab phantom better than the pencil beam method. When applied to a complex anthropomorphic phantom, the SMC method reproduced the measured dose distribution well, satisfying an accuracy tolerance of 3 mm and 3% in the gamma index analysis. The SMC method required approximately 30 min to complete the calculation over a target volume of 500 cc, much less than the time required for the full Monte Carlo calculation. The SMC method is a candidate for a practical calculation technique with sufficient accuracy for clinical application.

  18. Improved dose-calculation accuracy in proton treatment planning using a simplified Monte Carlo method verified with three-dimensional measurements in an anthropomorphic phantom.

    PubMed

    Hotta, Kenji; Kohno, Ryosuke; Takada, Yoshihisa; Hara, Yousuke; Tansho, Ryohei; Himukai, Takeshi; Kameoka, Satoru; Matsuura, Taeko; Nishio, Teiji; Ogino, Takashi

    2010-06-21

    Treatment planning for proton tumor therapy requires a fast and accurate dose-calculation method. We have implemented a simplified Monte Carlo (SMC) method in the treatment planning system of the National Cancer Center Hospital East for the double-scattering beam delivery scheme. The SMC method takes into account the scattering effect in materials more accurately than the pencil beam algorithm by tracking individual proton paths. We confirmed that the SMC method reproduced measured dose distributions in a heterogeneous slab phantom better than the pencil beam method. When applied to a complex anthropomorphic phantom, the SMC method reproduced the measured dose distribution well, satisfying an accuracy tolerance of 3 mm and 3% in the gamma index analysis. The SMC method required approximately 30 min to complete the calculation over a target volume of 500 cc, much less than the time required for the full Monte Carlo calculation. The SMC method is a candidate for a practical calculation technique with sufficient accuracy for clinical application. PMID:20508320

  19. A Guide to Monte Carlo Simulations in Statistical Physics

    NASA Astrophysics Data System (ADS)

    Landau, David P.; Binder, Kurt

    2014-11-01

    1. Introduction; 2. Some necessary background; 3. Simple sampling Monte Carlo methods; 4. Importance sampling Monte Carlo methods; 5. More on importance sampling Monte Carlo methods for lattice systems; 6. Off-lattice models; 7. Reweighting methods; 8. Quantum Monte Carlo methods; 9. Monte Carlo renormalization group methods; 10. Non-equilibrium and irreversible processes; 11. Lattice gauge models: a brief introduction; 12. A brief review of other methods of computer simulation; 13. Monte Carlo simulations at the periphery of physics and beyond; 14. Monte Carlo studies of biological molecules; 15. Outlook; Appendix: listing of programs mentioned in the text; Index.

  20. A Guide to Monte Carlo Simulations in Statistical Physics

    NASA Astrophysics Data System (ADS)

    Landau, David P.; Binder, Kurt

    2013-11-01

    Preface; 1. Introduction; 2. Some necessary background; 3. Simple sampling Monte Carlo methods; 4. Importance sampling Monte Carlo methods; 5. More on importance sampling Monte Carlo methods of lattice systems; 6. Off-lattice models; 7. Reweighting methods; 8. Quantum Monte Carlo methods; 9. Monte Carlo renormalization group methods; 10. Non-equilibrium and irreversible processes; 11. Lattice gauge models: a brief introduction; 12. A brief review of other methods of computer simulation; 13. Monte Carlo simulations at the periphery of physics and beyond; 14. Monte Carlo studies of biological molecules; 15. Outlook; Appendix; Index.

  1. A Guide to Monte Carlo Simulations in Statistical Physics

    NASA Astrophysics Data System (ADS)

    Landau, David P.; Binder, Kurt

    2009-09-01

    Preface; 1. Introduction; 2. Some necessary background; 3. Simple sampling Monte Carlo methods; 4. Importance sampling Monte Carlo methods; 5. More on importance sampling Monte Carlo methods of lattice systems; 6. Off-lattice models; 7. Reweighting methods; 8. Quantum Monte Carlo methods; 9. Monte Carlo renormalization group methods; 10. Non-equilibrium and irreversible processes; 11. Lattice gauge models: a brief introduction; 12. A brief review of other methods of computer simulation; 13. Monte Carlo simulations at the periphery of physics and beyond; 14. Monte Carlo studies of biological molecules; 15. Outlook; Appendix; Index.

  2. Estimating the impact of various pathway parameters on tenderness, flavour and juiciness of pork using Monte Carlo simulation methods.

    PubMed

    Channon, H A; Hamilton, A J; D'Souza, D N; Dunshea, F R

    2016-06-01

    Monte Carlo simulation was investigated as a potential methodology to estimate sensory tenderness, flavour and juiciness scores of pork following the implementation of key pathway interventions known to influence eating quality. Correction factors were established using mean data from published studies investigating key production, processing and cooking parameters. Probability distributions of correction factors were developed for single pathway parameters only, due to lack of interaction data. Except for moisture infusion, ageing period, aitchbone hanging and cooking pork to an internal temperature of >74°C, only small shifts in the mean of the probability distributions of correction factors were observed for the majority of pathway parameters investigated in this study. Output distributions of sensory scores, generated from Monte Carlo simulations of input distributions of correction factors and for individual pigs, indicated that this methodology may be useful in estimating both the shift and variability in pork eating traits when different pathway interventions are applied. PMID:26869282
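The simulation strategy described above — draw correction factors from per-intervention probability distributions and accumulate them onto a baseline sensory score — might be sketched like this. The intervention names, means and standard deviations below are invented placeholders, not the paper's values:

```python
import random
import statistics

# Hypothetical correction-factor distributions (mean shift, sd) on a
# 0-100 sensory scale for single pathway interventions; illustrative only.
CORRECTIONS = {
    "ageing_14d": (4.0, 1.5),           # tenderness gain from ageing period
    "moisture_infusion": (6.0, 2.0),    # gain from moisture infusion
    "endpoint_above_74C": (-5.0, 2.5),  # penalty for cooking to > 74 °C
}

def simulate_scores(baseline, interventions, n=20000, seed=7):
    """Monte Carlo output distribution of a sensory score after applying
    independent pathway correction factors (no interaction terms, as in
    the abstract's single-parameter assumption)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        score = baseline
        for name in interventions:
            mu, sd = CORRECTIONS[name]
            score += rng.gauss(mu, sd)
        out.append(min(max(score, 0.0), 100.0))  # clamp to the scale
    return out

scores = simulate_scores(55.0, ["ageing_14d", "moisture_infusion"])
print(statistics.fmean(scores), statistics.stdev(scores))
```

The spread of the output distribution, not just its mean, is the point: it estimates the variability in eating-quality traits under a given set of interventions.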

  3. HRMC_2.1: Hybrid Reverse Monte Carlo method with silicon, carbon, germanium and silicon carbide potentials

    NASA Astrophysics Data System (ADS)

    Opletal, G.; Petersen, T. C.; Russo, S. P.

    2014-06-01

    The Hybrid Reverse Monte Carlo (HRMC) code models the atomic structure of materials via the use of a combination of constraints including experimental diffraction data and an empirical energy potential. In this version 2.1 update, an empirical potential for silicon-carbide has been added to the code along with an experimentally motivated constraint on the bond type fraction applicable to systems containing multiple elements.

  4. Comment on "Simulation of a two-dimensional Rayleigh-Bénard system using the direct simulation Monte Carlo method"

    SciTech Connect

    Garcia, A.L.; Baras, F.; Mansour, M.M.

    1994-06-30

    In a recent paper, Watanabe et al. used direct simulation Monte Carlo to study Rayleigh-Bénard convection. They reported that, using stress-free boundary conditions, the onset of convection in the simulation occurred at a Rayleigh number much larger than the critical Rayleigh number predicted by linear stability analysis. We show that the source of their discrepancy is their failure to include the temperature jump effect in the calculation of the Rayleigh number.

  5. Using the Monte Carlo Markov Chain method to estimate contact parameter temperature dependence: implications for Martian cloud modelling

    NASA Astrophysics Data System (ADS)

    Määttänen, Anni; Douspis, Marian

    2015-04-01

    In the last years, several datasets on deposition mode ice nucleation in Martian conditions have shown that the effectiveness of mineral dust as a condensation nucleus decreases with temperature (Iraci et al., 2010; Phebus et al., 2011; Trainer et al., 2009). Previously, nucleation modelling in Martian conditions used only constant values of this so-called contact parameter, provided by the few studies previously published on the topic. The new studies paved the way for a possibly more realistic way of predicting ice crystal formation in the Martian environment. However, the caveat of these studies (Iraci et al., 2010; Phebus et al., 2011) was the limited temperature range, which inhibits using the provided (linear) equations for the contact parameter temperature dependence in all conditions of cloud formation on Mars. One wide temperature range deposition mode nucleation dataset exists (Trainer et al., 2009), but the substrate used was silicon, which cannot realistically imitate the most abundant ice nucleus on Mars, mineral dust. Nevertheless, this dataset revealed, thanks to measurements spanning from 150 to 240 K, that the behaviour of the contact parameter as a function of temperature was exponential rather than linear as suggested by previous work. We have tried to combine the previous findings to provide realistic and practical formulae for application in nucleation and atmospheric models. We have analysed the three cited datasets using a Monte Carlo Markov Chain (MCMC) method. The method allows us to test and evaluate different functional forms for the temperature dependence of the contact parameter. We perform a data inversion by finding the best fit to the measured data simultaneously at all points for different functional forms of the temperature dependence of the contact angle m(T). The method uses a full nucleation model (Määttänen et al., 2005; Vehkamäki et al., 2007) to calculate the observables at each data point. 
We suggest one new and test several m(T) dependencies. Two of these may be used to avoid unphysical behaviour (m > 1) when m(T) is implemented in heterogeneous nucleation and cloud models. However, more measurements are required to fully constrain the m(T) dependencies. We show the importance of large temperature range datasets for constraining the asymptotic behaviour of m(T), and we call for more experiments in a large temperature range with well-defined particle sizes or size distributions, for different IN types and nucleating vapours. This study (Määttänen and Douspis, 2014) provides a new framework for analysing heterogeneous nucleation datasets. The results provide, within limits of available datasets, well-behaving m(T) formulations for nucleation and cloud modelling. Iraci, L. T., et al. (2010). Icarus 210, 985-991. Määttänen, A., et al. (2005). J. Geophys. Res. 110, E02002. Määttänen, A. and Douspis, M. (2014). GeoResJ 3-4 , 46-55. Phebus, B. D., et al. (2011). J. Geophys. Res. 116, 4009. Trainer, M. G., et al. (2009). J. Phys. Chem C 113 , 2036-2040. Vehkamäki, H., et al. (2007). Atmos. Chem. Phys. 7, 309-313.
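The data-inversion step — fitting a functional form m(T) by MCMC — can be illustrated with a random-walk Metropolis sampler on synthetic data. The exponential saturation form, the noise level, the flat priors, and every constant below are assumptions made for this sketch, not the cited measurements or the authors' full nucleation-model inversion:

```python
import math
import random

# Synthetic contact-parameter data mimicking an exponential approach to 1,
# m(T) = 1 - a * exp(-b * T); values are illustrative only.
A_TRUE, B_TRUE = 50.0, 0.04
temps = [150, 170, 190, 210, 230]
rng = random.Random(3)
obs = [1 - A_TRUE * math.exp(-B_TRUE * t) + rng.gauss(0, 0.01) for t in temps]

def log_post(a, b, sigma=0.01):
    """Gaussian log-likelihood of the assumed m(T) form (flat priors)."""
    if a <= 0 or b <= 0:
        return -math.inf
    return -sum((m - (1 - a * math.exp(-b * t))) ** 2
                for t, m in zip(temps, obs)) / (2 * sigma ** 2)

# random-walk Metropolis: propose, accept with prob min(1, post ratio)
a, b = 30.0, 0.03
lp = log_post(a, b)
samples = []
for i in range(20000):
    a2, b2 = a + rng.gauss(0, 2.0), b + rng.gauss(0, 0.002)
    lp2 = log_post(a2, b2)
    if math.log(rng.random()) < lp2 - lp:
        a, b, lp = a2, b2, lp2
    if i >= 5000:                      # discard burn-in
        samples.append((a, b))
b_mean = sum(s[1] for s in samples) / len(samples)
print(b_mean)
```

The posterior samples make it straightforward to compare alternative m(T) forms and to check whether a fitted form ever exceeds the physical bound m ≤ 1 over the temperature range of interest.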

  6. Development of the MCNPX depletion capability: A Monte Carlo linked depletion method that automates the coupling between MCNPX and CINDER90 for high fidelity burnup calculations

    NASA Astrophysics Data System (ADS)

    Fensin, Michael Lorne

    Monte Carlo-linked depletion methods have gained recent interest due to the ability to more accurately model complex 3-dimensional geometries and better track the evolution of the temporal nuclide inventory by simulating the actual physical process using continuous-energy coefficients. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a high-fidelity, completely self-contained Monte Carlo-linked depletion capability in a well-established, widely accepted Monte Carlo radiation transport code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross-section data permit in order to achieve a highly accurate temporal nuclide inventory solution. This work chronicles the relevant nuclear history, surveys current methodologies of depletion theory, details the methodology applied in MCNPX and provides benchmark results for three independent OECD/NEA benchmarks. Relevant nuclear history, from the Oklo reactor two billion years ago to the current major United States nuclear fuel cycle development programs, is addressed in order to supply the motivation for the development of this technology. A survey of current reaction rate and temporal nuclide inventory techniques is then provided to offer justification for the depletion strategy applied within MCNPX. The MCNPX depletion strategy is then dissected and each code feature is detailed, chronicling the methodology development from the original linking of MONTEBURNS and MCNP to the most recent public release of the integrated capability (MCNPX 2.6.F). Calculation results for the OECD/NEA Phase IB benchmark, the H. B. Robinson benchmark and OECD/NEA Phase IVB are then provided. The acceptable results of these calculations offer sufficient confidence in the predictive capability of the MCNPX depletion method. 
This capability sets up a significant foundation, in a well established and supported radiation transport code, for further development of a Monte Carlo-linked depletion methodology which is essential to the future development of advanced reactor technologies that exceed the limitations of current deterministic based methods.

  7. Modeling the reflectance of the lunar regolith by a new method combining Monte Carlo Ray tracing and Hapke's model with application to Chang'E-1 IIM data.

    PubMed

    Wong, Un-Hong; Wu, Yunzhao; Wong, Hon-Cheng; Liang, Yanyan; Tang, Zesheng

    2014-01-01

    In this paper, we model the reflectance of the lunar regolith by a new method combining Monte Carlo ray tracing and Hapke's model. The existing modeling methods exploit either a radiative transfer model or a geometric optical model. However, the measured data from an Interference Imaging spectrometer (IIM) on an orbiter were affected not only by the composition of minerals but also by the environmental factors. These factors cannot be well addressed by a single model alone. Our method implemented Monte Carlo ray tracing for simulating the large-scale effects such as the reflection of topography of the lunar soil and Hapke's model for calculating the reflection intensity of the internal scattering effects of particles of the lunar soil. Therefore, both the large-scale and microscale effects are considered in our method, providing a more accurate modeling of the reflectance of the lunar regolith. Simulation results using the Lunar Soil Characterization Consortium (LSCC) data and Chang'E-1 elevation map show that our method is effective and useful. We have also applied our method to Chang'E-1 IIM data for removing the influence of lunar topography to the reflectance of the lunar soil and to generate more realistic visualizations of the lunar surface. PMID:24526892

  8. Modeling the Reflectance of the Lunar Regolith by a New Method Combining Monte Carlo Ray Tracing and Hapke's Model with Application to Chang'E-1 IIM Data

    PubMed Central

    Wu, Yunzhao; Tang, Zesheng

    2014-01-01

    In this paper, we model the reflectance of the lunar regolith by a new method combining Monte Carlo ray tracing and Hapke's model. The existing modeling methods exploit either a radiative transfer model or a geometric optical model. However, the measured data from an Interference Imaging spectrometer (IIM) on an orbiter were affected not only by the composition of minerals but also by the environmental factors. These factors cannot be well addressed by a single model alone. Our method implemented Monte Carlo ray tracing for simulating the large-scale effects such as the reflection of topography of the lunar soil and Hapke's model for calculating the reflection intensity of the internal scattering effects of particles of the lunar soil. Therefore, both the large-scale and microscale effects are considered in our method, providing a more accurate modeling of the reflectance of the lunar regolith. Simulation results using the Lunar Soil Characterization Consortium (LSCC) data and Chang'E-1 elevation map show that our method is effective and useful. We have also applied our method to Chang'E-1 IIM data for removing the influence of lunar topography to the reflectance of the lunar soil and to generate more realistic visualizations of the lunar surface. PMID:24526892

  9. A new NaI(Tl) four-detector layout for field contamination assessment using artificial neural networks and the Monte Carlo method for system calibration

    NASA Astrophysics Data System (ADS)

    Moreira, M. C. F.; Conti, C. C.; Schirru, R.

    2010-09-01

    An NaI(Tl) multidetector layout combined with the use of Monte Carlo (MC) calculations and artificial neural networks (ANN) is proposed to assess the radioactive contamination of urban and semi-urban environment surfaces. A very simple urban environment, a model street composed of a wall on either side and the ground surface, was the study case. A layout of four NaI(Tl) detectors was used, and the data corresponding to the response of the detectors were obtained by the Monte Carlo method. Two additional data sets with random values for the contamination and for the detectors' response were also produced to test the ANNs. For this work, 18 feedforward-topology ANNs with the backpropagation learning algorithm were chosen and trained. The results showed that some trained ANNs were able to accurately predict the contamination on the three urban surfaces when submitted to values within the training range. Other results showed that generalization outside the training range of values could not be achieved. The use of Monte Carlo calculations in combination with ANNs has been proven to be a powerful tool to perform detection calibration for highly complicated detection geometries.
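A toy version of this calibration idea — train a small feedforward network, by backpropagation, to invert simulated detector responses back to the contamination on the three surfaces — might look as follows. The linear response matrix (playing the role of the Monte Carlo detector model), the 4-8-3 network size, and all training constants are invented for illustration:

```python
import math
import random

random.seed(0)

# Hypothetical 4-detector response matrix for three contaminated surfaces
# (left wall, ground, right wall); purely illustrative numbers.
G = [[0.9, 0.3, 0.1],
     [0.5, 0.8, 0.2],
     [0.2, 0.8, 0.5],
     [0.1, 0.3, 0.9]]

def forward_model(c):
    """Detector responses for surface contaminations c (the role played
    by the Monte Carlo transport calculation in the paper)."""
    return [sum(g * x for g, x in zip(row, c)) for row in G]

# tiny 4-8-3 feedforward net with a tanh hidden layer, trained by SGD
H = 8
W1 = [[random.uniform(-0.5, 0.5) for _ in range(4)] for _ in range(H)]
b1 = [0.0] * H
W2 = [[random.uniform(-0.5, 0.5) for _ in range(H)] for _ in range(3)]
b2 = [0.0] * 3

def predict(r):
    h = [math.tanh(sum(w * x for w, x in zip(row, r)) + b)
         for row, b in zip(W1, b1)]
    y = [sum(w * x for w, x in zip(row, h)) + b for row, b in zip(W2, b2)]
    return y, h

def train_step(r, c, lr=0.05):
    y, h = predict(r)
    err = [yi - ci for yi, ci in zip(y, c)]
    # hidden-layer gradient computed before the output weights are updated
    dh = [sum(err[k] * W2[k][j] for k in range(3)) * (1 - h[j] ** 2)
          for j in range(H)]
    for k in range(3):
        for j in range(H):
            W2[k][j] -= lr * err[k] * h[j]
        b2[k] -= lr * err[k]
    for j in range(H):
        for i in range(4):
            W1[j][i] -= lr * dh[j] * r[i]
        b1[j] -= lr * dh[j]
    return sum(e * e for e in err)

# training pairs: (detector responses, true contaminations) in [0, 1]
data = []
for _ in range(500):
    c = [random.random() for _ in range(3)]
    data.append((forward_model(c), c))

first = sum(train_step(r, c) for r, c in data)  # loss over first epoch
for _ in range(40):
    last = sum(train_step(r, c) for r, c in data)
print(first, last)
```

As in the abstract, such a network can only be trusted within the contamination range it was trained on; extrapolation outside that range is exactly what the authors found to fail.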

  10. Monte Carlo based method for conversion of in-situ gamma ray spectra obtained with a portable Ge detector to an incident photon flux energy distribution.

    PubMed

    Clouvas, A; Xanthos, S; Antonopoulos-Domis, M; Silva, J

    1998-02-01

    A Monte Carlo based method for the conversion of an in-situ gamma-ray spectrum obtained with a portable Ge detector to photon flux energy distribution is proposed. The spectrum is first stripped of the partial absorption and cosmic-ray events leaving only the events corresponding to the full absorption of a gamma ray. Applying to the resulting spectrum the full absorption efficiency curve of the detector determined by calibrated point sources and Monte Carlo simulations, the photon flux energy distribution is deduced. The events corresponding to partial absorption in the detector are determined by Monte Carlo simulations for different incident photon energies and angles using the CERN's GEANT library. Using the detector's characteristics given by the manufacturer as input it is impossible to reproduce experimental spectra obtained with point sources. A transition zone of increasing charge collection efficiency has to be introduced in the simulation geometry, after the inactive Ge layer, in order to obtain good agreement between the simulated and experimental spectra. The functional form of the charge collection efficiency is deduced from a diffusion model. PMID:9450590
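The stripping-plus-efficiency procedure can be sketched as a top-down subtraction over a line spectrum: remove each line's Monte Carlo-derived partial-absorption continuum from the channels below it, then divide the remaining full-absorption counts by the efficiency. The line energies, efficiencies, and partial-absorption fractions below are synthetic placeholders, not GEANT results:

```python
def strip_spectrum(counts, partial, eff):
    """Strip partial-absorption events from a net line spectrum and
    convert the remaining full-absorption peaks to incident photon flux.

    counts[i]     - net counts in line i (ascending energy)
    partial[i][j] - partial-absorption counts deposited in line j per
                    full-absorption count of line i (from Monte Carlo)
    eff[i]        - full-absorption efficiency at the energy of line i
    """
    work = list(counts)
    flux = [0.0] * len(counts)
    for i in reversed(range(len(counts))):  # highest energy first
        full = work[i]
        flux[i] = full / eff[i]
        for j in range(i):                  # remove its continuum below
            work[j] -= full * partial[i][j]
    return flux

# Synthetic check: build a spectrum forward from a known flux, then invert.
true_flux = [100.0, 80.0, 50.0, 20.0]
eff = [0.5, 0.4, 0.3, 0.2]
partial = [[0.1 if j < i else 0.0 for j in range(4)] for i in range(4)]
full_peaks = [f * e for f, e in zip(true_flux, eff)]
counts = list(full_peaks)
for i in range(4):
    for j in range(i):
        counts[j] += full_peaks[i] * partial[i][j]

recovered = strip_spectrum(counts, partial, eff)
print(recovered)
```

Because the forward model and the stripping loop are exact inverses here, the recovered flux matches the true flux; in practice both `partial` and `eff` carry Monte Carlo and calibration uncertainties.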

  11. Calculating infinite-medium α-eigenvalue spectra with Monte Carlo using a transition rate matrix method

    SciTech Connect

    Betzler, Benjamin R.; Kiedrowski, Brian C.; Brown, Forrest B.; Martin, William R.

    2015-08-28

    The time-dependent behavior of the energy spectrum in neutron transport was investigated with a formulation, based on continuous-time Markov processes, for computing α eigenvalues and eigenvectors in an infinite medium. In this study, a research Monte Carlo code called “TORTE” (To Obtain Real Time Eigenvalues) was created and used to estimate elements of a transition rate matrix. TORTE is capable of using both multigroup and continuous-energy nuclear data, and verification was performed. Eigenvalue spectra for infinite homogeneous mixtures were obtained, and an eigenfunction expansion was used to investigate transient behavior of the neutron energy spectrum.
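
    The core idea, sketched here with made-up two-group data rather than the TORTE implementation, is direct: for an infinite medium the group densities obey dn/dt = A n, where A is the transition rate matrix, so the α eigenvalues are simply the eigenvalues of A.

```python
import numpy as np

v = np.array([2.0e9, 2.2e5])            # group speeds (cm/s), invented
sig_t = np.array([0.30, 1.20])          # total macroscopic cross sections (1/cm)
# scatter[i, j]: scattering from group j into group i (1/cm)
scatter = np.array([[0.05, 0.00],
                    [0.20, 0.80]])
nu_sig_f = np.array([0.00, 0.60])       # fission production cross section (1/cm)
chi = np.array([1.0, 0.0])              # fission spectrum: all neutrons born fast

# dn/dt = A n: losses on the diagonal, scattering and fission as gains.
A = (-np.diag(v * sig_t)
     + scatter * v[np.newaxis, :]
     + np.outer(chi, nu_sig_f * v))

alphas = np.linalg.eigvals(A)                 # the alpha-eigenvalue spectrum
fundamental = alphas[np.argmax(alphas.real)]  # slowest / fundamental mode
# With these invented numbers the medium is slightly supercritical, so the
# fundamental alpha comes out positive.
```

    A Monte Carlo code like TORTE estimates the elements of A by tallying transition rates instead of writing them down analytically, but the eigenproblem it solves has this same structure.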

  12. Development of a new ionisation chamber, for HP(10) measurement, using Monte-Carlo simulation and experimental methods.

    PubMed

    Silva, H; Cardoso, J; Oliveira, C

    2011-03-01

    An ionisation chamber that directly measures the quantity personal dose equivalent, H(p)(10), is used as a secondary standard in some metrology laboratories. An ionisation chamber of this type was first developed by Ankerhold. Using the Monte-Carlo simulation, the dose in the sensitive volume as a function of the IC dimensions and the effects of the several components of the ionising chamber have been investigated. Based on these results, a new ionising chamber, lighter than the previous ones, is constructed and experimentally tested. PMID:21208934

  14. Germanene-like defects in Reverse Monte Carlo model of amorphous germanium revealed through new visualization method

    NASA Astrophysics Data System (ADS)

    Rahemtulla, Al; Tomberli, Bruno; Kim, Edward; Roorda, Sjoerd; Kycia, Stefan

    High-resolution X-ray diffraction experiments on amorphous germanium (a-Ge) revealed structural differences that cannot be reconciled with accepted theoretical models. An intuitive computational technique has been developed to construct 3D statistical density maps to directly resolve the local atomic structure of a-Ge. A Reverse Monte Carlo routine is used to compare the continuous random network model to the experimental model of a-Ge. Undercoordination in the refined model is shown to exist bimodally, as a 4-coordinated tetrahedron and a buckled 3-coordinated structure similar to germanene. These structures account for 95.7% of the total atoms in an approximate 5:2 ratio, respectively.
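
    A bare-bones sketch of the Reverse Monte Carlo acceptance rule used in refinements of this kind: random moves are kept if they lower χ² between model and experiment, and otherwise kept with probability exp(-Δχ²/2). For brevity this toy adjusts two peak parameters rather than atomic coordinates, and fits a synthetic target curve; none of it is the authors' actual code.

```python
import numpy as np

rng = np.random.default_rng(1)
r = np.linspace(1.0, 6.0, 50)
g_exp = 1.0 + np.exp(-((r - 2.45) ** 2) / 0.05)    # synthetic "experimental" curve
sigma = 0.05                                        # assumed data uncertainty

def g_model(p):
    # Two-parameter model curve: peak position p[0], width parameter p[1].
    return 1.0 + np.exp(-((r - p[0]) ** 2) / p[1])

def chi2(p):
    return float(np.sum(((g_model(p) - g_exp) / sigma) ** 2))

params = np.array([2.0, 0.2])                       # deliberately wrong start
start = chi2(params)
current = start
for _ in range(5000):
    trial = params + rng.normal(0.0, 0.02, size=2)  # random "move"
    if trial[1] <= 0.01:                            # keep the width physical
        continue
    new = chi2(trial)
    # RMC/Metropolis rule: always accept improvements, occasionally accept
    # small deteriorations.
    if new < current or rng.random() < np.exp((current - new) / 2.0):
        params, current = trial, new
```

    A real RMC refinement of a-Ge moves individual atoms in a large supercell and compares the computed pair distribution or structure factor against diffraction data, but the acceptance logic is the same.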

  15. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    Many problems in statistics that are mathematically intractable can be meaningfully investigated through Monte Carlo methods, which analyze random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…

  16. Poster — Thur Eve — 45: Comparison of different Monte Carlo methods of scoring linear energy transfer in modulated proton therapy beams

    SciTech Connect

    Granville, DA; Sawakuchi, GO

    2014-08-15

    In this work, we demonstrate inconsistencies in commonly used Monte Carlo methods of scoring linear energy transfer (LET) in proton therapy beams. In particle therapy beams, the LET is an important parameter because the relative biological effectiveness (RBE) depends on it. LET is often determined using Monte Carlo techniques. We used a realistic Monte Carlo model of a proton therapy nozzle to score proton LET in spread-out Bragg peak (SOBP) depth-dose distributions. We used three different scoring and calculation techniques to determine average LET at varying depths within a 140 MeV beam with a 4 cm SOBP and a 250 MeV beam with a 10 cm SOBP. These techniques included fluence-weighted (Φ-LET) and dose-weighted average (D-LET) LET calculations from: 1) scored energy spectra converted to LET spectra through a lookup table, 2) directly scored LET spectra and 3) accumulated LET scored ‘on-the-fly’ during simulations. All protons (primary and secondary) were included in the scoring. Φ-LET was found to be less sensitive to changes in scoring technique than D-LET. In addition, the spectral scoring methods were sensitive to low-energy (high-LET) cutoff values in the averaging. Using cutoff parameters chosen carefully for consistency between techniques, we found variations in Φ-LET values of up to 1.6% and variations in D-LET values of up to 11.2% for the same irradiation conditions, depending on the method used to score LET. Variations were largest near the end of the SOBP, where the LET and energy spectra are broader.
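
    The two averages compared in the record above can be written down in a few lines. The LET spectrum here is a hypothetical toy, but it shows why the dose-weighted average reacts much more strongly to a low-fluence, high-LET cutoff than the fluence-weighted one.

```python
import numpy as np

L_bins = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])      # LET bins (keV/um)
phi = np.array([100.0, 80.0, 40.0, 10.0, 2.0, 0.5])      # fluence per bin

def phi_let(L, phi):
    # Fluence-weighted average LET.
    return float(np.sum(phi * L) / np.sum(phi))

def d_let(L, phi):
    # Dose-weighted average LET (dose per bin is proportional to phi * L).
    return float(np.sum(phi * L ** 2) / np.sum(phi * L))

full_phi, full_d = phi_let(L_bins, phi), d_let(L_bins, phi)

# Apply a low-fluence (high-LET) cutoff like the one discussed above.
mask = phi > 1.0
cut_phi, cut_d = phi_let(L_bins[mask], phi[mask]), d_let(L_bins[mask], phi[mask])
```

    With these toy numbers, dropping the sparsest high-LET bin shifts D-LET by roughly 19% but Φ-LET by only about 3%, mirroring (qualitatively, not in magnitude) the relative sensitivities reported in the abstract.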

  17. Patient-specific organ dose estimation during transcatheter arterial embolization using Monte Carlo method and adaptive organ segmentation

    NASA Astrophysics Data System (ADS)

    Tsai, Hui-Yu; Lin, Yung-Chieh; Tyan, Yeu-Sheng

    2014-11-01

    The purpose of this study was to evaluate organ doses for individual patients undergoing interventional transcatheter arterial embolization (TAE) for hepatocellular carcinoma (HCC) using measurement-based Monte Carlo simulation and adaptive organ segmentation. Five patients were enrolled in this study after institutional ethical approval and informed consent. Gafchromic XR-RV3 films were used to measure entrance surface dose to reconstruct the nonuniform fluence distribution field as the input data in the Monte Carlo simulation. XR-RV3 films were chosen over XR-RV2 films because of their lower energy dependence. To calculate organ doses, each patient's three-dimensional dose distribution was incorporated into CT DICOM images with image segmentation using thresholding and k-means clustering. Organ doses for all patients were estimated. Our dose evaluation system not only evaluated entrance surface doses based on measurements, but also evaluated the 3D dose distribution within patients using simulations. When film measurements were unavailable, the peak skin dose, a fraction (between 0.68 and 0.82) of the cumulative dose, could be calculated from the cumulative dose obtained from TAE dose reports. Successful implementation of this dose evaluation system will aid radiologists and technologists in determining the actual dose distributions within patients undergoing TAE.
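
    The segmentation step can be illustrated with a toy one-dimensional k-means on CT numbers. The Hounsfield-unit clusters and class count below are invented for illustration and are far simpler than the segmentation of real DICOM volumes.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic CT numbers (Hounsfield units) for three tissue classes.
hu = np.concatenate([rng.normal(-1000.0, 30.0, 300),   # air
                     rng.normal(50.0, 20.0, 300),      # soft tissue
                     rng.normal(700.0, 80.0, 300)])    # bone

def kmeans_1d(x, k, iters=50):
    # Spread-out quantile initialization avoids empty starting clusters.
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):        # guard against empty clusters
                centers[j] = x[labels == j].mean()
    return centers, labels

centers, labels = kmeans_1d(hu, 3)
```

    Once each voxel carries a class label, the simulated 3D dose can be accumulated per class (or per organ) to produce the organ-dose estimates described above.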

  18. Harmonically trapped fermions in two dimensions: Ground-state energy and contact of SU(2) and SU(4) systems via a nonuniform lattice Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Luo, Zhihuan; Berger, Casey E.; Drut, Joaquín E.

    2016-03-01

    We study harmonically trapped, unpolarized fermion systems with attractive interactions in two spatial dimensions with spin degeneracies Nf=2 and 4 and N /Nf=1 ,3 ,5 , and 7 particles per flavor. We carry out our calculations using our recently proposed quantum Monte Carlo method on a nonuniform lattice. We report on the ground-state energy and contact for a range of couplings, as determined by the binding energy of the two-body system, and show explicitly how the physics of the Nf-body sector dominates as the coupling is increased.

  19. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators of use in cancer therapy

    NASA Astrophysics Data System (ADS)

    García-Pareja, S.; Vilches, M.; Lallena, A. M.

    2010-01-01

    The Monte Carlo simulation of clinical electron linear accelerators requires large computation times to achieve the level of uncertainty required for radiotherapy. In this context, variance reduction techniques play a fundamental role in reducing this computational time. Here we describe the use of the ant colony method to control the application of two variance reduction techniques: splitting and Russian roulette. The approach can be applied to any accelerator in a straightforward way and increases the efficiency of the simulation by a factor larger than 50.
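
    The two techniques being controlled are simple weight games, sketched below with an illustrative splitting factor and survival probability. Both leave the expected particle weight unchanged, which is what keeps the simulation unbiased while the ant-colony scheme tunes where and how aggressively they are applied.

```python
import numpy as np

rng = np.random.default_rng(3)

def split(weight, n):
    # Entering a more important region: n copies, same total weight.
    return [weight / n] * n

def russian_roulette(weight, survival_prob, rng):
    # Entering a less important region: kill the particle with probability
    # 1 - survival_prob; survivors carry proportionally more weight.
    if rng.random() < survival_prob:
        return weight / survival_prob
    return None

# Fair-game check: both techniques preserve the expected total weight.
assert sum(split(1.0, 4)) == 1.0

survivors = [russian_roulette(1.0, 0.25, rng) for _ in range(100_000)]
mean_w = sum(s for s in survivors if s is not None) / len(survivors)
```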

  20. Monte Carlo solution methods in a moment-based scale-bridging algorithm for thermal radiative transfer problems: Comparison with Fleck and Cummings

    SciTech Connect

    Park, H.; Densmore, J. D.; Wollaber, A. B.; Knoll, D. A.; Rauenzahn, R. M.

    2013-07-01

    We have developed a moment-based scale-bridging algorithm for thermal radiative transfer problems. The algorithm takes the form of the well-known nonlinear-diffusion acceleration, which utilizes a low-order (LO) continuum problem to accelerate the solution of a high-order (HO) kinetic problem. The coupled nonlinear equations that form the LO problem are efficiently solved using a preconditioned Jacobian-free Newton-Krylov method. This work demonstrates the applicability of the scale-bridging algorithm with a Monte Carlo HO solver and reports the computational efficiency of the algorithm in comparison to the well-known Fleck-Cummings algorithm.