NASA Astrophysics Data System (ADS)
Zamani, M.; Kasesaz, Y.; Khalafi, H.; Pooya, S. M. Hosseini
Boron Neutron Capture Therapy (BNCT) is used for the treatment of many diseases, including brain tumors, in many medical centers. In this method, a target area (e.g., the patient's head) is irradiated by an optimized, suitable neutron field, such as that produced by a research reactor. To protect the healthy tissues located in the vicinity of the irradiated tissue, and in keeping with the ALARA principle, unnecessary exposure of these vital organs must be prevented. In this study, the absorbed dose in the target tissue and the equivalent dose in different sensitive tissues of a patient treated by BNCT were calculated by numerical simulation (MCNP4C code), using the parameters of the MIRD standard phantom. The equivalent dose in 11 sensitive organs located in the vicinity of the target, and the total equivalent dose in the whole body, were calculated. The results show that the absorbed doses in the tumor and in normal brain tissue are 30.35 Gy and 0.19 Gy, respectively. The total equivalent dose in the 11 sensitive organs, other than the tumor and normal brain tissue, is 14 mGy. The maximum equivalent doses in organs other than the brain and tumor occur in the lungs and thyroid, and are 7.35 mSv and 3.00 mSv, respectively.
Determination of {beta}{sub eff} using MCNP-4C2 and application to the CROCUS and PROTEUS reactors
Vollaire, J.; Plaschy, M.; Jatuff, F.; Chawla, R.
2006-07-01
A new Monte Carlo method for the determination of {beta}{sub eff} has been recently developed and tested using appropriate models of the experimental reactors CROCUS and PROTEUS. The current paper describes the applied methodology and highlights the resulting improvements compared to the simplest MCNP approach, i.e. the 'prompt method' technique. In addition, the flexibility advantages of the developed method are presented. Specifically, the possibility to obtain the effective delayed neutron fraction {beta}{sub eff} per delayed neutron group, per fissioning nuclide and per reactor region is illustrated. Finally, the MCNP predictions of {beta}{sub eff} are compared to the results of deterministic calculations. (authors)
Dawahra, S; Khattab, K; Saba, G
2015-10-01
A comparative study of fuel conversion from HEU to LEU in the Miniature Neutron Source Reactor (MNSR) was performed in this paper using the MCNP4C code. The neutron energy and lethargy flux spectra in the first inner and outer irradiation sites of the MNSR were investigated for the existing HEU fuel (UAl4-Al, 90% enriched) and the potential LEU fuels (U3Si2-Al, U3Si-Al, U9Mo-Al, 19.75% enriched; UO2, 12.6% enriched) using the MCNP4C code. The neutron energy flux spectrum for each group was calculated by dividing the group flux by the width of the energy group. The neutron flux spectrum per unit lethargy was calculated by multiplying the energy flux spectrum of each group by the group's average energy. The thermal neutron flux was calculated by summing the neutron fluxes from 0.0 to 0.625 eV, and the fast neutron flux by summing the fluxes from 0.5 MeV to 10 MeV, for both the existing HEU and the potential LEU fuels. Good agreement was found between the flux spectra of the potential LEU fuels and the existing HEU fuel, with maximum relative differences of less than 10% and 8% in the inner and outer irradiation sites, respectively. PMID:26142805
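The group-wise normalizations described in this abstract (flux per unit energy, flux per unit lethargy, and the thermal and fast sums) can be sketched in a few lines; the group boundaries and flux values below are purely illustrative, not the MNSR data:

```python
# Illustrative group structure and fluxes only -- not the MNSR data.
import numpy as np

edges = np.array([1e-3, 0.625, 0.5e6, 10e6])     # group boundaries in eV (ascending)
group_flux = np.array([3.0e11, 1.2e11, 0.8e11])  # group-integrated fluxes, n/cm^2/s

# Flux per unit energy: divide each group flux by the width of its energy group.
flux_per_energy = group_flux / np.diff(edges)

# Flux per unit lethargy: multiply the per-energy spectrum by the group's average energy.
avg_energy = 0.5 * (edges[:-1] + edges[1:])
flux_per_lethargy = flux_per_energy * avg_energy

# Thermal flux: sum of groups up to 0.625 eV; fast flux: sum of groups above 0.5 MeV.
thermal = group_flux[edges[1:] <= 0.625].sum()
fast = group_flux[edges[:-1] >= 0.5e6].sum()
print(thermal, fast)
```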
Eakins, J S; Bartlett, D T; Hager, L G; Molinos-Solsona, C; Tanner, R J
2008-01-01
The Health Protection Agency is changing from using detectors made from 7LiF:Mg,Ti in its photon/electron personal dosemeters, to 7LiF:Mg,Cu,P. Specifically, the Harshaw TLD-700H card is to be adopted. As a consequence of this change, the dosemeter holder is also being modified not only to accommodate the shape of the new card, but also to optimize the photon and electron response characteristics of the device. This redesign process was achieved using MCNP-4C2 and the kerma approximation, electron range/energy tables with additional electron transport calculations, and experimental validation, with different potential filters compared; the optimum filter studied was a polytetrafluoroethylene disc of diameter 18 mm and thickness 4.3 mm. Calculated relative response characteristics at different angles of incidence and energies between 16 and 6174 keV are presented for this new dosemeter configuration and compared with measured type-test results. A new estimate for the energy-dependent relative light conversion efficiency appropriate to the 7LiF:Mg,Cu,P was also derived for determining the correct dosemeter response. PMID:17951605
Bagheri, Reza; Afarideh, Hossein; Maragheh, Mohammad Ghannadi; Shirmardi, Seyed Pezhman; Samani, Ali Bahrami
2015-05-01
Bone metastases are a major clinical concern and can cause severe problems for patients. Currently, various beta emitters are used for bone pain palliation. This study describes the prediction of the absorbed dose from selected bone surface- and volume-seeking beta-emitting radiopharmaceuticals, namely (32)P, (89)SrCl2, (90)Y-EDTMP, (153)Sm-EDTMP, (166)Ho-DOTMP, (177)Lu-EDTMP, (186)Re-HEDP, and (188)Re-HEDP, in human bone using the MCNP code. Three coaxial sub-cylinders 5 cm in height and 1.2, 2.6, and 7.6 cm in diameter were used to simulate bone marrow, bone, and muscle, respectively. The *F8 tally was employed to calculate the absorbed dose in the MCNP4C simulations. The results show that, for an injection of 1 MBq given to a 70 kg adult man, the (32)P, (89)SrCl2, and (90)Y-EDTMP radiopharmaceuticals deliver the highest bone surface absorbed dose, with beta particles contributing the greatest share of that dose compared with gamma radiation. These results show moderate agreement with the available experimental data. PMID:25775234
Comparison of Monte Carlo simulations of photon/electron dosimetry in microscale applications.
Joneja, O P; Negreanu, C; Stepanek, J; Chawla, R
2003-06-01
It is important to establish reliable calculational tools to plan and analyse representative microdosimetry experiments in the context of microbeam radiation therapy development. In this paper, an attempt has been made to investigate the suitability of the MCNP4C Monte Carlo code to adequately model photon/electron transport over micron distances. The case of a single cylindrical microbeam of 25-micron diameter incident on a water phantom has been simulated in detail with both MCNP4C and the code PSI-GEANT, for different incident photon energies, to get absorbed dose distributions at various depths, with and without electron transport being considered. In addition, dose distributions calculated for a single microbeam with a photon spectrum representative of the European Synchrotron Radiation Facility (ESRF) have been compared. Finally, a large number of cylindrical microbeams (a total of 2601 beams, placed on a 200-micron square pitch, covering an area of 1 cm^2) incident on a water phantom have been considered to study cumulative radial dose distributions at different depths. From these distributions, ratios of peak (within the microbeam) to valley (mid-point along the diagonal connecting two microbeams) dose values have been determined. The various comparisons with PSI-GEANT results have shown that MCNP4C, with its high flexibility in terms of its numerous source and geometry description options, variance reduction methods, detailed error analysis, statistical checks and different tally types, can be a valuable tool for the analysis of microbeam experiments. PMID:12956187
NASA Astrophysics Data System (ADS)
Pauzi, A. M.
2013-06-01
The neutron transport code Monte Carlo N-Particle (MCNP), well known as a gold standard for predicting nuclear reactions, was used to model the core of the small nuclear reactor called the "U-Battery™", developed by the University of Manchester and the Delft Institute of Technology. The paper introduces the modeling of this small reactor core, a high-temperature reactor (HTR) with small coated TRISO fuel particles in a graphite matrix, using the MCNPv4C software. The criticality of the core was calculated with the software and analyzed while varying key parameters such as coolant type, fuel type and enrichment level, cladding material, and control rod type. The criticality results from the simulation were validated against the SCALE 5.1 results of [1] M. Ding and J. L. Kloosterman, 2010. The data produced from these analyses will be used in proposing an initial core layout and a provisional list of materials for the newly designed reactor core. In the future, the criticality study will be continued with different core configurations and geometries.
Shell model Monte Carlo methods
Koonin, S.E.; Dean, D.J.
1996-10-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of {gamma}-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.
Zimmerman, G.B.
1997-06-24
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50-fold in efficiency by angular biasing of the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
Applications of Monte Carlo Methods in Calculus.
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Gordon, Florence S.
1990-01-01
Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
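The Riemann-sum application mentioned above amounts to sample-mean Monte Carlo integration; a minimal sketch with a hypothetical integrand, x^2 on [0, 1], whose exact integral is 1/3:

```python
# Sample-mean Monte Carlo integration: for X ~ U(0, 1), E[f(X)] equals the
# integral of f over [0, 1]; here f(x) = x**2, exact value 1/3.
import random

random.seed(1)
n = 100_000
estimate = sum(random.random() ** 2 for _ in range(n)) / n
print(estimate)  # close to 1/3 for large n
```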
Monte Carlo methods on advanced computer architectures
Martin, W.R.
1991-12-31
Monte Carlo methods describe a wide class of computational methods that utilize random numbers to perform a statistical simulation of a physical problem, which itself need not be a stochastic process. For example, Monte Carlo can be used to evaluate definite integrals, which are not stochastic processes, or to simulate the transport of electrons in a space vehicle, which is a stochastic process. The name Monte Carlo came about during the Manhattan Project to describe the new mathematical methods being developed, which had some similarity to the games of chance played in the casinos of Monte Carlo. Particle transport Monte Carlo is just one application of Monte Carlo methods, and is the subject of this review paper. Other applications of Monte Carlo, such as reliability studies, classical queueing theory, molecular structure, the study of phase transitions, or quantum chromodynamics calculations for basic research in particle physics, are not included in this review. The reference by Kalos is an introduction to general Monte Carlo methods, and references to other applications of Monte Carlo can be found in this excellent book. For the remainder of this paper, the term Monte Carlo will be synonymous with particle transport Monte Carlo, unless otherwise noted. 60 refs., 14 figs., 4 tabs.
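As a toy instance of particle transport Monte Carlo in the spirit of this review, one can estimate transmission through a purely absorbing slab, for which the analytic answer exp(-σL) is available as a check; the cross section and thickness below are arbitrary illustrative values:

```python
# Toy 1-D transport: a particle is "transmitted" if its sampled free path
# (exponentially distributed with rate sigma) exceeds the slab thickness L.
# Analytic transmission is exp(-sigma * L); sigma and L are illustrative.
import math
import random

random.seed(2)
sigma, L, n = 1.0, 2.0, 200_000
transmitted = sum(1 for _ in range(n) if random.expovariate(sigma) > L)
print(transmitted / n, math.exp(-sigma * L))  # the two values should be close
```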
Nedaie, H A; Mosleh-Shirazi, M A; Allahverdi, M
2013-01-01
Electron dose distributions calculated using the currently available analytical methods can be associated with large uncertainties. The Monte Carlo method is the most accurate method for dose calculation in electron beams. Most clinical electron beam simulation studies have been performed using non-MCNP (Monte Carlo N-Particle) codes. Given the differences between Monte Carlo codes, this work aims to evaluate the accuracy of MCNP4C-simulated electron dose distributions in a homogeneous phantom and around inhomogeneities. Phantoms of varying complexity were used; namely, a homogeneous water phantom and phantoms made of polymethyl methacrylate slabs containing different-sized, low- and high-density inserts of heterogeneous materials. Electron beams with 8 and 15 MeV nominal energy generated by an Elekta Synergy linear accelerator were investigated. Measurements were performed for a 10 cm × 10 cm applicator at a source-to-surface distance of 100 cm. Individual parts of the beam-defining system were introduced into the simulation one at a time in order to show their effect on depth doses. In contrast to the first scattering foil, the secondary scattering foil, X and Y jaws and applicator contribute up to 5% of the dose. A 2%/2 mm agreement between MCNP and measurements was found in the homogeneous phantom; in the presence of heterogeneities, differences were in the range of 1-3%, generally within 2% of the measurements for both energies in a "complex" phantom. A full-component simulation is necessary in order to obtain a realistic model of the beam. The MCNP4C results agree well with the measured electron dose distributions. PMID:23533162
Improved Monte Carlo Renormalization Group Method
DOE R&D Accomplishments Database
Gupta, R.; Wilson, K. G.; Umrigar, C.
1985-01-01
An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.
Quantum speedup of Monte Carlo methods
Montanaro, Ashley
2015-01-01
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079
An enhanced Monte Carlo outlier detection method.
Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi
2015-09-30
Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the root mean square error of prediction for validation by Kovats retention indices decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc. PMID:26226927
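The general idea (many random train/test splits, then per-sample distributions of out-of-sample prediction errors) can be sketched under simplified assumptions: a plain least-squares model and synthetic data with one planted outlier, not the authors' cross-prediction models or datasets:

```python
# Monte Carlo outlier detection sketch: fit many models on random subsets and
# flag samples whose out-of-sample prediction errors are consistently large.
# Data, model, and threshold are synthetic/illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 60
x = rng.uniform(0, 10, n)
y = 2.0 * x + 1.0 + rng.normal(0, 0.2, n)
y[5] += 8.0                          # planted outlier

errors = [[] for _ in range(n)]
for _ in range(500):                 # Monte Carlo resampling loop
    train = rng.choice(n, size=n // 2, replace=False)
    test = np.setdiff1d(np.arange(n), train)
    A = np.vstack([x[train], np.ones(train.size)]).T
    coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
    pred = coef[0] * x[test] + coef[1]
    for i, e in zip(test, np.abs(pred - y[test])):
        errors[i].append(e)

mean_err = np.array([np.mean(e) for e in errors])
outliers = np.where(mean_err > mean_err.mean() + 3 * mean_err.std())[0]
print(outliers)                      # the planted sample stands out
```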
Monte Carlo methods to calculate impact probabilities
NASA Astrophysics Data System (ADS)
Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.
2014-09-01
Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find excellent agreement between all methods in the general case, while large differences appear in the immediate vicinity of the singularities.
With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the averaged Wetherill impact probability diverges toward infinity near the singularities, while the Hill sphere method severely underestimates the probability. We discuss the reasons for these differences, and we finally present the results of the MOID method in the form of probability maps for the Earth and Mars on their current orbits. These maps show a relatively flat probability distribution, except for two ridges, found at small inclinations and at coinciding projectile/target perihelion distances. Conclusions: Our results verify the standard formulae in the general case, away from the singularities. In fact, severe shortcomings are limited to the immediate vicinity of those extreme orbits. On the other hand, the new Monte Carlo methods can be used without excessive consumption of computer time, and the MOID method avoids the problems associated with the other methods. Appendices are available in electronic form at http://www.aanda.org
An introduction to Monte Carlo methods
NASA Astrophysics Data System (ADS)
Walter, J.-C.; Barkema, G. T.
2015-01-01
Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo simulations are ergodicity and detailed balance. The Ising model is a lattice spin system with nearest-neighbor interactions that is appropriate for illustrating different examples of Monte Carlo simulations. It displays a second-order phase transition between disordered (high temperature) and ordered (low temperature) phases, leading to different strategies of simulation. The Metropolis algorithm and the Glauber dynamics are efficient at high temperature. Close to the critical temperature, where the spins display long-range correlations, cluster algorithms are more efficient. We introduce the rejection-free (or continuous-time) algorithm and describe in detail an interesting alternative representation of the Ising model using graphs instead of spins, the so-called worm algorithm. We conclude with a discussion of dynamical effects such as thermalization and correlation time.
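A minimal single-spin-flip Metropolis sketch of the 2D Ising model discussed above (illustrative lattice size, temperature, and sweep count; units with J = 1 and k_B = 1):

```python
# Metropolis algorithm for the 2D Ising model with periodic boundaries.
# Lattice size, temperature, and sweep count are illustrative choices.
import math
import random

random.seed(3)
L, T, sweeps = 16, 1.5, 200            # T = 1.5 is below T_c ~ 2.27 (ordered phase)
spins = [[1] * L for _ in range(L)]    # start from the all-up configuration

for _ in range(sweeps * L * L):
    i, j = random.randrange(L), random.randrange(L)
    nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
          + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
    dE = 2 * spins[i][j] * nb          # energy cost of flipping spin (i, j)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i][j] *= -1              # accept the flip (detailed balance)

m = abs(sum(sum(row) for row in spins)) / L ** 2
print(m)                               # magnetization stays large below T_c
```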
Density-matrix quantum Monte Carlo method
NASA Astrophysics Data System (ADS)
Blunt, N. S.; Rogers, T. W.; Spencer, J. S.; Foulkes, W. M. C.
2014-06-01
We present a quantum Monte Carlo method capable of sampling the full density matrix of a many-particle system at finite temperature. This allows arbitrary reduced density matrix elements and expectation values of complicated nonlocal observables to be evaluated easily. The method resembles full configuration interaction quantum Monte Carlo but works in the space of many-particle operators instead of the space of many-particle wave functions. One simulation provides the density matrix at all temperatures simultaneously, from T = ∞ to T = 0, allowing the temperature dependence of expectation values to be studied. The direct sampling of the density matrix also allows the calculation of some previously inaccessible entanglement measures. We explain the theory underlying the method, describe the algorithm, and introduce an importance-sampling procedure to improve the stochastic efficiency. To demonstrate the potential of our approach, the energy and staggered magnetization of the isotropic antiferromagnetic Heisenberg model on small lattices, the concurrence of one-dimensional spin rings, and the Rényi S2 entanglement entropy of various sublattices of the 6 × 6 Heisenberg model are calculated. The nature of the sign problem in the method is also investigated.
Quantum Monte Carlo methods for nuclear physics
Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E.; Wiringa, Robert B.
2014-10-19
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Quantum Monte Carlo methods for nuclear physics
Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.
2015-09-09
Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Discrete range clustering using Monte Carlo methods
NASA Technical Reports Server (NTRS)
Chatterji, G. B.; Sridhar, B.
1993-01-01
For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
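One of the basic Monte Carlo approaches compared in such work can be caricatured as simulated annealing over cluster assignments; the 1-D range values, cluster count, and cooling schedule below are invented for illustration and are not the paper's algorithms or data:

```python
# Simulated annealing over cluster labels, minimizing the summed within-cluster
# squared deviation. Data, k, and cooling schedule are invented for illustration.
import math
import random

random.seed(4)
ranges = [10.1, 10.4, 9.9, 50.2, 49.8, 50.5]  # two well-separated "objects"
k = 2
labels = [random.randrange(k) for _ in ranges]

def cost(lab):
    """Sum of squared deviations from each cluster's mean."""
    total = 0.0
    for c in range(k):
        pts = [r for r, l in zip(ranges, lab) if l == c]
        if pts:
            mu = sum(pts) / len(pts)
            total += sum((p - mu) ** 2 for p in pts)
    return total

T = 5.0
for _ in range(2000):
    i = random.randrange(len(ranges))
    proposal = labels[:]
    proposal[i] = random.randrange(k)            # move one point to a random cluster
    dE = cost(proposal) - cost(labels)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        labels = proposal                        # Metropolis acceptance
    T *= 0.995                                   # geometric cooling

print(labels)  # the two groups of range values end up in different clusters
```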
Calculating Pi Using the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Williamson, Timothy
2013-11-01
During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the Center for Sustainable Energy at Notre Dame (RET @ cSEND), working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation, a reactor may produce 10^21 antineutrinos per second, with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and with how the total antineutrino flux could be obtained from such a small sample, I read about the Monte Carlo simulation method. Further investigation led me to the Monte Carlo method page of Wikipedia, where I saw an example of approximating pi using this simulation. Other examples of this method were typically computer simulations or purely mathematical. It is my belief that this method may be easily related to students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.
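The rice-sprinkling activity described above has a direct numerical analogue: scatter uniform random points in the unit square and count those landing inside the quarter circle; the sample size below is arbitrary:

```python
# Monte Carlo estimate of pi: the fraction of random points in the unit square
# that fall inside the quarter circle of radius 1 approximates pi/4.
import random

random.seed(5)
n = 1_000_000
inside = sum(1 for _ in range(n)
             if random.random() ** 2 + random.random() ** 2 <= 1.0)
pi_estimate = 4 * inside / n
print(pi_estimate)  # close to 3.14159 for large n
```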
Methods for Monte Carlo simulations of biomacromolecules
Vitalis, Andreas; Pappu, Rohit V.
2010-01-01
The state of the art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies. PMID:20428473
Dosimetry of gamma chamber blood irradiator using PAGAT gel dosimeter and Monte Carlo simulations.
Mohammadyari, Parvin; Zehtabian, Mehdi; Sina, Sedigheh; Tavasoli, Ali Reza; Faghihi, Reza
2014-01-01
Currently, the use of blood irradiation for inactivating pathogenic microbes in infected blood products and preventing graft-versus-host disease (GVHD) in immune suppressed patients is greater than ever before. In these systems, dose distribution and uniformity are two important concepts that should be checked. In this study, dosimetry of the gamma chamber blood irradiator model Gammacell 3000 Elan was performed by several dosimeter methods including thermoluminescence dosimeters (TLD), PAGAT gel dosimetry, and Monte Carlo simulations using MCNP4C code. The gel dosimeter was put inside a glass phantom and the TL dosimeters were placed on its surface, and the phantom was then irradiated for 5 min and 27 sec. The dose values at each point inside the vials were obtained from the magnetic resonance imaging of the phantom. For Monte Carlo simulations, all components of the irradiator were simulated and the dose values in a fine cubical lattice were calculated using tally F6. This study shows that PAGAT gel dosimetry results are in close agreement with the results of TL dosimetry, Monte Carlo simulations, and the results given by the vendor, and the percentage difference between the different methods is less than 4% at different points inside the phantom. According to the results obtained in this study, PAGAT gel dosimetry is a reliable method for dosimetry of the blood irradiator. The major advantage of this kind of dosimetry is that it is capable of 3D dose calculation. PMID:24423829
Accelerated Monte Carlo Methods for Coulomb Collisions
NASA Astrophysics Data System (ADS)
Rosin, Mark; Ricketson, Lee; Dimits, Andris; Caflisch, Russel; Cohen, Bruce
2014-03-01
We present a new highly efficient multi-level Monte Carlo (MLMC) simulation algorithm for Coulomb collisions in a plasma. The scheme, initially developed and used successfully for applications in financial mathematics, is applied here to kinetic plasmas for the first time. The method is based on a Langevin treatment of the Landau-Fokker-Planck equation and has a rich history derived from the works of Einstein and Chandrasekhar. The MLMC scheme successfully reduces the computational cost of achieving an RMS error ε in the numerical solution to collisional plasma problems from O(ε^-3), for the standard state-of-the-art Langevin and binary collision algorithms, to a theoretically optimal O(ε^-2) scaling, when used in conjunction with an underlying Milstein discretization of the Langevin equation. In the test case presented here, the method accelerates simulations by factors of up to 100. We summarize the scheme, present some tricks for improving its efficiency yet further, and discuss the method's range of applicability. Work performed for US DOE by LLNL under contract DE-AC52-07NA27344 and by UCLA under grant DE-FG02-05ER25710.
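The multilevel idea is easiest to see on a scalar SDE. The sketch below is a toy stand-in, not the paper's collisional solver: it estimates E[S_T] for geometric Brownian motion with a Milstein discretization, telescoping over levels with 2^l time steps and driving fine and coarse paths with the same Brownian increments so the level corrections have small variance (all parameters are hypothetical):

```python
import random, math

def mlmc_gbm_mean(levels, n_per_level, a=0.05, b=0.2, s0=1.0, T=1.0, seed=1):
    """Multilevel Monte Carlo estimate of E[S_T] for geometric Brownian
    motion dS = a S dt + b S dW, using a Milstein scheme on each level.
    Level l uses 2**l time steps; level corrections reuse the fine-grid
    Brownian increments on the coarse grid."""
    rng = random.Random(seed)

    def milstein_path(n_steps, dws):
        h = T / n_steps
        s = s0
        for dw in dws:
            # Milstein correction for GBM: 0.5*b^2*S*(dW^2 - h)
            s += a * s * h + b * s * dw + 0.5 * b * b * s * (dw * dw - h)
        return s

    total = 0.0
    for l in range(levels + 1):
        nf = 2 ** l
        h = T / nf
        acc = 0.0
        for _ in range(n_per_level):
            dws = [rng.gauss(0.0, math.sqrt(h)) for _ in range(nf)]
            fine = milstein_path(nf, dws)
            if l == 0:
                acc += fine
            else:
                # coarse path driven by pairwise sums of the fine increments
                coarse_dws = [dws[2 * i] + dws[2 * i + 1] for i in range(nf // 2)]
                acc += fine - milstein_path(nf // 2, coarse_dws)
        total += acc / n_per_level
    return total
```

The exact answer here is exp(aT), so the quality of the multilevel estimate is easy to check.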
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
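The fission matrix idea can be sketched in a few lines: tally (here, simply posit) a matrix F whose entry F[i][j] is the expected number of fission neutrons born in region i per source neutron started in region j, and take its dominant eigenpair as the converged source shape and k_eff. The 4-region numbers below are illustrative, not from the thesis:

```python
# Hypothetical fission matrix for a 4-region slab (illustrative numbers):
# F[i][j] = fission neutrons born in region i per source neutron in region j.
F = [[0.50, 0.20, 0.05, 0.01],
     [0.20, 0.50, 0.20, 0.05],
     [0.05, 0.20, 0.50, 0.20],
     [0.01, 0.05, 0.20, 0.50]]

def power_iterate(F, iters=200):
    """Power iteration on the fission matrix: returns (k_eff, source shape).
    The source is normalized to sum to 1, so the normalization constant
    converges to the dominant eigenvalue k_eff."""
    n = len(F)
    s = [1.0 / n] * n
    k = 1.0
    for _ in range(iters):
        new = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(new)
        s = [x / k for x in new]
    return k, s

k, s = power_iterate(F)
```

Replacing the slow stochastic source iteration with an eigenproblem for a tallied F is the essence of the acceleration; in practice F is itself noisy, which is where the filtering discussed in the thesis comes in.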
An assessment of the MCNP4C weight window
Christopher N. Culbertson; John S. Hendricks
1999-12-01
A new, enhanced weight window generator suite has been developed for MCNP version 4C. The new generator correctly estimates importances in either a user-specified, geometry-independent, orthogonal grid or in MCNP geometric cells. The geometry-independent option alleviates the need to subdivide the MCNP cell geometry for variance reduction purposes. In addition, the new suite corrects several pathologies in the existing MCNP weight window generator. The new generator is applied in a set of five variance reduction problems. The improved generator is compared with the weight window generator applied in MCNP4B. The benefits of the new methodology are highlighted, along with a description of its limitations. The authors also provide recommendations for utilization of the weight window generator.
Vectorized Monte Carlo methods for reactor lattice analysis
NASA Technical Reports Server (NTRS)
Brown, F. B.
1984-01-01
Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
Neutron spectral unfolding using the Monte Carlo method
NASA Astrophysics Data System (ADS)
O'Brien, Keran; Sanna, Robert
A solution to the neutron unfolding problem, without approximation or a priori assumptions as to spectral shape, has been devised, based on the Monte Carlo method, and its rate of convergence derived. By application to synthesized measurements with controlled and varying levels of error, the effect of measurement error has been investigated. This Monte Carlo method has also been applied to experimental stray neutron data from measurements inside a reactor containment vessel.
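A toy version of such an unfolding: given readings m_d = sum_g R[d][g] phi_g, start from a flat guess (no a priori spectral shape) and keep only random perturbations that reduce the misfit. The 4x4 response matrix and synthesized measurements are hypothetical stand-ins:

```python
import random

# Hypothetical 4-detector x 4-group response matrix and synthesized readings
R = [[1.0, 0.5, 0.2, 0.1],
     [0.2, 1.0, 0.5, 0.2],
     [0.1, 0.2, 1.0, 0.5],
     [0.05, 0.1, 0.2, 1.0]]
true_phi = [2.0, 1.0, 0.5, 0.25]          # the "unknown" spectrum
measured = [sum(R[d][g] * true_phi[g] for g in range(4)) for d in range(4)]

def misfit(phi):
    """Sum of squared residuals between folded spectrum and measurements."""
    return sum((sum(R[d][g] * phi[g] for g in range(4)) - measured[d]) ** 2
               for d in range(4))

def mc_unfold(n_trials=50_000, seed=3):
    """Monte Carlo unfolding with no assumed spectral shape: start flat,
    randomly perturb one group at a time, keep improvements only."""
    rng = random.Random(seed)
    phi = [1.0] * 4
    best = misfit(phi)
    for _ in range(n_trials):
        g = rng.randrange(4)
        old = phi[g]
        phi[g] = old * (1.0 + rng.uniform(-0.2, 0.2))  # stays positive
        c = misfit(phi)
        if c < best:
            best = c
        else:
            phi[g] = old
    return phi, best
```

With a well-conditioned response matrix the random search drives the misfit toward zero and recovers the falling spectral shape; adding noise to `measured` would reproduce the paper's study of measurement-error effects.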
Monte Carlo methods and applications in nuclear physics
Carlson, J.
1990-01-01
Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon interaction, charge and magnetic form factors, the Coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.
A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation
Li, Z.; Wang, K.
2012-07-01
Full core calculations are very useful and important in reactor physics analysis, especially in computing full core power distributions, optimizing refueling strategies, and analyzing the depletion of fuels. To reduce the computing time and accelerate the convergence, a method named the Response Matrix Monte Carlo (RMMC) method, based on analog Monte Carlo simulation, was used to calculate fixed source neutron transport problems in repeated structures. To make the calculations more accurate, we put forward the RMMC method based on non-analog Monte Carlo simulation and investigate how to use the RMMC method in criticality calculations. A new hybrid RMMC and MC (RMMC+MC) method is then put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of the calculations but also simulate geometries more complex than repeated structures alone. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC and RMMC+MC methods can efficiently reduce the computing time and the variance of the calculations. Finally, future research directions are mentioned and discussed at the end of this paper to make the RMMC and RMMC+MC methods more powerful. (authors)
Study of the Transition Flow Regime using Monte Carlo Methods
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1999-01-01
This final report for a NASA Cooperative Agreement presents a study of the transition flow regime using Monte Carlo methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.
Observations on variational and projector Monte Carlo methods
NASA Astrophysics Data System (ADS)
Umrigar, C. J.
2015-10-01
Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods, and the choices made in designing them, are discussed. Methods where the Monte Carlo walk is performed in a discrete space and those where it is performed in a continuous space are both considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.
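A concrete toy instance of the variational side: Metropolis-sample |psi_alpha|^2 for the 1-D harmonic oscillator trial function psi_alpha(x) = exp(-alpha x^2) and average the local energy E_L(x) = alpha + x^2 (1/2 - 2 alpha^2). At alpha = 1/2 the trial function is exact and the variance of E_L vanishes; away from it the estimate lies above the true ground-state energy 1/2, as the variational principle requires. This is my sketch, not code from the paper:

```python
import random, math

def vmc_energy(alpha, n_steps=200_000, step=1.0, seed=4):
    """Variational Monte Carlo for the 1-D harmonic oscillator
    (hbar = m = omega = 1) with trial function psi(x) = exp(-alpha*x^2).
    Metropolis walk sampling |psi|^2, averaging the local energy."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
        log_ratio = -2.0 * alpha * (x_new * x_new - x * x)
        if rng.random() < math.exp(min(0.0, log_ratio)):
            x = x_new
        e_sum += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return e_sum / n_steps
```

The zero-variance property at the exact trial function is the discrete-space/continuous-space-independent feature that makes good trial wavefunctions so valuable in both VMC and PMC.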
Frequency domain optical tomography using a Monte Carlo perturbation method
NASA Astrophysics Data System (ADS)
Yamamoto, Toshihiro; Sakamoto, Hiroki
2016-04-01
A frequency domain Monte Carlo method is applied to near-infrared optical tomography, where an intensity-modulated light source with a given modulation frequency is used to reconstruct optical properties. The frequency domain reconstruction technique allows for better separation between the scattering and absorption properties of inclusions, even for inverse problems that are otherwise ill-posed due to cross-talk between the scattering and absorption reconstructions. The frequency domain Monte Carlo calculation for light transport in an absorbing and scattering medium has thus far been analyzed mostly for the reconstruction of optical properties in simple layered tissues. This study applies a Monte Carlo calculation algorithm, which can handle complex-valued particle weights for solving a frequency domain transport equation, to optical tomography in two-dimensional heterogeneous tissues. The Jacobian matrix that is needed to reconstruct the optical properties is obtained by a first-order "differential operator" technique, which involves less variance than the conventional "correlated sampling" technique. The numerical examples in this paper indicate that the newly proposed Monte Carlo method provides reconstructed results for the scattering and absorption coefficients that compare favorably with the results obtained from conventional deterministic or Monte Carlo methods.
Multiple-time-stepping generalized hybrid Monte Carlo methods
NASA Astrophysics Data System (ADS)
Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo (GHMC) method. The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows for larger step sizes in the simulation of complex systems.
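For reference, the plain hybrid Monte Carlo building block that GHMC and GSHMC generalize: draw a fresh momentum, integrate Hamiltonian dynamics with a leapfrog scheme, and accept or reject on the energy error. The sketch below samples a standard normal target U(q) = q^2/2 rather than a molecular force field:

```python
import random, math

def hmc_sample(n_samples=5000, n_leapfrog=10, eps=0.2, seed=5):
    """Plain hybrid (Hamiltonian) Monte Carlo for U(q) = q^2/2, i.e. a
    standard normal target: leapfrog integration + Metropolis accept."""
    rng = random.Random(seed)
    grad_u = lambda q: q          # dU/dq for the quadratic potential
    q, out = 0.0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)   # momentum refresh
        q_new, p_new = q, p
        # leapfrog trajectory
        p_new -= 0.5 * eps * grad_u(q_new)
        for step in range(n_leapfrog):
            q_new += eps * p_new
            if step != n_leapfrog - 1:
                p_new -= eps * grad_u(q_new)
        p_new -= 0.5 * eps * grad_u(q_new)
        # accept/reject on the Hamiltonian (energy) error
        h_old = 0.5 * q * q + 0.5 * p * p
        h_new = 0.5 * q_new * q_new + 0.5 * p_new * p_new
        if rng.random() < min(1.0, math.exp(h_old - h_new)):
            q = q_new
        out.append(q)
    return out
```

Because the Metropolis step corrects the leapfrog discretization error exactly, the samples follow the target for any stable step size; the MTS variants in the paper push the step size further by splitting fast and slow forces.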
NASA Astrophysics Data System (ADS)
Demirkaya, Gokmen; Arin, Faruk; Selçuk, Nevin; Ayranci, Isil
2005-06-01
The Monte Carlo method was used to predict the incident radiative heat fluxes on the freeboard walls of the METU 0.3 MWt atmospheric bubbling fluidized bed combustor, based on data reported previously. The freeboard was treated as a rectangular enclosure with gray interior walls and a gray, absorbing, emitting and isotropically scattering medium. A Monte Carlo solver was developed, and its performance was assessed by comparing its predictions with those of the method-of-lines solution of the discrete ordinates method and with experimental measurements reported previously. Parametric studies were carried out to examine the effects of particle load and anisotropic scattering on the predicted incident radiative heat fluxes. The comparisons show that the Monte Carlo method reproduces the measured incident radiative heat fluxes reasonably well for the freeboard problem.
Bayesian Monte Carlo Method for Nuclear Data Evaluation
Koning, A.J.
2015-01-15
A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using TALYS. The result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an experiment based weight.
Monte Carlo method for magnetic impurities in metals
NASA Technical Reports Server (NTRS)
Hirsch, J. E.; Fye, R. M.
1986-01-01
The paper discusses a Monte Carlo algorithm to study properties of dilute magnetic alloys; the method can treat a small number of magnetic impurities interacting with the conduction electrons in a metal. Results for the susceptibility of a single Anderson impurity in the symmetric case show the expected universal behavior at low temperatures. Some results for two Anderson impurities are also discussed.
Mehranian, A.; Ay, M. R.; Alam, N. Riyahi; Zaidi, H.
2010-02-15
Purpose: The accurate prediction of x-ray spectra under typical conditions encountered in clinical x-ray examination procedures, and the assessment of factors influencing them, has been a long-standing goal of the diagnostic radiology and medical physics communities. In this work, the influence of anode surface roughness on diagnostic x-ray spectra is evaluated using MCNP4C-based Monte Carlo simulations. Methods: An image-based modeling method was used to create realistic models from surface-cracked anodes. An in-house computer program was written to model the geometric pattern of cracks and irregularities from digital images of the focal track surface in order to define the modeled anodes in the MCNP input file. To incorporate average roughness and mean crack depth into the models, the surface of the anodes was characterized by scanning electron microscopy and surface profilometry. It was found that the average roughness (R_a) in the most aged tube studied is about 50 μm. The correctness of MCNP4C in simulating diagnostic x-ray spectra was thoroughly verified by calling its Gaussian energy broadening card and comparing the simulated spectra with experimentally measured ones. The assessment of anode roughness involved the comparison of simulated spectra from deteriorated anodes with those simulated for perfectly plain anodes, considered as reference. From these comparisons, the variations in output intensity, half value layer (HVL), heel effect, and patient dose were studied. Results: An intensity loss of 4.5% and 16.8% was predicted for anodes aged by 5 and 50 μm deep cracks (50 kVp, 6 deg. target angle, and 2.5 mm Al total filtration). The variations in HVL were not significant, as the spectra were not hardened by more than 2.5%; however, the variation tended to increase with roughness. By deploying several point detector tallies along the anode-cathode direction and averaging exposure over them, it was found that for a 6 deg. anode, roughened by 50 μm deep cracks, the reduction in exposure is 14.9% and 13.1% for 70 and 120 kVp tube voltages, respectively. For the evaluation of patient dose, entrance skin radiation dose was calculated for typical chest x-ray examinations. It was shown that as anode roughness increases, patient entrance skin dose decreases, on average by about 15%. Conclusions: It was concluded that anode surface roughness can have a non-negligible effect on the output spectra of aged x-ray imaging tubes and its impact should be carefully considered in diagnostic x-ray imaging modalities.
Monte Carlo methods for light propagation in biological tissues.
Vinckenbosch, Laura; Lacaux, Céline; Tindel, Samy; Thomassin, Magalie; Obara, Tiphaine
2015-11-01
Light propagation in turbid media is driven by the equation of radiative transfer. We give a formal probabilistic representation of its solution in the framework of biological tissues and we implement algorithms based on Monte Carlo methods in order to estimate the quantity of light that is received by a homogeneous tissue when emitted by an optic fiber. A variance reduction method is studied and implemented, as well as a Markov chain Monte Carlo method based on the Metropolis-Hastings algorithm. The resulting estimation methods are then compared to the so-called Wang-Prahl (or Wang) method. Finally, the formal representation allows one to derive a non-linear optimization algorithm, close to Levenberg-Marquardt, that is used for the estimation of the scattering and absorption coefficients of the tissue from measurements. PMID:26362232
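A stripped-down version of such a simulation: photons enter a homogeneous turbid slab, fly exponentially distributed free paths, scatter isotropically with probability mu_s/mu_t, and are otherwise absorbed. The parameters are illustrative; the methods compared in the paper additionally track photon weight and anisotropic phase functions:

```python
import random, math

def transmitted_fraction(mu_a, mu_s, thickness, n_photons=20_000, seed=6):
    """Fraction of photons crossing a turbid slab (isotropic scattering,
    matched refractive indices, normal incidence)."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t if mu_t > 0 else 0.0
    transmitted = 0
    for _ in range(n_photons):
        z, uz = 0.0, 1.0                          # depth, direction cosine
        while True:
            # exponentially distributed free path (1 - U avoids log(0))
            z += uz * (-math.log(1.0 - rng.random()) / mu_t)
            if z >= thickness:
                transmitted += 1
                break
            if z <= 0.0:
                break                              # escaped the entry face
            if rng.random() >= albedo:
                break                              # absorbed
            uz = rng.uniform(-1.0, 1.0)            # isotropic: cos(theta) uniform
    return transmitted / n_photons
```

With mu_s = 0 the result reduces to the Beer-Lambert transmission exp(-mu_a * t), which makes a convenient sanity check; turning scattering on increases the transmitted fraction.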
The All Particle Monte Carlo method: Atomic data files
Rathkopf, J.A.; Cullen, D.E.; Perkins, S.T.
1990-11-06
Development of the All Particle Method, a project to simulate the transport of particles via the Monte Carlo method, has proceeded on two fronts: data collection and algorithm development. In this paper we report on the status of the data libraries. The data collection is nearly complete with the addition of electron, photon, and atomic data libraries to the existing neutron, gamma ray, and charged particle libraries. The contents of these libraries are summarized.
Bayesian Monte Carlo method for nuclear data evaluation
NASA Astrophysics Data System (ADS)
Koning, A. J.
2015-12-01
A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which makes it possible to set the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various different schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an EXFOR-based weight.
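The weighting scheme can be sketched with a one-parameter toy "model" standing in for TALYS and mock measurements standing in for EXFOR: draw parameter sets from a broad prior, weight each by exp(-chi^2/2) against the data, and report weighted moments. All numbers below are hypothetical:

```python
import random, math

# Mock "experimental" points: sigma(E) measurements with uncertainties
energies = [0.5, 1.0, 2.0]
exp_vals = [1.21, 0.74, 0.27]
exp_errs = [0.05, 0.05, 0.05]

def model(a, e):
    """One-parameter toy cross-section model: sigma(E) = a * exp(-E)."""
    return a * math.exp(-e)

def bayesian_mc(n_samples=5000, seed=7):
    """Bayesian Monte Carlo: sample the parameter from a broad prior,
    weight each 'random file' by exp(-chi^2/2) against the data, and
    return the weighted posterior mean and spread."""
    rng = random.Random(seed)
    wsum = wa = waa = 0.0
    for _ in range(n_samples):
        a = rng.uniform(0.5, 3.0)          # prior space of model solutions
        chi2 = sum(((model(a, e) - v) / s) ** 2
                   for e, v, s in zip(energies, exp_vals, exp_errs))
        w = math.exp(-0.5 * chi2)
        wsum += w
        wa += w * a
        waa += w * a * a
    mean = wa / wsum
    var = waa / wsum - mean * mean
    return mean, math.sqrt(var)
```

The weighted samples play the role of the paper's random files; keeping them, rather than only the moments, is what preserves full (non-Gaussian) uncertainty information.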
Uncertainties in external dosimetry: analytical vs. Monte Carlo method.
Behrens, R
2010-03-01
Over the years, the International Commission on Radiological Protection (ICRP) and other organisations have formulated recommendations regarding uncertainty in occupational dosimetry. The most practical and widely accepted recommendations are the trumpet curves. To check whether routine dosemeters comply with them, a Technical Report on uncertainties issued by the International Electrotechnical Commission (IEC) can be used. In this report, the analytical method is applied to assess the uncertainty of a dosemeter fulfilling an IEC standard. On the other hand, the Monte Carlo method can be used to assess the uncertainty. In this work, a direct comparison of the analytical and the Monte Carlo methods is performed using the same input data. It turns out that the analytical method generally overestimates the uncertainty by about 10-30 %. Therefore, the results often do not comply with the recommendations of the ICRP regarding uncertainty. The results of the more realistic uncertainty evaluation using the Monte Carlo method usually comply with the recommendations of the ICRP. This is confirmed by results seen in regular tests in Germany. PMID:19942627
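The comparison can be reproduced in miniature: propagate the same two input uncertainties once with the first-order ("analytical", GUM-style) formula and once by Monte Carlo sampling. The dosemeter-flavoured numbers are invented; for this mildly non-linear model the two answers agree to within a few per cent, with the gap growing as the relative input uncertainties grow:

```python
import random, math

def compare_uncertainty(n=100_000, seed=8):
    """Propagate uncertainty through y = x1/x2 (a hypothetical reading
    divided by a calibration factor) two ways: first-order analytical
    combination vs. Monte Carlo sampling of the same input data."""
    x1, u1 = 10.0, 0.5     # reading and its standard uncertainty
    x2, u2 = 2.0, 0.1      # calibration factor and its uncertainty
    # analytical (first-order) relative combination
    u_analytical = (x1 / x2) * math.sqrt((u1 / x1) ** 2 + (u2 / x2) ** 2)
    # Monte Carlo: sample the inputs, take the spread of the output
    rng = random.Random(seed)
    ys = [rng.gauss(x1, u1) / rng.gauss(x2, u2) for _ in range(n)]
    m = sum(ys) / n
    u_mc = math.sqrt(sum((y - m) ** 2 for y in ys) / (n - 1))
    return u_analytical, u_mc
```

The paper's point is the reverse situation: for the IEC dosemeter model the analytical route systematically overestimates, and the Monte Carlo spread is the more realistic number to compare against the ICRP trumpet curves.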
A new method for commissioning Monte Carlo treatment planning systems
NASA Astrophysics Data System (ADS)
Aljarrah, Khaled Mohammed
2005-11-01
The Monte Carlo method is an accurate method for solving numerical problems in different fields. It has been used for accurate radiation dose calculation for radiation treatment of cancer. However, the modeling of an individual radiation beam produced by a medical linear accelerator for Monte Carlo dose calculation, i.e., the commissioning of a Monte Carlo treatment planning system, has been the bottleneck for the clinical implementation of Monte Carlo treatment planning. In this study a new method has been developed to determine the parameters of the initial electron beam incident on the target for a clinical linear accelerator. The interaction of the initial electron beam with the accelerator target produces x rays and secondary charged particles. After successive interactions in the linac head components, the x-ray photons and the secondary charged particles interact with the patient's anatomy and deliver dose to the region of interest. The determination of the initial electron beam parameters is important for estimating the dose delivered to patients. These parameters, such as beam energy and radial intensity distribution, are usually estimated through a trial and error process. In this work an easy and efficient method was developed to determine these parameters. This was accomplished by comparing calculated 3D dose distributions, for a grid of assumed beam energies and radii in a water phantom, with measured data. Different cost functions were studied to choose the appropriate function for the data comparison. The beam parameters were then determined using this method. Under the assumption that linacs of the same type are identical in geometry and differ only in their initial phase space parameters, the results of this method can be considered source data for commissioning other machines of the same type.
Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen
2012-01-01
In this study the commissioning of a dose calculation algorithm in a currently used treatment planning system was performed, and the calculation accuracy of two methods available in the treatment planning system, i.e., collapsed cone convolution (CCC) and equivalent tissue air ratio (ETAR), was verified in tissue heterogeneities. For this purpose an inhomogeneous phantom (IMRT thorax phantom) was used, and dose curves obtained by the TPS (treatment planning system) were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed using EDR2 radiographic films within the phantom. The dose difference (DD) between the experimental results and the two calculation methods was obtained. Results indicate a maximum difference of 12% in the lung and 3% in the bone tissue of the phantom between the two methods, with the CCC algorithm showing more accurate depth dose curves in tissue heterogeneities. Simulation results show accurate dose estimation by MCNP4C in the soft tissue region of the phantom, and also better agreement than the ETAR method in bone and lung tissues. PMID:22973081
Cluster Monte Carlo methods for the FePt Hamiltonian
NASA Astrophysics Data System (ADS)
Lyberatos, A.; Parker, G. J.
2016-02-01
Cluster Monte Carlo methods for the classical spin Hamiltonian of FePt with long range exchange interactions are presented. We use a combination of the Swendsen-Wang (or Wolff) and Metropolis algorithms that satisfies the detailed balance condition and ergodicity. The algorithms are tested by calculating the temperature dependence of the magnetization, susceptibility and heat capacity of L10-FePt nanoparticles in a range including the critical region. The cluster models yield numerical results in good agreement within statistical error with the standard single-spin flipping Monte Carlo method. The variation of the spin autocorrelation time with grain size is used to deduce the dynamic exponent of the algorithms. Our cluster models do not provide a more accurate estimate of the magnetic properties at equilibrium.
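A minimal single-cluster (Wolff) update for the plain nearest-neighbour 2-D Ising model, which is the short-range limit of the Hamiltonian treated in the paper: grow a cluster of aligned spins with bond probability 1 - exp(-2J/T) and flip it as one move (J = 1 and the sizes and temperatures below are illustrative):

```python
import random, math

def wolff_magnetization(L=16, T=1.5, n_updates=400, seed=9):
    """Wolff single-cluster updates for the 2-D Ising model (J = 1,
    periodic boundaries, random start).  Returns the mean |magnetization|
    per spin over the second half of the updates."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    p_add = 1.0 - math.exp(-2.0 / T)        # bond probability 1 - exp(-2J/T)
    mags = []
    for sweep in range(n_updates):
        i, j = rng.randrange(L), rng.randrange(L)
        seed_spin = spins[i][j]
        cluster = {(i, j)}
        stack = [(i, j)]
        while stack:                         # grow cluster over aligned bonds
            x, y = stack.pop()
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = (x + dx) % L, (y + dy) % L
                if ((nx, ny) not in cluster and spins[nx][ny] == seed_spin
                        and rng.random() < p_add):
                    cluster.add((nx, ny))
                    stack.append((nx, ny))
        for x, y in cluster:                 # flip the whole cluster
            spins[x][y] = -seed_spin
        if sweep >= n_updates // 2:
            m = sum(sum(row) for row in spins) / (L * L)
            mags.append(abs(m))
    return sum(mags) / len(mags)
```

Below the critical temperature (T_c ≈ 2.27) the clusters span the lattice and the system orders in a handful of updates; the long-range exchange in FePt is exactly what breaks this simple bond construction and motivates the combined Swendsen-Wang/Metropolis scheme of the paper.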
Fixed-node diffusion Monte Carlo method for lithium systems
NASA Astrophysics Data System (ADS)
Rasch, K. M.; Mitas, L.
2015-07-01
We study lithium systems over a range of numbers of atoms, specifically the atomic anion, dimer, metallic cluster, and body-centered-cubic crystal, using the fixed-node diffusion Monte Carlo method. The focus is on analysis of the fixed-node errors of each system, and for that purpose we test several orbital sets in order to provide the most accurate nodal hypersurfaces. The calculations include both core and valence electrons in order to avoid any possible impact by pseudopotentials. To quantify the fixed-node errors, we compare our results to other highly accurate calculations, and wherever available, to experimental observations. The results for these Li systems show that the fixed-node diffusion Monte Carlo method achieves accurate total energies, recovers 96-99% of the correlation energy, and estimates binding energies with errors bounded by 0.1 eV/atom.
Comparison of deterministic and Monte Carlo methods in shielding design.
Oliveira, A D; Oliveira, C
2005-01-01
In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or to unknown geometrical conditions, and which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods perform in calculating low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shields have been defined, allowing comparison between the capabilities of the Monte Carlo and deterministic methods in day-by-day shielding calculations, using sensitivity analysis of significant parameters such as energy and geometrical conditions. PMID:16381723
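The build-up issue can be illustrated directly: a narrow-beam deterministic estimate exp(-mu*t) counts only uncollided photons, while a Monte Carlo run also counts the scattered ones, and the ratio of the two (an effective build-up factor) changes with slab thickness, which is why a factor extrapolated from one configuration can misestimate another. The slab model below is a toy with isotropic scattering, not MicroShield or MCNP:

```python
import random, math

def buildup_factor(mu_t, albedo, thickness, n=40_000, seed=11):
    """Monte Carlo estimate of an effective build-up factor for a slab:
    (all transmitted photons) / (uncollided ones).  The deterministic
    narrow-beam attenuation exp(-mu_t * t) gives only the denominator."""
    rng = random.Random(seed)
    unc = tot = 0
    for _ in range(n):
        z, uz, virgin = 0.0, 1.0, True
        while True:
            z += uz * (-math.log(1.0 - rng.random()) / mu_t)
            if z >= thickness:
                tot += 1
                unc += virgin                  # count uncollided crossings
                break
            if z <= 0.0:
                break                          # backscattered out
            if rng.random() >= albedo:
                break                          # absorbed
            virgin, uz = False, rng.uniform(-1.0, 1.0)  # isotropic scatter
    return tot / max(unc, 1)

# build-up grows with slab thickness, so a factor taken from one
# configuration misestimates another
b1 = buildup_factor(1.0, 0.5, 1.0)
b3 = buildup_factor(1.0, 0.5, 3.0)
```

This kind of sensitivity study, varying thickness, energy surrogate (here albedo), and geometry, is what the paper carries out with the real codes.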
MONTE CARLO ERROR ESTIMATION APPLIED TO NONDESTRUCTIVE ASSAY METHODS
R. ESTEP; ET AL
2000-06-01
Monte Carlo randomization of nuclear counting data into N replicate sets is the basis of a simple and effective method for estimating error propagation through complex analysis algorithms such as those using neural networks or tomographic image reconstructions. The error distributions of properly simulated replicate data sets mimic those of actual replicate measurements and can be used to estimate the standard deviation for an assay along with other statistical quantities. We have used this technique to estimate the standard deviation in radionuclide masses determined using the tomographic gamma scanner (TGS) and combined thermal/epithermal neutron (CTEN) methods. The effectiveness of this approach is demonstrated by a comparison of our Monte Carlo error estimates with the error distributions in actual replicate measurements and simulations of measurements. We found that the standard deviation estimated this way quickly converges to an accurate value on average and has a predictable error distribution similar to that of N actual repeat measurements. The main drawback of the Monte Carlo method is that N additional analyses of the data are required, which may be prohibitively time consuming with slow analysis algorithms.
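The replicate-randomization idea can be sketched in a few lines: resample each measured count from its counting statistics (here a normal approximation to Poisson, valid for counts much greater than 1), rerun the analysis on every replicate set, and take the spread of the results as the propagated uncertainty. The two-channel "assay" below is a toy stand-in for the TGS/CTEN algorithms, not the actual codes.

```python
import math, random

def replicate_std(counts, analysis, n_rep=2000, seed=1):
    """Randomize measured counts into n_rep replicate data sets (normal
    approximation to Poisson), run the analysis on each replicate, and
    return the mean and standard deviation of the analysis results."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_rep):
        replicate = [rng.gauss(c, math.sqrt(c)) for c in counts]
        results.append(analysis(replicate))
    mean = sum(results) / n_rep
    var = sum((x - mean) ** 2 for x in results) / (n_rep - 1)
    return mean, math.sqrt(var)

# Toy "assay" algorithm: net counts = gross - background.
mean, sd = replicate_std([400.0, 100.0], lambda d: d[0] - d[1])
```

For this linear toy analysis the replicate spread should approach the analytic value sqrt(400 + 100) = 22.4, but the same machinery works unchanged on analyses with no closed-form error propagation, which is the point of the method.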
Baker, R.S.; Filippone, W.F. (Dept. of Nuclear and Energy Engineering); Alcouffe, R.E.
1991-01-01
The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S{sub N}) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S{sub N} regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The fully coupled Monte Carlo/S{sub N} technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S{sub N} calculation is to be performed. The Monte Carlo and S{sub N} regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and group sources. The hybrid method provides a new way of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S{sub N} is well suited for by itself. The fully coupled Monte Carlo/S{sub N} method has been implemented in the S{sub N} code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and group sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S{sub N} code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic acceleration remains effective in accelerating the S{sub N} calculations. The Monte Carlo routines have been successfully vectorized, with approximately a factor of five increase in speed over the nonvectorized version. The hybrid method is capable of solving forward, inhomogeneous source problems in X-Y and R-Z geometries. This capability now includes multigroup problems involving upscatter and fission in non-highly multiplying systems. 8 refs., 8 figs., 1 tab.
On the efficiency of algorithms of Monte Carlo methods
NASA Astrophysics Data System (ADS)
Budak, V. P.; Zheltov, V. S.; Lubenchenko, A. V.; Shagalov, O. V.
2015-11-01
A numerical comparison of algorithms for solving the radiative transfer equation by the Monte Carlo method is performed for direct simulation and local estimation. The problems of radiative transfer through a turbid medium slab in the scalar and vector cases are considered, and the case of reflections from the boundaries of the medium is analyzed. The calculations are performed over a wide variation of the parameters of the medium. It is shown that, for the same accuracy, the calculation time of the local estimation method is one to two orders of magnitude shorter.
A multi-scale Monte Carlo method for electrolytes
NASA Astrophysics Data System (ADS)
Liang, Yihao; Xu, Zhenli; Xing, Xiangjun
2015-08-01
Artifacts arise in simulations of electrolytes using periodic boundary conditions (PBCs). We show that these artifacts originate from the periodic image charges and the constraint of charge neutrality inside the simulation box, both of which are unphysical from the viewpoint of real systems. To cure these problems, we introduce a multi-scale Monte Carlo (MC) method, where ions inside a spherical cavity are simulated explicitly, while ions outside are treated implicitly using a continuum theory. Using the method of Debye charging, we explicitly derive the effective interactions between ions inside the cavity, arising due to the fluctuations of ions outside. We find that these effective interactions consist of two types: (1) a constant cavity potential due to the asymmetry of the electrolyte, and (2) a reaction potential that depends on the positions of all ions inside. Combining grand canonical Monte Carlo (GCMC) with a recently developed fast algorithm based on the image charge method, we perform a multi-scale MC simulation of symmetric electrolytes and compare it with other simulation methods, including the PBC + GCMC method, as well as large-scale MC simulation. We demonstrate that our multi-scale MC method is capable of capturing the correct physics of a large system using a small-scale simulation.
Daures, J; Gouriou, J; Bordy, J M
2011-03-01
This work has been performed within the framework of the European Union ORAMED project (Optimisation of RAdiation protection for MEDical staff). The main goal of the project is to improve standards of protection for medical staff in procedures resulting in potentially high exposures and to develop methodologies for better assessing and reducing exposures to medical staff. Work Package WP2 is involved in the development of practical eye-lens dosimetry in interventional radiology. This study is complementary to the part of the ENEA report concerning the calculations with the MCNP-4C code of the conversion factors related to the operational quantity H(p)(3). In this study, a set of energy- and angular-dependent conversion coefficients (H(p)(3)/K(a)), in the newly proposed square cylindrical phantom made of ICRU tissue, have been calculated with the Monte Carlo codes PENELOPE and MCNP5. The H(p)(3) values have been determined in terms of absorbed dose, according to the definition of this quantity, and also with the kerma approximation as formerly reported in ICRU reports. At low photon energies (up to 1 MeV), the results obtained with the two methods are consistent. Nevertheless, large differences appear at higher energies, mainly due to the lack of electronic equilibrium, especially at small incidence angles. The values of the conversion coefficients obtained with the MCNP-4C code published by ENEA agree well with the kerma-approximation calculations obtained with PENELOPE. We also performed the same calculations with the code MCNP5 using two types of tallies: F6 for the kerma approximation and *F8 for estimating the absorbed dose, which is, as is known, due to secondary electrons. The PENELOPE and MCNP5 results agree for the kerma approximation and for the absorbed-dose calculation of H(p)(3), and prove that, for photon energies larger than 1 MeV, the transport of secondary electrons has to be taken into account. PMID:21242167
Stabilized multilevel Monte Carlo method for stiff stochastic differential equations
Abdulle, Assyr; Blumenthal, Adrian
2013-10-15
A multilevel Monte Carlo (MLMC) method for mean-square-stable stochastic differential equations with multiple scales is proposed. For such problems, which we call stiff, the performance of MLMC methods based on classical explicit methods deteriorates because the time-step restriction needed to resolve the fastest scales prevents exploiting all the levels of the MLMC approach. We show that by switching to explicit stabilized stochastic methods and balancing the stabilization procedure simultaneously with the hierarchical sampling strategy of MLMC methods, the computational cost for stiff systems is significantly reduced, while keeping the computational algorithm fully explicit and easy to implement. Numerical experiments on linear and nonlinear stochastic differential equations and on a stochastic partial differential equation illustrate the performance of the stabilized MLMC method and corroborate our theoretical findings.
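The hierarchical sampling that MLMC builds on can be illustrated with a plain (non-stabilized) telescoping estimator for E[X_T] of a geometric Brownian motion under Euler-Maruyama discretization; the stabilized integrators of the paper would replace the explicit update inside the loop. All parameter values here are illustrative, not taken from the paper.

```python
import math, random

def euler_pair(rng, T, level, x0=1.0, mu=0.05, sigma=0.2):
    """One coupled sample of geometric Brownian motion at two resolutions:
    a fine Euler-Maruyama path with 2**level steps and a coarse path with
    half as many, driven by the same Brownian increments."""
    nf = 2 ** level
    dt = T / nf
    xf = xc = x0
    dw_sum = 0.0
    for n in range(nf):
        dw = rng.gauss(0.0, math.sqrt(dt))
        xf += mu * xf * dt + sigma * xf * dw
        dw_sum += dw
        if n % 2 == 1:                      # two fine steps = one coarse step
            xc += mu * xc * (2 * dt) + sigma * xc * dw_sum
            dw_sum = 0.0
    return xf, (xc if level > 0 else 0.0)   # level 0 has no coarser partner

def mlmc_mean(T=1.0, max_level=4, samples=2000, seed=3):
    """Telescoping estimator E[P_L] ~ E[P_0] + sum_l E[P_l - P_(l-1)]:
    coarse levels carry the variance cheaply, fine levels correct the bias."""
    rng = random.Random(seed)
    total = 0.0
    for level in range(max_level + 1):
        diff_sum = 0.0
        for _ in range(samples):
            xf, xc = euler_pair(rng, T, level)
            diff_sum += xf - xc
        total += diff_sum / samples
    return total

est = mlmc_mean()   # exact answer is exp(mu*T) = exp(0.05) ~ 1.0513
```

The stiffness issue in the abstract appears here as the step-size restriction on the coarsest levels: a stiff drift would force dt small even at level 0, which is exactly what the stabilized integrators are designed to relax.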
Improved criticality convergence via a modified Monte Carlo iteration method
Booth, Thomas E; Gubernatis, James E
2009-01-01
Nuclear criticality calculations with Monte Carlo codes are normally done using a power iteration method to obtain the dominant eigenfunction and eigenvalue. In the last few years it has been shown that the power iteration method can be modified to obtain the first two eigenfunctions. This modified power iteration method directly subtracts out the second eigenfunction and thus only powers out the third and higher eigenfunctions. The result is a convergence rate to the dominant eigenfunction of |k{sub 3}|/k{sub 1} instead of |k{sub 2}|/k{sub 1}. One difficulty is that the second eigenfunction contains particles of both positive and negative weights that must be combined appropriately to maintain the second eigenfunction. Summing negative and positive weights can be done using point detector mechanics, but this can sometimes be quite slow. We show that an approximate cancellation scheme is sufficient to accelerate the convergence to the dominant eigenfunction. A second difficulty is that for some problems the Monte Carlo implementation of the modified power method has stability problems. We also show that a simple method deals with this effectively, if in an ad hoc manner.
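The baseline that the modified iteration improves on is ordinary power iteration, whose error contracts by |k{sub 2}|/|k{sub 1}| per step. A small matrix analogue (not the Monte Carlo transport implementation) makes that rate concrete:

```python
def power_iteration(A, v, iters):
    """Plain power iteration: repeatedly apply A and renormalize.
    The error decays like (|k2|/|k1|)**iters, so a small dominance
    ratio means fast convergence to the dominant eigenpair."""
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]
        lam = max(w, key=abs)          # largest-magnitude component
        v = [x / lam for x in w]       # renormalized iterate
    return lam, v

# 2x2 example with eigenvalues k1 = 5 and k2 = 2, so the convergence
# rate is |k2|/|k1| = 0.4 per iteration; the dominant eigenvector is (1, 1).
A = [[4.0, 1.0], [2.0, 3.0]]
k1, v1 = power_iteration(A, [1.0, 0.0], 50)
```

The modified method of the abstract deflates the k{sub 2} component out of the iterate each cycle, so the surviving error contracts at the faster |k{sub 3}|/|k{sub 1}| rate instead; the Monte Carlo difficulty is doing that deflation with positive- and negative-weight particles.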
A simple eigenfunction convergence acceleration method for Monte Carlo
Booth, Thomas E
2010-11-18
Monte Carlo transport codes typically use a power iteration method to obtain the fundamental eigenfunction. The standard convergence rate for the power iteration method is the ratio of the first two eigenvalues, that is, k{sub 2}/k{sub 1}. Modifications to the power method have accelerated the convergence by explicitly calculating the subdominant eigenfunctions as well as the fundamental. Calculating the subdominant eigenfunctions requires using particles of negative and positive weights and appropriately canceling the negative and positive weight particles. Incorporating both negative weights and a {+-} weight cancellation requires a significant change to current transport codes. This paper presents an alternative convergence acceleration method that does not require modifying the transport codes to deal with the problems associated with tracking and cancelling particles of {+-} weights. Instead, only positive weights are used in the acceleration method.
Zeinali-Rafsanjani, B; Mosleh-Shirazi, M A; Faghihi, R; Karbasi, S; Mosalaei, A
2015-01-01
To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, half-value layers (HVLs), percentage depth doses (PDDs) and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, the differences for the majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduction in run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for use in treatment planning and radiobiological modeling studies of superficial therapies, including chest-wall irradiation using kilovoltage beams. PMID:26170553
Analysis of real-time networks with monte carlo methods
NASA Astrophysics Data System (ADS)
Mauclair, C.; Durrieu, G.
2013-12-01
Communication networks in embedded systems are ever larger and more complex. A better understanding of the dynamics of these networks is necessary to use them at best and lower costs. Today's tools are able to compute upper bounds on the end-to-end delays that a packet sent through the network could suffer. However, in the case of asynchronous networks, these worst-case end-to-end delay (WEED) cases are rarely observed in practice or through simulations, because the situations that lead to worst-case scenarios are rare. A novel approach based on Monte Carlo methods is suggested to study the effects of asynchrony on performance.
Application of Monte Carlo methods in tomotherapy and radiation biophysics
NASA Astrophysics Data System (ADS)
Hsiao, Ya-Yun
Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The percentage depth doses and beam profiles computed using the source model agree within 2% with measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University).
Single- and double-strand break yields predicted by the proposed methodology are in good agreement (within 1%) with the results of published experimental and theoretical studies for 60Co gamma-rays and low-energy x-rays. The reported studies provide new information about the potential biological consequences of diagnostic x-rays and selected gamma-emitting radioisotopes used in brachytherapy for the treatment of cancer. The proposed methodology is computationally efficient and may also be useful in proton therapy, space applications or internal dosimetry.
ITER Neutronics Modeling Using Hybrid Monte Carlo/Deterministic and CAD-Based Monte Carlo Methods
Ibrahim, A.; Mosher, Scott W; Evans, Thomas M; Peplow, Douglas E.; Sawan, M.; Wilson, P.; Wagner, John C; Heltemes, Thad
2011-01-01
The immense size and complex geometry of the ITER experimental fusion reactor require the development of special techniques that can accurately and efficiently perform neutronics simulations with minimal human effort. This paper shows the effect of the hybrid Monte Carlo (MC)/deterministic techniques - Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) - in enhancing the efficiency of the neutronics modeling of ITER and demonstrates the applicability of coupling these methods with computer-aided-design-based MC. Three quantities were calculated in this analysis: the total nuclear heating in the inboard leg of the toroidal field coils (TFCs), the prompt dose outside the biological shield, and the total neutron and gamma fluxes over a mesh tally covering the entire reactor. The use of FW-CADIS in estimating the nuclear heating in the inboard TFCs resulted in a factor of ~ 275 increase in the MC figure of merit (FOM) compared with analog MC and a factor of ~ 9 compared with the traditional methods of variance reduction. By providing a factor of ~ 21 000 increase in the MC FOM, the radiation dose calculation showed how the CADIS method can be effectively used in the simulation of problems that are practically impossible using analog MC. The total flux calculation demonstrated the ability of FW-CADIS to simultaneously enhance the MC statistical precision throughout the entire ITER geometry. Collectively, these calculations demonstrate the ability of the hybrid techniques to accurately model very challenging shielding problems in reasonable execution times.
A monte carlo method for generating side chain structural ensembles.
Bhowmick, Asmit; Head-Gordon, Teresa
2015-01-01
We report a Monte Carlo side chain entropy (MC-SCE) method that uses a physical energy function inclusive of long-range electrostatics and a hydrophobic potential of mean force, coupled with both backbone variations and a backbone-dependent side chain rotamer library, to describe protein conformational ensembles. Using the MC-SCE method in conjunction with backbone variability, we can reliably determine the side chain rotamer populations derived from both room-temperature and cryogenically cooled X-ray crystallographic structures for CypA and H-Ras, and NMR J-coupling constants for CypA, Eglin-C, and the DHFR product binary complexes E:THF and E:FOL. Furthermore, we obtain near-perfect discrimination between a protein's native state ensemble and ensembles of misfolded structures for 55 different proteins, thereby generating far more competitive side chain packings for all of these proteins and their misfolded states. PMID:25482539
Frozen core method in auxiliary-field quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Purwanto, Wirawan; Zhang, Shiwei; Krakauer, Henry
2012-02-01
We present the implementation of the frozen-core approach in the phaseless auxiliary-field quantum Monte Carlo method (AFQMC). Since AFQMC random walks take place in a many-electron Hilbert space spanned by a chosen one-particle basis, this approach can be achieved without introducing additional approximations, such as pseudopotentials. In parallel to many-body quantum chemistry methods, tightly-bound inner electrons occupy frozen canonical orbitals, which are determined from a lower level of theory, e.g. Hartree-Fock or CASSCF. This provides significant computational savings over fully correlated all-electron treatments, while retaining excellent transferability and accuracy. Results for several systems will be presented. This includes the notoriously difficult Cr2 molecule, where comparisons can be made with near-exact results in small basis sets, as well as an initial implementation in periodic systems.
Importance sampling based direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Vedula, Prakash; Otten, Dustin
2010-11-01
We propose a novel and efficient approach, termed importance sampling based direct simulation Monte Carlo (ISDSMC), for the prediction of nonequilibrium flows via solution of the Boltzmann equation. Besides leading to a reduction in computational cost, ISDSMC also results in a reduction in statistical scatter compared to conventional direct simulation Monte Carlo (DSMC) and hence appears to be potentially useful for the prediction of a variety of flows, especially where the signal-to-noise ratio is small (e.g. microflows). In this particle-in-cell approach, the computational particles are initially assigned weights (or importance) based on constraints on generalized moments of velocity. Solution of the Boltzmann equation is achieved by use of (i) a streaming operator, which streams the computational particles, and (ii) a collision operator, where the representative collision pairs are selected stochastically based on particle weights via an acceptance-rejection algorithm. The performance of the ISDSMC approach is evaluated using analysis of three canonical microflows, namely (i) thermal Couette flow, (ii) velocity-slip Couette flow and (iii) Poiseuille flow. Our results based on ISDSMC indicate good agreement with those obtained from conventional DSMC methods. The potential advantages of this (ISDSMC) approach for granular flows are also demonstrated using simulations of homogeneous relaxation of a granular gas.
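The weighting idea at the heart of ISDSMC, carrying unequal particle weights so that samples concentrate where they matter, is the standard importance-sampling trick. The sketch below applies it to a scalar tail probability rather than to the Boltzmann equation, which keeps the variance-reduction effect visible in a few lines; the shift and sample counts are arbitrary choices.

```python
import math, random

def plain_mc(rng, n):
    """Naive estimate of the rare event P(Z > 3) for standard normal Z:
    almost every sample misses the region of interest."""
    return sum(1 for _ in range(n) if rng.gauss(0.0, 1.0) > 3.0) / n

def importance_mc(rng, n, shift=3.0):
    """Draw from the shifted proposal N(shift, 1) and carry the likelihood
    ratio N(0,1)/N(shift,1) as a particle weight, so samples concentrate
    where the integrand is nonzero and the scatter drops sharply."""
    total = 0.0
    for _ in range(n):
        z = rng.gauss(shift, 1.0)
        if z > 3.0:
            # exp(-shift*z + shift^2/2) is the exact Gaussian weight ratio
            total += math.exp(-shift * z + 0.5 * shift * shift)
    return total / n

rng = random.Random(5)
p_is = importance_mc(rng, 20000)    # true value is about 1.35e-3
```

With 20 000 samples the naive estimator sees only a few dozen hits, while the weighted estimator uses nearly every sample; the same variance argument motivates weighting DSMC collision pairs in low signal-to-noise flows.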
A modified Monte Carlo 'local importance function transform' method
Keady, K. P.; Larsen, E. W.
2013-07-01
The Local Importance Function Transform (LIFT) method uses an approximation of the contribution transport problem to bias a forward Monte-Carlo (MC) source-detector simulation [1-3]. Local (cell-based) biasing parameters are calculated from an inexpensive deterministic adjoint solution and used to modify the physics of the forward transport simulation. In this research, we have developed a new expression for the LIFT biasing parameter, which depends on a cell-average adjoint current to scalar flux (J{sup *}/{phi}{sup *}) ratio. This biasing parameter differs significantly from the original expression, which uses adjoint cell-edge scalar fluxes to construct a finite difference estimate of the flux derivative; the resulting biasing parameters exhibit spikes in magnitude at material discontinuities, causing the original LIFT method to lose efficiency in problems with high spatial heterogeneity. The new J{sup *}/{phi}{sup *} expression, while more expensive to obtain, generates biasing parameters that vary smoothly across the spatial domain. The result is an improvement in simulation efficiency. A representative test problem has been developed and analyzed to demonstrate the advantage of the updated biasing parameter expression with regards to solution figure of merit (FOM). For reference, the two variants of the LIFT method are compared to a similar variance reduction method developed by Depinay [4, 5], as well as MC with deterministic adjoint weight windows (WW). (authors)
Monte Carlo free energy calculations using electronic structure methods.
Matusek, Daniel R; Osborne, Sébastien; St-Amant, Alain
2008-04-21
The molecular mechanics-based importance sampling function (MMBIF) algorithm [R. Iftimie, D. Salahub, D. Wei, and J. Schofield, J. Chem. Phys. 113, 4852 (2000)] is extended to incorporate semiempirical electronic structure methods in the secondary Markov chain, creating a fully quantum mechanical Monte Carlo sampling method for simulations of reactive chemical systems which, unlike the MMBIF algorithm, does not require the generation of a system-specific force field. The algorithm is applied to calculating the potential of mean force for the isomerization reaction of HCN using thermodynamic integration. Constraints are implemented in the sampling using a modification of the SHAKE algorithm, including that of a fixed, arbitrary reaction coordinate. Simulation results show that sampling efficiency with the semiempirical secondary potential is often comparable in quality to force fields constructed using the methods suggested in the original MMBIF work. The semiempirical based importance sampling method presented here is a useful alternative to MMBIF sampling as it can be applied to systems for which no suitable MM force field can be constructed. PMID:18433193
Underwater Optical Wireless Channel Modeling Using Monte-Carlo Method
NASA Astrophysics Data System (ADS)
Saini, P. Sri; Prince, Shanthi
2011-10-01
At present, there is a lot of interest in the functioning of the marine environment. Unmanned or Autonomous Underwater Vehicles (UUVs or AUVs) are used in the exploration of underwater resources, pollution monitoring, disaster prevention, etc. Underwater, where radio waves do not propagate, acoustic communication is used. Underwater communication is, however, moving towards optical communication, which has higher bandwidth than acoustic communication but comparatively shorter range. Underwater Optical Wireless Communication (OWC) is mainly affected by absorption and scattering of the optical signal. In coastal waters, both inherent and apparent optical properties (IOPs and AOPs) are influenced by a wide array of physical, biological and chemical processes leading to optical variability. Scattering has two effects: attenuation of the signal and Inter-Symbol Interference (ISI); the ISI is, however, ignored in the present paper. Therefore, in order to have an efficient underwater OWC link, it is necessary to model the channel efficiently. In this paper, the underwater optical channel is modeled using the Monte-Carlo method, which provides the most general and most flexible technique for numerically solving the equations of radiative transfer. The attenuation coefficient of the light signal is studied as a function of the absorption (a) and scattering (b) coefficients. It has been observed that for pure sea water and for low-chlorophyll conditions the blue wavelength is absorbed least, whereas in a chlorophyll-rich environment the red wavelength is absorbed less than the blue and green wavelengths.
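A minimal version of such a channel model tracks photons through water with absorption coefficient a and scattering coefficient b, sampling exponential free paths against the beam attenuation c = a + b; for the direct (unscattered) beam this reproduces Beer-Lambert attenuation, which gives a closed-form check. The IOP values below are illustrative, not taken from the paper.

```python
import math, random

def beam_transmittance(rng, a, b, depth, n=20000):
    """Fraction of photons whose first sampled free path exceeds `depth`.
    Free paths are exponential with rate c = a + b, so both absorption (a)
    and scattering out of the collimated beam (b) remove photons."""
    c = a + b
    survivors = sum(1 for _ in range(n)
                    if -math.log(1.0 - rng.random()) / c >= depth)
    return survivors / n

rng = random.Random(7)
a, b, depth = 0.04, 0.02, 10.0         # illustrative IOPs [1/m], path [m]
t_mc = beam_transmittance(rng, a, b, depth)
t_exact = math.exp(-(a + b) * depth)   # Beer-Lambert for the direct beam
```

A full channel model would re-emit scattered photons along directions drawn from a phase function and score the multiply scattered light (and hence the ISI the paper sets aside); the direct-beam sketch above is only the attenuation term.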
Monte Carlo N-particle simulation of neutron-based sterilisation of anthrax contamination
Liu, B; Xu, J; Liu, T; Ouyang, X
2012-01-01
Objective: To simulate the neutron-based sterilisation of anthrax contamination with the Monte Carlo N-particle (MCNP) 4C code. Methods: Neutrons are elementary particles that have no charge. They are 20 times more effective than electrons or γ-rays in killing anthrax spores on surfaces and inside closed containers. Neutrons emitted from a 252Cf neutron source are in the 100 keV to 2 MeV energy range. A 2.5 MeV D-D neutron generator can create neutrons at up to 10^13 n s^-1 with current technology. All of this enables an effective and low-cost method of killing anthrax spores. Results: There is no further effect on neutron energy deposition in the anthrax sample when using a reflector thicker than its saturation thickness. Among the three reflecting materials tested in the MCNP simulation, paraffin is the best because it has the thinnest saturation thickness and is easy to machine. The MCNP radiation dose and fluence simulations also showed that the simulated neutron fluence needed to kill the anthrax spores agrees very well with previous analytical estimations. Conclusion: The MCNP simulation indicates that a 10 min neutron irradiation from a 0.5 g 252Cf neutron source or a 1 min neutron irradiation from a 2.5 MeV D-D neutron generator may kill all anthrax spores in a sample. This is a promising result because a 2.5 MeV D-D neutron generator output >10^13 n s^-1 should be attainable in the near future, which indicates that a D-D neutron generator could be used to sterilise anthrax contamination within several seconds. PMID:22573293
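The fluence arithmetic behind such estimates is straightforward: total emitted neutrons spread over a sphere at the sample distance. The numbers below are purely illustrative; the 252Cf specific neutron emission is a nominal textbook value of roughly 2.3e12 n/s per gram, and the source-to-sample distance is an assumption, neither taken from the study.

```python
import math

# Illustrative point-source fluence estimate (all inputs assumed):
S = 0.5 * 2.3e12          # 0.5 g of 252Cf at ~2.3e12 n/s per gram [n/s]
t = 10 * 60               # 10 min irradiation [s]
r = 5.0                   # assumed source-to-sample distance [cm]

# Neutrons per unit area at distance r, ignoring reflector and scattering.
fluence = S * t / (4.0 * math.pi * r ** 2)   # [n/cm^2]
```

This bare 1/(4*pi*r^2) estimate gives on the order of 2e12 n/cm^2; a reflector of saturation thickness, as modeled in the MCNP simulation, would raise the delivered fluence above this free-field value.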
Markov chain Monte Carlo methods: an introductory example
NASA Astrophysics Data System (ADS)
Klauenberg, Katy; Elster, Clemens
2016-02-01
When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method—powerful, flexible and widely applied. Here, a concise introduction is given, illustrated by a simple, typical example from metrology. The Metropolis–Hastings algorithm is the most basic and yet flexible MCMC method. Its underlying concepts are explained and the algorithm is given step by step. The few lines of software code required for its implementation invite interested readers to get started. Diagnostics to evaluate the performance and common algorithmic choices are illustrated to calibrate the Metropolis–Hastings algorithm for efficiency. Routine application of MCMC algorithms may be hindered currently by the difficulty to assess the convergence of MCMC output and thus to assure the validity of results. An example points to the importance of convergence and initiates discussion about advantages as well as areas of research. Available software tools are mentioned throughout.
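The "few lines of software code" the authors mention look roughly like this random-walk Metropolis-Hastings sampler; the target here is a standard normal so the result can be checked against known moments, and the step size is an arbitrary tuning choice of the kind the abstract says must be calibrated.

```python
import math, random

def metropolis(logpdf, x0, step, n, seed=11):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, step) and accept
    with probability min(1, p(x')/p(x)); the recorded chain follows p."""
    rng = random.Random(seed)
    x, chain = x0, []
    lp = logpdf(x)
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = logpdf(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = xp, lpp           # accept the proposal
        chain.append(x)               # rejected moves repeat the old state
    return chain

# Target: standard normal, via its log-density up to an additive constant.
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, step=1.0, n=50000)
m = sum(chain) / len(chain)
v = sum((x - m) ** 2 for x in chain) / len(chain)
```

The convergence caveat in the abstract is visible even here: successive states are correlated, so the effective sample size is well below 50 000 and diagnostics are needed before trusting the chain's summaries.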
Hierarchical Monte Carlo methods for fractal random fields
Elliott, F.W. Jr.; Majda, A.J.; Horntrop, D.J.
1995-11-01
Two hierarchical Monte Carlo methods for the generation of self-similar fractal random fields are compared and contrasted. The first technique, successive random addition (SRA), is currently popular in the physics community. Despite the intuitive appeal of SRA, rigorous mathematical reasoning reveals that SRA cannot be consistent with any stationary power-law Gaussian random field for any Hurst exponent; furthermore, there is an inherent ratio of largest to smallest putative scaling constant necessarily exceeding a factor of 2 for a wide range of Hurst exponents H, with 0.30 < H < 0.85. Thus, SRA is inconsistent with a stationary power-law fractal random field and would not be useful for problems that do not utilize additional spatial averaging of the velocity field. The second hierarchical method for fractal random fields has recently been introduced by two of the authors and relies on a suitable explicit multiwavelet expansion (MWE) with high-moment cancellation. This method is described briefly, including a demonstration that, unlike SRA, MWE is consistent with a stationary power-law random field over many decades of scaling and has low variance.
Neutronic analysis code for fuel assembly using a vectorized Monte Carlo method
Morimoto, Y.; Maruyama, H.; Ishii, K.; Aoyama, M. (Energy Research Lab.)
1989-12-01
A fuel assembly analysis code, VMONT, in which a multigroup neutron transport calculation is combined with a burnup calculation, has been developed for comprehensive design work. The neutron transport calculation is performed with a vectorized Monte Carlo method that achieves speeds >10 times those of a scalar Monte Carlo method. The validity of the VMONT code is shown through test calculations against continuous-energy Monte Carlo calculations and the PROTEUS tight-lattice experiment.
MARKOV CHAIN MONTE CARLO POSTERIOR SAMPLING WITH THE HAMILTONIAN METHOD
K. HANSON
2001-02-01
The Markov Chain Monte Carlo technique provides a means for drawing random samples from a target probability density function (pdf). MCMC allows one to assess the uncertainties in a Bayesian analysis described by a numerically calculated posterior distribution. This paper describes the Hamiltonian MCMC technique in which a momentum variable is introduced for each parameter of the target pdf. In analogy to a physical system, a Hamiltonian H is defined as a kinetic energy involving the momenta plus a potential energy φ, where φ is minus the logarithm of the target pdf. Hamiltonian dynamics allows one to move along trajectories of constant H, taking large jumps in the parameter space with relatively few evaluations of φ and its gradient. The Hamiltonian algorithm alternates between picking a new momentum vector and following such trajectories. The efficiency of the Hamiltonian method for multidimensional isotropic Gaussian pdfs is shown to remain constant at around 7% for up to several hundred dimensions. The Hamiltonian method handles correlations among the variables much better than the standard Metropolis algorithm. A new test, based on the gradient of φ, is proposed to measure the convergence of the MCMC sequence.
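A minimal sketch of one Hamiltonian MCMC update as described here (momentum refresh, constant-H trajectory via leapfrog integration, then accept/reject) might look as follows; this illustrates the general technique, not Hanson's implementation:

```python
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, eps=0.2, n_leapfrog=15, rng=None):
    """One Hamiltonian Monte Carlo update: draw a momentum, follow an
    approximately constant-H leapfrog trajectory, then accept/reject."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(x.shape)              # fresh momentum vector
    x_new, p_new = x.copy(), p.copy()
    # Leapfrog integration; the potential energy is phi = -log_prob
    p_new += 0.5 * eps * grad_log_prob(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += eps * p_new
        p_new += eps * grad_log_prob(x_new)
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_log_prob(x_new)
    # Metropolis test on the total energy H = kinetic + potential
    h_old = 0.5 * p @ p - log_prob(x)
    h_new = 0.5 * p_new @ p_new - log_prob(x_new)
    return x_new if np.log(rng.random()) < h_old - h_new else x

# Sample a 10-dimensional isotropic Gaussian, the test case used in the paper
rng = np.random.default_rng(0)
x = np.zeros(10)
chain = []
for _ in range(1500):
    x = hmc_step(x, lambda z: -0.5 * z @ z, lambda z: -z, rng=rng)
    chain.append(x.copy())
chain = np.array(chain)
```

Because the trajectory moves the whole parameter vector a long distance before the single accept/reject test, the acceptance rate stays high even in many dimensions, which is the efficiency advantage the abstract reports over plain Metropolis.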
Seriation in paleontological data using markov chain Monte Carlo methods.
Puolamäki, Kai; Fortelius, Mikael; Mannila, Heikki
2006-02-01
Given a collection of fossil sites with data about the taxa that occur in each site, the task in biochronology is to find good estimates for the ages or ordering of sites. We describe a full probabilistic model for fossil data. The parameters of the model are natural: the ordering of the sites, the origination and extinction times for each taxon, and the probabilities of different types of errors. We show that the posterior distributions of these parameters can be estimated reliably by using Markov chain Monte Carlo techniques. The posterior distributions of the model parameters can be used to answer many different questions about the data, including seriation (finding the best ordering of the sites) and outlier detection. We demonstrate the usefulness of the model and estimation method on synthetic data and on real data on large late Cenozoic mammals. As an example, for sites with a large number of occurrences of common genera, our methods give orderings whose correlation with geochronologic ages is 0.95. PMID:16477311
Spike inference from calcium imaging using sequential Monte Carlo methods.
Vogelstein, Joshua T; Watson, Brendon O; Packer, Adam M; Yuste, Rafael; Jedynak, Bruno; Paninski, Liam
2009-07-22
As recent advances in calcium sensing technologies facilitate simultaneous imaging of action potentials in neuronal populations, complementary analytical tools must also be developed to maximize the utility of this experimental paradigm. Although the observations here are fluorescence movies, the signals of interest (spike trains and/or time-varying intracellular calcium concentrations) are hidden. Inferring these hidden signals is often problematic due to noise, nonlinearities, slow imaging rate, and unknown biophysical parameters. We overcome these difficulties by developing sequential Monte Carlo methods (particle filters) based on biophysical models of spiking, calcium dynamics, and fluorescence. We show that even in simple cases, the particle filters outperform the optimal linear (i.e., Wiener) filter, both by obtaining better estimates and by providing error bars. We then relax a number of our model assumptions to incorporate nonlinear saturation of the fluorescence signal, as well as external stimulus and spike history dependence (e.g., refractoriness) of the spike trains. Using both simulations and in vitro fluorescence observations, we demonstrate temporal superresolution by inferring when within a frame each spike occurs. Furthermore, the model parameters may be estimated using expectation maximization with only a very limited amount of data (e.g., approximately 5-10 s or 5-40 spikes), without the requirement of any simultaneous electrophysiology or imaging experiments. PMID:19619479
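The particle-filter machinery described here is, at its core, sequential importance sampling with resampling. A generic bootstrap particle filter for a toy linear-Gaussian state-space model (a stand-in for the paper's biophysical spiking/calcium model, which is not reproduced here) can be sketched as:

```python
import numpy as np

def bootstrap_particle_filter(obs, n_particles=500, a=0.95, q=1.0, r=1.0, seed=0):
    """Bootstrap particle filter for the toy AR(1) state-space model
    x_t = a*x_{t-1} + N(0, q),  y_t = x_t + N(0, r)."""
    rng = np.random.default_rng(seed)
    particles = rng.standard_normal(n_particles)
    means = np.empty(len(obs))
    for t, y in enumerate(obs):
        # Propagate every particle through the assumed dynamics
        particles = a * particles + np.sqrt(q) * rng.standard_normal(n_particles)
        # Weight by the observation likelihood, normalized stably in log space
        logw = -0.5 * (y - particles) ** 2 / r
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = w @ particles
        # Multinomial resampling to combat weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=w)
    return means

# Demo on synthetic data drawn from the same model
rng = np.random.default_rng(1)
states = np.empty(300)
x = 0.0
for t in range(300):
    x = 0.95 * x + rng.standard_normal()
    states[t] = x
obs = states + rng.standard_normal(300)
filtered = bootstrap_particle_filter(obs)
```

The filtered posterior means should track the hidden states more closely than the raw noisy observations; the paper's version replaces this linear-Gaussian toy with nonlinear fluorescence and calcium dynamics.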
Medical Imaging Image Quality Assessment with Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Kalyvas, N. I.; Martini, Niki; Koukou, Vaia; Valais, I. G.; Kandarakis, I. S.
2015-09-01
The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and images were reconstructed with the STIR software for tomographic image reconstruction on a computing cluster. The PET scanner simulated in this study was the GE DiscoveryST. The plane source, consisting of a TLC plate, was simulated as a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the Modulation Transfer Function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE)-OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed using various subsets (3 to 21) and iterations (1 to 20), as well as various beta (hyper) parameter values. MTF values were found to increase up to the 12th iteration and to remain almost constant thereafter. The MTF improves when lower beta values are used. The simulated PET evaluation method based on the TLC plane source can also be useful in research for the further development of PET and SPECT scanners through GATE simulations.
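As background for the figure of merit used above: an MTF curve is, in essence, the normalized Fourier transform of a measured spread function. A hedged sketch of this standard computation (an illustration, not the authors' GATE/STIR pipeline) for a one-dimensional line spread function:

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_size_mm):
    """Estimate the Modulation Transfer Function as the normalized
    magnitude of the Fourier transform of a line spread function."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()                  # unit area
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]                     # MTF(0) = 1 by convention
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_size_mm)  # cycles/mm
    return freqs, mtf

# A Gaussian LSF with sigma = 2 mm sampled at 0.5 mm pixels; its MTF
# should follow the analytic form exp(-2 * pi^2 * sigma^2 * f^2)
x = np.arange(256) * 0.5
lsf = np.exp(-0.5 * ((x - 64.0) / 2.0) ** 2)
freqs, mtf = mtf_from_lsf(lsf, 0.5)
```

A narrower spread function yields a slower-falling MTF, which is why the abstract reads the iteration number and beta value directly off the MTF curves.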
Treatment planning aspects and Monte Carlo methods in proton therapy
NASA Astrophysics Data System (ADS)
Fix, Michael K.; Manser, Peter
2015-05-01
Over the last few years, interest in proton radiotherapy has been rapidly increasing. Protons provide superior physical properties compared with conventional radiotherapy using photons. These properties result in depth dose curves with a large dose peak at the end of the proton track, and the finite proton range allows sparing of the distally located healthy tissue. These properties offer increased flexibility in proton radiotherapy, but also increase the demand for accurate dose calculations. To carry out accurate dose calculations, an accurate and detailed characterization of the physical proton beam exiting the treatment head is first necessary for both currently available delivery techniques: scattered and scanned proton beams. Since Monte Carlo (MC) methods follow the particle track, simulating the interactions from first principles, this technique is perfectly suited to accurately model the treatment head. Nevertheless, careful validation of these MC models is necessary. While pencil beam algorithms provide the advantage of fast dose computations, they are limited in accuracy. In contrast, MC dose calculation algorithms overcome these limitations, and thanks to recent improvements in efficiency, these algorithms are expected to improve the accuracy of the calculated dose distributions and to be introduced into clinical routine in the near future.
Quantum Monte Carlo methods and lithium cluster properties
Owen, R.K.
1990-12-01
Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively, in good agreement with the experimental results shown in brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity to the anisotropic harmonic oscillator model shape for the given number of valence electrons.
Latent uncertainties of the precalculated track Monte Carlo method
Renaud, Marc-André; Seuntjens, Jan; Roberge, David
2015-01-15
Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction.
Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed an 807× efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508× for 16 MeV electrons in bone. Conclusions: The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.
The simulation of the recharging method of active medical implant based on Monte Carlo method
NASA Astrophysics Data System (ADS)
Kong, Xianyue; Song, Yong; Hao, Qun; Cao, Jie; Zhang, Xiaoyu; Dai, Pantao; Li, Wansong
2014-11-01
The recharging of Active Medical Implants (AMIs) is an important issue for their future application. In this paper, a method for recharging an active medical implant using a wearable incoherent light source is proposed. First, models of the recharging method are developed. Second, the recharging processes of the proposed method are simulated using the Monte Carlo (MC) method. Finally, some important conclusions are reached. The results indicate that the proposed method will lead to a convenient, safe and low-cost recharging method for AMIs, which will promote the application of this kind of implantable device.
Backward and Forward Monte Carlo Method in Polarized Radiative Transfer
NASA Astrophysics Data System (ADS)
Yong, Huang; Guo-Dong, Shi; Ke-Yong, Zhu
2016-03-01
In general, the Stokes vector cannot be calculated in reverse in vector radiative transfer. This paper presents a novel backward and forward Monte Carlo simulation strategy to study vector radiative transfer in a participating medium. A backward Monte Carlo process is used to calculate the ray trajectory and the endpoint of the ray. The Stokes vector is then obtained by a forward Monte Carlo process. A one-dimensional graded-index semitransparent medium was taken as the physical model, and polarized thermal emission in the medium was studied. The solution process for non-scattering, isotropic-scattering, and anisotropic-scattering media is discussed. The influence of the optical thickness and albedo on the Stokes vector is studied. The results show that the U- and V-components of the apparent Stokes vector are very small, but the Q-component of the apparent Stokes vector is relatively large and cannot be ignored.
Uncertainty analysis for fluorescence tomography with Monte Carlo method
NASA Astrophysics Data System (ADS)
Reinbacher-Köstinger, Alice; Freiberger, Manuel; Scharfetter, Hermann
2011-07-01
Fluorescence tomography seeks to image an inaccessible fluorophore distribution inside an object like a small animal by injecting light at the boundary and measuring the light emitted by the fluorophore. Optical parameters (e.g. the conversion efficiency or the fluorescence life-time) of certain fluorophores depend on physiologically interesting quantities like the pH value or the oxygen concentration in the tissue, which allows functional rather than just anatomical imaging. To reconstruct the concentration and the life-time from the boundary measurements, a nonlinear inverse problem has to be solved. It is, however, difficult to estimate the uncertainty of the reconstructed parameters in case of iterative algorithms and a large number of degrees of freedom. Uncertainties in fluorescence tomography applications arise from model inaccuracies, discretization errors, data noise and a priori errors. Thus, a Markov chain Monte Carlo method (MCMC) was used to consider all these uncertainty factors exploiting Bayesian formulation of conditional probabilities. A 2-D simulation experiment was carried out for a circular object with two inclusions. Both inclusions had a 2-D Gaussian distribution of the concentration and constant life-time inside of a representative area of the inclusion. Forward calculations were done with the diffusion approximation of Boltzmann's transport equation. The reconstruction results show that the percent estimation error of the lifetime parameter is by a factor of approximately 10 lower than that of the concentration. This finding suggests that lifetime imaging may provide more accurate information than concentration imaging only. The results must be interpreted with caution, however, because the chosen simulation setup represents a special case and a more detailed analysis remains to be done in future to clarify if the findings can be generalized.
Markov chain Monte Carlo posterior sampling with the Hamiltonian method.
Hanson, Kenneth M.
2001-01-01
A major advantage of Bayesian data analysis is that it provides a characterization of the uncertainty in the model parameters estimated from a given set of measurements, in the form of a posterior probability distribution. When the analysis involves a complicated physical phenomenon, the posterior may not be available in analytic form, but only calculable by means of a simulation code. In such cases, characterizing the uncertainty in the inferred model parameters requires sampling a numerically calculated posterior. An appealing way to explore the posterior, and hence characterize the uncertainty, is to employ the Markov Chain Monte Carlo technique. The goal of MCMC is to generate a random sequence of parameter samples x from a target pdf (probability density function) π(x). In Bayesian analysis, this sequence corresponds to a set of model realizations that follow the posterior distribution. There are two basic MCMC techniques. In Gibbs sampling, typically one parameter is drawn from its conditional pdf at a time, holding all others fixed. In the Metropolis algorithm, all the parameters can be varied at once. The parameter vector is perturbed from the current sequence point by adding a trial step drawn randomly from a symmetric pdf. The trial position is either accepted or rejected on the basis of the probability at the trial position relative to the current one. The Metropolis algorithm is often employed because of its simplicity. The aim of this work is to develop MCMC methods that are useful for large numbers of parameters n, say hundreds or more. In this regime the Metropolis algorithm can be unsuitable, because its efficiency drops as 0.3/n. The efficiency is defined as the reciprocal of the number of steps in the sequence needed to effectively provide a statistically independent sample from π.
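Of the two basic techniques mentioned, Gibbs sampling is easily illustrated on a zero-mean bivariate normal, where both conditional pdfs are available in closed form (an illustrative example, not from the paper):

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples=20000, seed=0):
    """Gibbs sampling of a zero-mean, unit-variance bivariate normal with
    correlation rho: each coordinate is drawn from its conditional pdf
    while the other is held fixed."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    cond_std = np.sqrt(1.0 - rho**2)
    out = np.empty((n_samples, 2))
    for i in range(n_samples):
        x = rng.normal(rho * y, cond_std)  # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.normal(rho * x, cond_std)  # y | x ~ N(rho*x, 1 - rho^2)
        out[i] = x, y
    return out

samples = gibbs_bivariate_normal(rho=0.8)
```

As the correlation approaches one, successive Gibbs draws become highly dependent, the same kind of efficiency loss the abstract describes for the Metropolis algorithm as the dimension grows.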
Kinetic Monte Carlo method applied to nucleic acid hairpin folding
NASA Astrophysics Data System (ADS)
Sauerwine, Ben; Widom, Michael
2011-12-01
Kinetic Monte Carlo on coarse-grained systems, such as nucleic acid secondary structure, is advantageous for being able to access behavior at long time scales, even minutes or hours. Transition rates between coarse-grained states depend upon intermediate barriers, which are not directly simulated. We propose an Arrhenius rate model and an intermediate energy model that incorporates the effects of the barrier between simulated states without enlarging the state space itself. Applying our Arrhenius rate model to DNA hairpin folding, we demonstrate improved agreement with experiment compared to the usual kinetic Monte Carlo model. Further improvement results from including rigidity of single-stranded stacking.
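The Arrhenius rate model described above plugs directly into a standard rejection-free (Gillespie-type) kinetic Monte Carlo loop. The following sketch, with a hypothetical two-state open/closed hairpin and made-up barrier heights, illustrates the general scheme (it is not the authors' DNA model):

```python
import math
import random

def kmc_trajectory(state, transitions, barriers, k0=1.0, kT=0.6, t_max=500.0, seed=1):
    """Rejection-free kinetic Monte Carlo with Arrhenius rates
    k = k0 * exp(-E_barrier / kT) on a small discrete state graph."""
    rng = random.Random(seed)
    t, path = 0.0, [state]
    while t < t_max:
        # Arrhenius rate for every transition out of the current state
        rates = [(s, k0 * math.exp(-barriers[(state, s)] / kT))
                 for s in transitions[state]]
        total = sum(r for _, r in rates)
        t += rng.expovariate(total)        # exponential waiting time
        u, acc = rng.random() * total, 0.0
        for s, r in rates:                 # pick a move proportional to its rate
            acc += r
            if u <= acc:
                state = s
                break
        path.append(state)
    return path

# Hypothetical two-state hairpin: closing is easier than opening
transitions = {"open": ["closed"], "closed": ["open"]}
barriers = {("open", "closed"): 1.0, ("closed", "open"): 2.0}
path = kmc_trajectory("open", transitions, barriers)
```

Because the waiting time is sampled from the total escape rate, the simulated clock advances by whole barrier-crossing events, which is what gives coarse-grained kinetic Monte Carlo its access to long time scales.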
NASA Astrophysics Data System (ADS)
Vargas Verdesoto, M. X.; Álvarez Romero, J. T.
2003-09-01
To characterize the ionization chamber BEV-CC01 as a standard of absorbed dose to water Dw at the SSDL-Mexico, the approach developed by the BIPM for 60Co gamma radiation [1] has been chosen. This requires the estimation of a factor kp, which stems from the perturbation introduced by the presence of the ionization chamber in the water phantom and from the finite size of the cavity. This factor is the product of four terms: ψw,c, (μen/ρ)w,c, (1 + μ'.ȳ)w,c and kcav. Two independent determinations are accomplished using a combination of the Monte Carlo code MCNP4C in ITS mode [2,3] and analytic methods: kp∥ = 1.1626 (uc = 0.90%) for the chamber axis parallel to the beam axis, and kp⊥ = 1.1079 (uc = 0.89%) for the chamber axis perpendicular to the beam axis. The variance reduction techniques splitting-Russian roulette, source biasing and forced photon collisions are employed in the simulations to improve the calculation efficiency. The energy fluence for the 60Co housing-source Picker C/9 is obtained by realistic Monte Carlo (MC) simulation; it is verified by comparison of MC-calculated and measured beam output air kerma factors and percent depth dose (PDD) curves in water. This spectrum is used as the input energy for a point source (74% is from primary photons and the remaining 26% from scattered radiation) in the determination of the kp factors. Details of the calculations are given together with the theoretical basis of the ionometric standard employed.
Review of quantum Monte Carlo methods and results for Coulombic systems
Ceperley, D.
1983-01-27
The various Monte Carlo methods for calculating ground state energies are briefly reviewed. Then a summary of the charged systems that have been studied with Monte Carlo is given. These include the electron gas, small molecules, a metal slab and many-body hydrogen.
Perfetti, Christopher M; Rearden, Bradley T
2014-01-01
This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.
Franke, B. C.; Prinja, A. K.
2013-07-01
The stochastic Galerkin method (SGM) is an intrusive technique for propagating data uncertainty in physical models. The method reduces the random model to a system of coupled deterministic equations for the moments of stochastic spectral expansions of result quantities. We investigate solving these equations using the Monte Carlo technique. We compare the efficiency with brute-force Monte Carlo evaluation of uncertainty, the non-intrusive stochastic collocation method (SCM), and an intrusive Monte Carlo implementation of the stochastic collocation method. We also describe the stability limitations of our SGM implementation. (authors)
Evaluation of measurement uncertainty and its numerical calculation by a Monte Carlo method
NASA Astrophysics Data System (ADS)
Wübbeler, Gerd; Krystek, Michael; Elster, Clemens
2008-08-01
The Guide to the Expression of Uncertainty in Measurement (GUM) is the de facto standard for the evaluation of measurement uncertainty in metrology. Recently, evaluation of measurement uncertainty has been proposed on the basis of probability density functions (PDFs) using a Monte Carlo method. The relation between this PDF approach and the standard method described in the GUM is outlined. The Monte Carlo method required for the numerical calculation of the PDF approach is described and illustrated by its application to two examples. The results obtained by the Monte Carlo method for the two examples are compared to the corresponding results when applying the GUM.
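The Monte Carlo method outlined in this abstract amounts to sampling the input quantities from their PDFs, propagating the draws through the measurement model, and summarizing the resulting output distribution. A minimal sketch with a toy measurement model (R = V/I, with assumed input values chosen for illustration):

```python
import numpy as np

def mc_uncertainty(model, means, stds, n=200_000, seed=0):
    """Monte Carlo propagation of uncertainty: sample the inputs, push the
    draws through the measurement model, summarize the output PDF."""
    rng = np.random.default_rng(seed)
    draws = [rng.normal(m, s, n) for m, s in zip(means, stds)]
    y = model(*draws)
    lo, hi = np.percentile(y, [2.5, 97.5])   # 95% coverage interval
    return y.mean(), y.std(ddof=1), (lo, hi)

# Toy measurement model: resistance from voltage and current, R = V / I
mean, std, interval = mc_uncertainty(lambda v, i: v / i,
                                     means=[10.0, 2.0], stds=[0.05, 0.01])
```

For this nearly linear model, the GUM's law of propagation gives u(R) = R * sqrt((0.05/10)^2 + (0.01/2)^2) ≈ 0.035, which the Monte Carlo result reproduces; the two approaches diverge when the model is strongly nonlinear or the output PDF is markedly non-Gaussian, which is the situation the PDF approach is designed for.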
An analysis method for evaluating gradient-index fibers based on Monte Carlo method
NASA Astrophysics Data System (ADS)
Yoshida, S.; Horiuchi, S.; Ushiyama, Z.; Yamamoto, M.
2011-05-01
We propose a numerical analysis method for evaluating gradient-index (GRIN) optical fibers using the Monte Carlo method. GRIN optical fibers are widely used in optical information processing and communication applications, such as image scanners, fax machines, optical sensors, and so on. An important factor that determines the performance of a GRIN optical fiber is the modulation transfer function (MTF). The MTF of a fiber is affected by manufacturing conditions such as temperature. Experimentally, the MTF is measured using a square-wave chart and is then calculated from the distribution of output intensity on the chart. In contrast, the conventional computational method evaluates the MTF from a spot diagram produced by an incident point light source, but its results differ greatly from those of the experiment. In this paper, we explain the manufacturing process that affects the performance of GRIN optical fibers and present a new evaluation method, based on the Monte Carlo method, that resembles the experimental system. We verified that the MTF obtained with this method matches the experimental results more closely than that of the conventional method.
Hariri, Sanaz; Shahriari, Majid
2010-01-01
Due to the intensive use of multileaf collimators (MLCs) in clinics, finding an optimum design for the leaves becomes essential. There are several studies that compare MLC systems, but none focuses on offering an optimum design using accurate methods like Monte Carlo. In this study, we describe some characteristics of MLC systems, including the leaf tip transmission, beam hardening, leakage radiation and penumbra width, for the Varian and Elekta 80-leaf MLCs using the MCNP4C code. The complex geometry of the leaves in these two common MLC systems was simulated. It was assumed that all of the MLC systems were mounted on a Varian accelerator, with the same thickness as Varian's and the same distance from the source. Considering the results obtained for the Varian and Elekta leaf designs, an optimum design was suggested that combines the advantages of three common MLC systems, and the simulation results for this proposed design were compared with those for the Varian and the Elekta. The leakage from the suggested design is 29.7% and 31.5% of that of the Varian and Elekta MLCs, respectively. In addition, the other calculated parameters of the proposed MLC leaf design were better than those of the two commercial ones. Although it shows a wider penumbra in comparison with the Varian and Elekta MLCs, a double-focusing design that takes the curved motion path of the leaves into account will solve the problem. The suggested leaf design combines advantages from three common vendors (Varian, Elekta and Siemens) and can show better results than each one. Using the results of this theoretical study may bring about superior practical outcomes. PMID:20717079
Efficient, Automated Monte Carlo Methods for Radiation Transport
Kong, Rong; Ambrose, Martin; Spanier, Jerome
2012-01-01
Monte Carlo simulations provide an indispensible model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k + 1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed. PMID:23226872
NASA Technical Reports Server (NTRS)
Firstenberg, H.
1971-01-01
The statistics of the Monte Carlo method are considered in relation to the interpretation of the NUGAM2 and NUGAM3 computer code results. A numerical experiment using the NUGAM2 code is presented and the results are statistically interpreted.
Frequency-domain deviational Monte Carlo method for linear oscillatory gas flows
NASA Astrophysics Data System (ADS)
Ladiges, Daniel R.; Sader, John E.
2015-10-01
Oscillatory non-continuum low Mach number gas flows are often generated by nanomechanical devices in ambient conditions. These flows can be simulated using a range of particle-based Monte Carlo techniques, which in their original form operate exclusively in the time-domain. Recently, a frequency-domain weight-based Monte Carlo method was proposed [D. R. Ladiges and J. E. Sader, "Frequency-domain Monte Carlo method for linear oscillatory gas flows," J. Comput. Phys. 284, 351-366 (2015)] that exhibits superior statistical convergence when simulating oscillatory flows. This previous method used the Bhatnagar-Gross-Krook (BGK) kinetic model and contains a "virtual-time" variable to maintain the inherent time-marching nature of existing Monte Carlo algorithms. Here, we propose an alternative frequency-domain deviational Monte Carlo method that facilitates the use of a wider range of molecular models and more efficient collision/relaxation operators. We demonstrate this method with oscillatory Couette flow and the flow generated by an oscillating sphere, utilizing both the BGK kinetic model and hard sphere particles. We also discuss how oscillatory motion of arbitrary time-dependence can be simulated using computationally efficient parallelization. As in the weight-based method, this deviational frequency-domain Monte Carlo method is shown to offer improved computational speed compared to the equivalent time-domain technique.
NASA Astrophysics Data System (ADS)
Lodwick, Camille J.
This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescent (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV 109Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, and a Modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that Modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in calcium content (by mass) of the anthropometric leg phantom's cortical bone was shown to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001).
Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate the lead content of a human leg by up to 4%. The patellar bone structure in which the fluorescent photons originate was found to vary dramatically with measurement angle. The relative contribution of the lead signal from the patella declined from 65% to 27% when rotated 30°. However, rotation of the source-detector about the patella from 0 to 45° demonstrated no significant effect on the net K XRF response at the knee.
Kasesaz, Y; Khalafi, H; Rahmani, F
2013-12-01
Optimization of the Beam Shaping Assembly (BSA) has been performed using the MCNP4C Monte Carlo code to shape the 2.45 MeV neutrons that are produced in the D-D neutron generator. The optimal design of the BSA has been chosen by considering in-air figures of merit (FOM); it consists of 70 cm Fluental as a moderator, 30 cm Pb as a reflector, 2 mm (6)Li as a thermal neutron filter and 2 mm Pb as a gamma filter. The neutron beam can be evaluated by in-phantom parameters, from which the therapeutic gain can be derived. Direct evaluation of both sets of FOMs (in-air and in-phantom) is very time consuming. In this paper a Response Matrix (RM) method has been suggested to reduce the computing time. This method is based on considering the neutron spectrum at the beam exit and calculating the contribution of various dose components in the phantom to construct the Response Matrix. Results show good agreement between direct calculation and the RM method. PMID:23954283
Krylov-Projected Quantum Monte Carlo Method.
Blunt, N S; Alavi, Ali; Booth, George H
2015-07-31
We present an approach to the calculation of arbitrary spectral, thermal, and excited state properties within the full configuration interaction quantum Monte Carlo framework. This is achieved via an unbiased projection of the Hamiltonian eigenvalue problem into a space of stochastically sampled Krylov vectors, thus enabling the calculation of real-frequency spectral and thermal properties and avoiding explicit analytic continuation. We use this approach to calculate temperature-dependent properties and one- and two-body spectral functions for various Hubbard models, as well as isolated excited states in ab initio systems. PMID:26274406
NASA Astrophysics Data System (ADS)
Wang, Mengkuo
In particle transport computations, the Monte Carlo simulation method is a widely used algorithm. There are several Monte Carlo codes available that perform particle transport simulations. However, the geometry packages and geometric modeling capability of Monte Carlo codes are limited, as they cannot handle complicated geometries made up of complex surfaces. Previous research exists that takes advantage of the modeling capabilities of CAD software. The two major approaches are the Converter approach and the CAD engine based approach. By carefully analyzing the strategies and algorithms of these two approaches, the CAD engine based approach has been identified as the more promising approach. Though currently the performance of this approach is not satisfactory, there is room for improvement. The development and implementation of an improved CAD based approach is the focus of this thesis. Algorithms to accelerate the CAD engine based approach are studied. The major acceleration algorithm is the Oriented Bounding Box algorithm, which is used in computer graphics. The difference in application between computer graphics and particle transport has been considered and the algorithm has been modified for particle transport. The major work of this thesis has been the development of the MCNPX/CGM code and the testing, benchmarking and implementation of the acceleration algorithms. MCNPX is a Monte Carlo code and CGM is a CAD geometry engine. A facet representation of the geometry provided the least slowdown of the Monte Carlo code. The CAD model generates the facet representation. The Oriented Bounding Box algorithm was the fastest acceleration technique adopted for this work. The slowdown of MCNPX/CGM relative to MCNPX was reduced to a factor of 3 when the facet model is used. MCNPX/CGM has been successfully validated against test problems in medical physics and a fusion energy device.
MCNPX/CGM gives exactly the same results as the standard MCNPX when an MCNPX geometry model is available. For the case of the complicated fusion device, the stellarator, the MCNPX/CGM results closely match a one-dimensional model calculation performed by the ARIES team.
APR1400 LBLOCA uncertainty quantification by Monte Carlo method and comparison with Wilks' formula
Hwang, M.; Bae, S.; Chung, B. D.
2012-07-01
An analysis of the uncertainty quantification for the PWR LBLOCA by Monte Carlo calculation has been performed and compared with the tolerance level determined by Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LBLOCA accident were determined from the PIRT results of the BEMUSE project. The Monte Carlo method shows that the 95th percentile PCT value can be obtained reliably with a 95% confidence level using Wilks' formula. The extra margin of the Wilks' formula bound over the true 95th percentile PCT from the Monte Carlo method was rather large. Even using the 3rd-order formula, the value calculated with Wilks' formula is nearly 100 K over the true value. It is shown that, with the ever increasing computational capability, the Monte Carlo method is accessible for nuclear power plant safety analysis within a realistic time frame. (authors)
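The order statistics behind Wilks' formula are easy to check numerically. The sketch below is illustrative only: the one-sided Wilks confidence is computed for the classic 95/95 run counts, and the coverage of the 3rd-order bound is verified against a hypothetical PCT distribution (a normal with mean 1200 K and standard deviation 50 K, not a value from the paper):

```python
import math
import random

def wilks_confidence(n, k, p=0.95):
    """Confidence that the k-th largest of n random runs bounds the
    p-quantile of the output distribution (one-sided Wilks formula)."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(n - k + 1))

# Classic 95/95 run counts: 59 runs (1st order), 124 runs (3rd order)
print(wilks_confidence(59, 1))   # ~0.951
print(wilks_confidence(124, 3))  # ~0.950

# Empirical check with a hypothetical PCT distribution: N(1200 K, 50 K)
rng = random.Random(1)
true_p95 = 1200 + 1.6449 * 50          # true 95th percentile
trials, covered = 2000, 0
for _ in range(trials):
    runs = sorted(rng.gauss(1200, 50) for _ in range(124))
    if runs[-3] >= true_p95:           # 3rd-largest run is the Wilks bound
        covered += 1
print(covered / trials)                # ~0.95 coverage, as the formula predicts
```

The simulated coverage matches the analytic confidence; the (typically large) gap between the bound and the true percentile is the "extra margin" the abstract refers to.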
Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement
Siswantoro, Joko; Idrus, Bahari
2014-01-01
Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes at a high computational cost. Furthermore, some of the volume measurement methods based on 3D reconstruction have low accuracy. Another method for measuring the volume of objects uses the Monte Carlo method, which performs volume measurement using random points. The Monte Carlo method only requires information on whether random points fall inside or outside an object and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of a food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provides high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method. PMID:24892069
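The core idea, that only inside/outside information is needed, fits in a few lines. This is a minimal sketch of plain Monte Carlo volume integration (without the paper's camera pipeline or heuristic adjustment), demonstrated on a unit sphere whose true volume is known:

```python
import random

def mc_volume(inside, bounds, n=200_000, seed=0):
    """Estimate a volume by sampling uniform points in a bounding box
    and counting the fraction that lands inside the object."""
    rng = random.Random(seed)
    (x0, x1), (y0, y1), (z0, z1) = bounds
    box = (x1 - x0) * (y1 - y0) * (z1 - z0)
    hits = sum(
        inside(rng.uniform(x0, x1), rng.uniform(y0, y1), rng.uniform(z0, z1))
        for _ in range(n)
    )
    return box * hits / n

# Unit sphere inside a 2x2x2 box; true volume is 4*pi/3 ~ 4.18879
est = mc_volume(lambda x, y, z: x * x + y * y + z * z <= 1.0,
                ((-1, 1), (-1, 1), (-1, 1)))
print(est)  # ~4.19; the statistical error at n = 200,000 is about +/-0.01
```

In the paper the `inside` test is replaced by a check against the five binary camera images, which is exactly why no 3D reconstruction is needed.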
Ultracold atoms at unitarity within quantum Monte Carlo methods
Morris, Andrew J.; Lopez Rios, P.; Needs, R. J.
2010-03-15
Variational and diffusion quantum Monte Carlo (VMC and DMC) calculations of the properties of the zero-temperature fermionic gas at unitarity are reported. Our study differs from earlier ones mainly in that we have constructed more accurate trial wave functions and used a larger system size, we have studied the dependence of the energy on the particle density and well width, and we have achieved much smaller statistical error bars. The correct value of the universal ratio of the energy of the interacting to that of the noninteracting gas, {xi}, is still a matter of debate. We find DMC values of {xi} of 0.4244(1) with 66 particles and 0.4339(1) with 128 particles. The spherically averaged pair-correlation functions, momentum densities, and one-body density matrices are very similar in VMC and DMC, which suggests that our results for these quantities are very accurate. We find, however, some differences between the VMC and DMC results for the two-body density matrices and condensate fractions, which indicates that these quantities are more sensitive to the quality of the trial wave function. Our best estimate of the condensate fraction of 0.51 is smaller than the values from earlier quantum Monte Carlo calculations.
A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT
Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan; Burrows, Adam; Dolence, Joshua C.; Loeffler, Frank; Schnetter, Erik
2012-08-20
Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.
Heterogeneity in ultrathin films simulated by Monte Carlo method
NASA Astrophysics Data System (ADS)
Sun, Jiebing; Hannon, James B.; Kellogg, Gary L.; Pohl, Karsten
2007-03-01
The 3D composition profile of ultra-thin Pd films on Cu(001) has been experimentally determined using low energy electron microscopy (LEEM) [1]. Quantitative measurements of the alloy concentration profile near steps show that the Pd distribution in the third layer is heterogeneous due to step overgrowth during Pd deposition. Interestingly, the Pd distribution in the second layer is also heterogeneous, and appears to be correlated with the distribution in the first layer. We describe Monte Carlo simulations that show that the correlation is due to Cu-Pd attraction, and that the second-layer Pd is, in fact, laterally equilibrated. By comparing measured and simulated concentration profiles, we can estimate this attraction within a simple bond counting model. [1] J. B. Hannon, J. Sun, K. Pohl, G. L. Kellogg, Phys. Rev. Lett. 96, 246103 (2006)
Monte Carlo Methods to Model Radiation Interactions and Induced Damage
NASA Astrophysics Data System (ADS)
Muñoz, Antonio; Fuss, Martina C.; Cortés-Giraldo, M. A.; Incerti, Sébastien; Ivanchenko, Vladimir; Ivanchenko, Anton; Quesada, J. M.; Salvat, Francesc; Champion, Christophe; Gómez-Tejedor, Gustavo García
This review is devoted to the analysis of some Monte Carlo (MC) simulation programmes which have been developed to describe radiation interaction with biologically relevant materials. Current versions of the MC codes Geant4 (GEometry ANd Tracking 4), PENELOPE (PENetration and Energy Loss of Positrons and Electrons), EPOTRAN (Electron and POsitron TRANsport), and LEPTS (Low-Energy Particle Track Simulation) are described. Main features of each model, such as the type of radiation considered, the energy range covered by primary and secondary particles, the types of interactions included in the simulation, and the target geometries considered, are discussed. Special emphasis lies on recent developments that, together with (still emerging) new databases that include adequate data for biologically relevant materials, bring us continuously closer to a realistic, physically meaningful description of radiation damage in biological tissues.
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I
2014-06-15
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation rates on the order of 10{sup 7} x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation.
Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. 
To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in highperformance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and “sparse sampling” will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments. 
Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
A New Method for the Calculation of Diffusion Coefficients with Monte Carlo
NASA Astrophysics Data System (ADS)
Dorval, Eric
2014-06-01
This paper presents a new Monte Carlo-based method for the calculation of diffusion coefficients. One distinctive feature of this method is that it does not resort to the computation of transport cross sections directly, although their functional form is retained. Instead, a special type of tally derived from a deterministic estimate of Fick's Law is used for tallying the total cross section, which is then combined with a set of other standard Monte Carlo tallies. Some properties of this method are presented by means of numerical examples for a multi-group 1-D implementation. Calculated diffusion coefficients are in general good agreement with values obtained by other methods.
A Monte Carlo Synthetic-Acceleration Method for Solving the Thermal Radiation Diffusion Equation
Evans, Thomas M; Mosher, Scott W; Slattery, Stuart
2014-01-01
We present a novel synthetic-acceleration based Monte Carlo method for solving the equilibrium thermal radiation diffusion equation in three dimensions. The algorithm performance is compared against traditional solution techniques using a Marshak benchmark problem and a more complex multiple material problem. Our results show that not only can our Monte Carlo method be an effective solver for sparse matrix systems, but also that it performs competitively with deterministic methods including preconditioned Conjugate Gradient while producing numerically identical results. We also discuss various aspects of preconditioning the method and its general applicability to broader classes of problems.
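As a flavor of how a random-walk process can act as a sparse linear solver, the sketch below implements the classic Neumann-Ulam forward-walk scheme, not the authors' synthetic-acceleration method, on a small Jacobi-split system; the matrix and sample counts are illustrative:

```python
import random

def mc_solve(H, c, n_walks=20_000, max_steps=100, seed=2):
    """Neumann-Ulam forward random-walk estimate of x = H x + c.
    Requires sum_j |H[i][j]| <= 1 in every row i: the walk jumps to column j
    with probability |H[i][j]|, absorbs with the leftover probability, and the
    weight only picks up the sign of the entry used."""
    rng = random.Random(seed)
    x = []
    for i in range(len(c)):
        total = 0.0
        for _ in range(n_walks):
            state, w, est = i, 1.0, c[i]
            for _ in range(max_steps):
                r, acc, nxt = rng.random(), 0.0, -1
                for j, h in enumerate(H[state]):
                    acc += abs(h)
                    if r < acc:
                        nxt = j
                        break
                if nxt < 0:                       # absorbed
                    break
                w *= 1.0 if H[state][nxt] >= 0 else -1.0
                state = nxt
                est += w * c[state]               # collision estimator
            total += est
        x.append(total / n_walks)
    return x

# Jacobi split of A x = b with A = [[4, 1], [1, 3]], b = [1, 2]:
# H = I - D^-1 A, c = D^-1 b; exact solution is [1/11, 7/11]
H = [[0.0, -0.25], [-1 / 3, 0.0]]
c = [0.25, 2 / 3]
x = mc_solve(H, c)
print(x)  # close to [0.0909..., 0.6363...]
```

Each walk samples one term sequence of the Neumann series c + Hc + H^2 c + ...; averaging over walks gives an unbiased estimate of each solution component.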
NASA Astrophysics Data System (ADS)
Jacqmin, Dustin J.
Monte Carlo modeling of radiation transport is considered the gold standard for radiotherapy dose calculations. However, highly accurate Monte Carlo calculations are very time consuming and the use of Monte Carlo dose calculation methods is often not practical in clinical settings. With this in mind, a variation on the Monte Carlo method called macro Monte Carlo (MMC) was developed in the 1990s for electron beam radiotherapy dose calculations. To accelerate the simulation process, the electron MMC method used larger step sizes in regions of the simulation geometry where the size of the region was large relative to the size of a typical Monte Carlo step. These large steps were pre-computed using conventional Monte Carlo simulations and stored in a database featuring many step sizes and materials. The database was loaded into memory by a custom electron MMC code and used to transport electrons quickly through a heterogeneous absorbing geometry. The purpose of this thesis work was to apply the same techniques to proton radiotherapy dose calculation and light propagation Monte Carlo simulations. First, the MMC method was implemented for proton radiotherapy dose calculations. A database composed of pre-computed steps was created using MCNPX for many materials and beam energies. The database was used by a custom proton MMC code called PMMC to transport protons through a heterogeneous absorbing geometry. The PMMC code was tested against MCNPX for a number of different proton beam energies and geometries and proved to be accurate and much more efficient. The MMC method was also implemented for light propagation Monte Carlo simulations. The widely accepted Monte Carlo for multilayered media (MCML) code was modified to incorporate the MMC method. The original MCML uses basic scattering and absorption physics to transport optical photons through multilayered geometries.
The MMC version of MCML was tested against the original MCML code using a number of different geometries and proved to be just as accurate and more efficient. This work has the potential to accelerate light modeling for both photodynamic therapy and near-infrared spectroscopic imaging.
Markov chain Monte Carlo method for tracking myocardial borders
NASA Astrophysics Data System (ADS)
Janiczek, Robert; Ray, N.; Acton, Scott T.; Roy, R. J.; French, Brent A.; Epstein, F. H.
2005-03-01
Cardiac magnetic resonance studies have led to a greater understanding of the pathophysiology of ischemic heart disease. Manual segmentation of myocardial borders, a major task in the data analysis of these studies, is a tedious and time consuming process subject to observer bias. Automated segmentation reduces the time needed to process studies and removes observer bias. We propose an automated segmentation algorithm that uses an active contour to capture the endo- and epicardial borders of the left ventricle in a mouse heart. The contour is initialized by computing the ellipse corresponding to the maximal gradient inverse coefficient of variation (GICOV) value. The GICOV is the mean divided by the normalized standard deviation of the image intensity gradient in the outward normal direction along the contour. The GICOV is maximal when the contour lies along strong, relatively constant gradients. The contour is then evolved until it maximizes the GICOV value subject to shape constraints. The problem is formulated in a Bayesian framework and is implemented using a Markov Chain Monte Carlo technique.
Matching NLO QCD with parton shower in Monte Carlo scheme: the KrkNLO method
NASA Astrophysics Data System (ADS)
Jadach, S.; Płaczek, W.; Sapeta, S.; Siódmok, A.; Skrzypek, M.
2015-10-01
A new method of including the complete NLO QCD corrections to hard processes in the LO parton-shower Monte Carlo (PSMC) is presented. This method, called KrkNLO, requires the use of parton distribution functions in a dedicated Monte Carlo (MC) factorization scheme, which is also discussed in this paper. In the future, it may simplify introduction of the NNLO corrections to hard processes and the NLO corrections to PSMC. Details of the method and numerical examples of its practical implementation as well as comparisons with other calculations, such as MCFM, MC@NLO, POWHEG, for single Z/γ*-boson production at the LHC are presented.
Sampling uncertainty evaluation for data acquisition board based on Monte Carlo method
NASA Astrophysics Data System (ADS)
Ge, Leyi; Wang, Zhongyu
2008-10-01
Evaluating the sampling uncertainty of a data acquisition board is a difficult problem in the field of signal sampling. This paper first analyzes the sources of data acquisition board sampling uncertainty, then introduces a simulation theory for data acquisition board sampling uncertainty evaluation based on the Monte Carlo method and puts forward a relational model of sampling uncertainty results, sample numbers and simulation times. For different sample numbers and different signal scopes, the authors establish a random sampling uncertainty evaluation program for a PCI-6024E data acquisition board to execute the simulation. The results of the proposed Monte Carlo simulation method are in good agreement with the GUM ones, demonstrating the validity of the Monte Carlo method.
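The Monte Carlo approach to uncertainty evaluation (GUM Supplement 1 style) amounts to pushing the input distributions through the measurement model and reading the standard uncertainty off the output samples. The sketch below uses a hypothetical two-input voltage model, not the PCI-6024E characterization from the paper, and compares the Monte Carlo result against the analytic root-sum-square combination:

```python
import random
import statistics

def mc_uncertainty(model, samplers, n=50_000, seed=0):
    """Monte Carlo uncertainty evaluation: sample every input from its
    distribution, evaluate the model, and report the mean and standard
    deviation of the output samples."""
    rng = random.Random(seed)
    ys = [model(*(s(rng) for s in samplers)) for _ in range(n)]
    return statistics.mean(ys), statistics.stdev(ys)

# Hypothetical model: measured voltage = reading + offset error,
# reading ~ N(5.0 V, 0.010 V), offset ~ U(-0.02 V, +0.02 V)
mean, u_mc = mc_uncertainty(
    lambda r, o: r + o,
    [lambda g: g.gauss(5.0, 0.010), lambda g: g.uniform(-0.02, 0.02)],
)

# Analytic (GUM) combined uncertainty: sqrt(u1^2 + (width/sqrt(12))^2)
u_gum = (0.010**2 + 0.04**2 / 12) ** 0.5
print(mean, u_mc, u_gum)  # the two uncertainties agree closely
```

For a linear model the two routes coincide; the Monte Carlo route additionally yields the full output distribution, which the analytic propagation does not.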
A Monte Carlo Study of Eight Confidence Interval Methods for Coefficient Alpha
ERIC Educational Resources Information Center
Romano, Jeanine L.; Kromrey, Jeffrey D.; Hibbard, Susan T.
2010-01-01
The purpose of this research is to examine eight of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions. In general, the differences in
Comparison of the Monte Carlo adjoint-weighted and differential operator perturbation methods
Kiedrowski, Brian C; Brown, Forrest B
2010-01-01
Two perturbation theory methodologies are implemented for k-eigenvalue calculations in the continuous-energy Monte Carlo code, MCNP6. A comparison of the accuracy of these techniques, the differential operator and adjoint-weighted methods, is performed numerically and analytically. Typically, the adjoint-weighted method shows better performance over a larger range; however, there are exceptions.
Measuring Stellar Radial Velocity using Markov Chain Monte Carlo (MCMC) Method
NASA Astrophysics Data System (ADS)
Song, Yihan; Luo, Ali; Zhao, Yongheng
2014-01-01
Stellar radial velocity is estimated by using template fitting and Markov Chain Monte Carlo (MCMC) methods. This method works on LAMOST stellar spectra. The MCMC simulation generates a probability distribution of the RV, and the RV error can also be computed from the distribution.
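A minimal version of this approach, a Metropolis sampler for the Doppler shift of a rest-frame template fitted to a noisy observed spectrum, might look as follows. The single Gaussian absorption line, noise level, and proposal step are illustrative assumptions, not LAMOST specifics:

```python
import math
import random
import statistics

C_KMS = 299792.458  # speed of light in km/s

def metropolis_rv(obs, template, waves, sigma, n_iter=10_000, step=2.0, seed=0):
    """Metropolis sampling of radial velocity v (km/s). The model spectrum
    is the rest-frame template evaluated at lambda / (1 + v/c)."""
    rng = random.Random(seed)

    def log_like(v):
        return -0.5 * sum(
            ((o - template(w / (1 + v / C_KMS))) / sigma) ** 2
            for o, w in zip(obs, waves)
        )

    v, lp, chain = 0.0, log_like(0.0), []
    for _ in range(n_iter):
        v_new = v + rng.gauss(0.0, step)          # random-walk proposal
        lp_new = log_like(v_new)
        if math.log(rng.random()) < lp_new - lp:  # Metropolis accept/reject
            v, lp = v_new, lp_new
        chain.append(v)
    return chain[n_iter // 2:]                    # discard burn-in

# Toy rest-frame template: one Gaussian absorption line at 6563 angstroms
template = lambda lam: 1.0 - 0.5 * math.exp(-0.5 * ((lam - 6563.0) / 1.5) ** 2)
waves = [6555.0 + 0.2 * i for i in range(80)]
noise = random.Random(1)
v_true = 30.0  # km/s
obs = [template(w / (1 + v_true / C_KMS)) + noise.gauss(0, 0.01) for w in waves]

chain = metropolis_rv(obs, template, waves, sigma=0.01)
rv, rv_err = statistics.mean(chain), statistics.stdev(chain)
print(rv, rv_err)  # RV estimate near 30 km/s, with its 1-sigma error
```

The posterior samples give both the RV estimate (chain mean) and its error (chain spread) in one pass, which is exactly the advantage the abstract describes.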
Perfetti, Christopher M; Martin, William R; Rearden, Bradley T; Williams, Mark L
2012-01-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.
Improved methods of handling massive tallies in reactor Monte Carlo Code RMC
She, D.; Wang, K.; Sun, J.; Qiu, Y.
2013-07-01
Monte Carlo simulations containing a large number of tallies generally suffer severe performance penalties due to a significant amount of run time spent in searching for and scoring individual tally bins. This paper describes improved methods for handling large numbers of tallies, which have been implemented in the RMC Monte Carlo code. The calculation results demonstrate that the proposed methods can considerably improve tally performance when massive tallies are treated. In a calculated case with 6 million tally regions, run time in each active cycle increases by only 10% relative to an inactive cycle. (authors)
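The cost of bin searching can be illustrated generically: scoring by scanning every bin definition per event versus mapping each event straight to its bin index. The sketch below is a toy illustration of keyed tally lookup with uniform energy groups, not RMC's actual data structures:

```python
import random
import time

def score_linear(bins, events):
    """Naive tally scoring: linear search over every bin definition per event."""
    tallies = [0.0] * len(bins)
    for cell, e, w in events:
        for i, (bcell, elo, ehi) in enumerate(bins):
            if bcell == cell and elo <= e < ehi:
                tallies[i] += w
                break
    return tallies

def score_keyed(bins, events, width):
    """Keyed tally scoring: map (cell, energy group) directly to a bin index."""
    index = {(bcell, round(elo / width)): i
             for i, (bcell, elo, ehi) in enumerate(bins)}
    tallies = [0.0] * len(bins)
    for cell, e, w in events:
        i = index.get((cell, int(e / width)))
        if i is not None:
            tallies[i] += w
    return tallies

# 50 cells x 20 uniform energy groups = 1000 tally bins (illustrative sizes)
n_cells, n_groups, e_max = 50, 20, 20.0
width = e_max / n_groups
bins = [(c, g * width, (g + 1) * width)
        for c in range(n_cells) for g in range(n_groups)]
rng = random.Random(0)
events = [(rng.randrange(n_cells), rng.random() * e_max, 1.0) for _ in range(5000)]

t0 = time.perf_counter(); a = score_linear(bins, events); t1 = time.perf_counter()
b = score_keyed(bins, events, width);                      t2 = time.perf_counter()
assert a == b
print(f"linear: {t1 - t0:.3f} s, keyed: {t2 - t1:.3f} s")
```

The two scorers produce identical tallies, but the keyed version does constant work per event, which is the kind of restructuring that makes millions of tally regions affordable.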
High-order path-integral Monte Carlo methods for solving quantum dot problems
NASA Astrophysics Data System (ADS)
Chin, Siu A.
2015-03-01
The conventional second-order path-integral Monte Carlo method is plagued with the sign problem in solving many-fermion systems. This is due to the large number of antisymmetric free-fermion propagators that are needed to extract the ground state wave function at large imaginary time. In this work we show that optimized fourth-order path-integral Monte Carlo methods, which use no more than five free-fermion propagators, can yield accurate quantum dot energies for up to 20 polarized electrons with the use of the Hamiltonian energy estimator.
Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport
McKinley, M S; Brooks III, E D; Daffin, F
2004-12-13
Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.
Revised methods for few-group cross sections generation in the Serpent Monte Carlo code
Fridman, E.; Leppaenen, J.
2012-07-01
This paper presents new calculation methods, recently implemented in the Serpent Monte Carlo code, and related to the production of homogenized few-group constants for deterministic 3D core analysis. The new methods fall under three topics: 1) Improved treatment of neutron-multiplying scattering reactions, 2) Group constant generation in reflectors and other non-fissile regions and 3) Homogenization in leakage-corrected criticality spectrum. The methodology is demonstrated by a numerical example, comparing a deterministic nodal diffusion calculation using Serpent-generated cross sections to a reference full-core Monte Carlo simulation. It is concluded that the new methodology improves the results of the deterministic calculation, and paves the way for Monte Carlo based group constant generation. (authors)
NASA Astrophysics Data System (ADS)
Newell, Quentin Thomas
The Monte Carlo method provides powerful geometric modeling capabilities for large problem domains in 3-D; therefore, the Monte Carlo method is becoming popular for 3-D fuel depletion analyses to compute quantities of interest in spent nuclear fuel, including isotopic compositions. The Monte Carlo approach has not been fully embraced due to unresolved issues concerning the effect of Monte Carlo uncertainties on the predicted results. Use of the Monte Carlo method to solve the neutron transport equation introduces stochastic uncertainty in the computed fluxes. These fluxes are used to collapse cross sections, estimate power distributions, and deplete the fuel within depletion calculations; therefore, the predicted number densities contain random uncertainties from the Monte Carlo solution. These uncertainties can be compounded in time because of the extrapolative nature of depletion and decay calculations. The objective of this research was to quantify how the stochastic flux uncertainty introduced by the Monte Carlo method propagates to the number densities of the different isotopes in spent nuclear fuel over multiple depletion time steps. The research derived a formula that calculates the standard deviation in the nuclide number densities based on propagating the statistical uncertainty introduced when using coupled Monte Carlo depletion computer codes. The research was carried out using the TRITON/KENO sequence of the SCALE computer code. The linear uncertainty nuclide group approximation (LUNGA) method developed in this research approximated the variance of the ?N term, which is the variance in the flux shape due to uncertainty in the calculated nuclide number densities. Three different example problems were used in this research to calculate the standard deviation in the nuclide number densities using the LUNGA method.
The example problems showed that the LUNGA method is capable of calculating the standard deviation of the nuclide number densities and kinf. Examples 2 and 3 demonstrated a percent difference of much less than 1 percent between the LUNGA and the exact methods for calculating the standard deviation in the nuclide number densities. The LUNGA method was capable of calculating the variance of the ?N term in Example 2, but unfortunately not in Example 3. However, both Examples 2 and 3 showed that the contribution from the ?N term to the variance in the number densities is minute compared to the contributions from the ?S term and from the variances and covariances of the number densities themselves. This research concluded with validation and verification of the LUNGA method. The research demonstrated that the LUNGA method and the statistics of 100 different Monte Carlo simulations agreed with 99 percent confidence in calculating the standard deviation in the number densities and kinf based on propagating the statistical uncertainty in the flux introduced by using the Monte Carlo method in coupled Monte Carlo depletion calculations.
Quantum-trajectory Monte Carlo method for study of electron-crystal interaction in STEM.
Ruan, Z; Zeng, R G; Ming, Y; Zhang, M; Da, B; Mao, S F; Ding, Z J
2015-07-21
In this paper, a novel quantum-trajectory Monte Carlo simulation method is developed to study electron beam interaction with a crystalline solid for application to electron microscopy and spectroscopy. The method combines the Bohmian quantum trajectory method, which treats electron elastic scattering and diffraction in a crystal, with Monte Carlo sampling of electron inelastic scattering events along the quantum trajectory paths. We study the electron scattering and secondary electron generation process in crystals for a focused incident electron beam, leading to an understanding of the imaging mechanism behind the atomic-resolution secondary electron images recently achieved experimentally with a scanning transmission electron microscope. In this method, the Bohmian quantum trajectories are first calculated from a wave function obtained via numerical solution of the time-dependent Schrödinger equation with a multislice method. The impact-parameter-dependent inner-shell excitation cross section then enables Monte Carlo sampling of ionization events produced by incident electron trajectories travelling along atom columns, which excite high-energy knock-on secondary electrons. The subsequent cascade production, transport, and emission of very-low-energy true secondary electrons are traced by a conventional Monte Carlo simulation to build the image signals. Comparison of the simulated image for a Si(110) crystal with the experimental image indicates that the dominant mechanism of atomic resolution in secondary electron imaging is the inner-shell ionization events generated by a high-energy electron beam. PMID:26082190
A multi-group Monte Carlo core analysis method and its application in SCWR design
Zhang, P.; Wang, K.; Yu, G.
2012-07-01
Complex geometry and spectra characterize many newly developed nuclear energy systems, so the suitability and precision of traditional deterministic codes are questionable when applied to simulate these systems. In contrast, the Monte Carlo method has inherent advantages in dealing with complex geometry and spectra. Its main disadvantage is that it takes a long time to obtain reliable results, so the efficiency is too low for ordinary core design work. A new Monte Carlo core analysis scheme is developed, aimed at increasing the calculation efficiency. It proceeds in two steps: first, the assembly-level simulation is performed by the continuous-energy Monte Carlo method, which is suitable for any geometry and spectrum configuration, and the assembly multi-group constants are tallied at the same time; second, the core-level calculation is performed by the multi-group Monte Carlo method, using the assembly group constants generated in the first step. Compared with heterogeneous Monte Carlo calculations of the whole core, this two-step scheme is more efficient, and the precision is acceptable for the preliminary analysis of novel nuclear systems. Using this core analysis scheme, a SCWR core was designed based on a new SCWR assembly design. The core output is about 1,100 MWe, and a cycle length of about 550 EFPDs can be achieved with a 3-batch refueling pattern. The average and maximum discharge burn-ups are about 53.5 and 60.9 MWD/kgU, respectively. (authors)
The Simulation-Tabulation Method for Classical Diffusion Monte Carlo
NASA Astrophysics Data System (ADS)
Hwang, Chi-Ok; Given, James A.; Mascagni, Michael
2001-12-01
Many important classes of problems in materials science and biotechnology require the solution of the Laplace or Poisson equation in disordered two-phase domains in which the phase interface is extensive and convoluted. Green's function first-passage (GFFP) methods solve such problems efficiently by generalizing the walk on spheres (WOS) method to allow first-passage (FP) domains to be not just spheres but a wide variety of geometrical shapes. (In particular, this solves the difficulty of slow convergence with WOS by allowing FP domains that contain patches of the phase interface.) Previous studies accomplished this by using geometries for which the Green's function was available in quasi-analytic form. Here, we extend these studies by using the simulation-tabulation (ST) method. We simulate and then tabulate surface Green's functions that cannot be obtained analytically. The ST method is applied to the Solc-Stockmayer model with zero potential, to the mean trapping rate of a diffusing particle in a domain of nonoverlapping spherical traps, and to the effective conductivity for perfectly insulating, nonoverlapping spherical inclusions in a matrix of finite conductivity. In all cases, this class of algorithms provides the most efficient methods known to solve these problems to high accuracy.
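The walk-on-spheres kernel that the GFFP approach generalizes can be shown in a few lines. The sketch below is not the authors' simulation-tabulation implementation; it is a minimal WOS solver for Laplace's equation on the unit disk, where the jump radius and boundary data are simple closed-form lambdas chosen for illustration.

```python
import math
import random

def walk_on_spheres(x, y, boundary_value, dist_to_boundary,
                    eps=1e-4, n_walks=20000, seed=0):
    """Estimate the solution of Laplace's equation at (x, y) by WOS."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        px, py = x, y
        while True:
            r = dist_to_boundary(px, py)   # radius of the largest inscribed sphere
            if r < eps:                    # close enough: absorb on the boundary
                total += boundary_value(px, py)
                break
            theta = rng.uniform(0.0, 2.0 * math.pi)
            px += r * math.cos(theta)      # first-passage jump to the sphere surface
            py += r * math.sin(theta)
    return total / n_walks

# Unit disk with boundary data g(x, y) = x; the harmonic extension is u = x,
# so the estimate at (0.3, 0.4) should be close to 0.3.
u = walk_on_spheres(0.3, 0.4,
                    boundary_value=lambda px, py: px,
                    dist_to_boundary=lambda px, py: 1.0 - math.hypot(px, py))
```

The slow convergence mentioned in the abstract is visible here: the walk needs many small hops to get within eps of the boundary, which is exactly what FP domains containing boundary patches avoid.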
Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.
2008-10-31
Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.
Power Analysis for Complex Mediational Designs Using Monte Carlo Methods
ERIC Educational Resources Information Center
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2010-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex…
On performance measures for infinite swapping Monte Carlo methods.
Doll, J D; Dupuis, Paul
2015-01-14
We introduce and illustrate a number of performance measures for rare-event sampling methods. These measures are designed to be of use in a variety of expanded ensemble techniques including parallel tempering as well as infinite and partial infinite swapping approaches. Using a variety of selected applications, we address questions concerning the variation of sampling performance with respect to key computational ensemble parameters. PMID:25591342
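As a minimal illustration of the expanded-ensemble idea behind parallel tempering (the infinite-swapping limit studied in the paper is not attempted here), the sketch below runs replicas of a double-well system at several temperatures and swaps configurations between adjacent temperature levels with the standard Metropolis swap test. The temperature ladder, step size, and well potential are illustrative choices, not the paper's applications.

```python
import math
import random

def potential(x):
    return (x * x - 1.0) ** 2          # double well with minima at x = +/-1

def parallel_tempering(betas, n_sweeps=5000, step=0.5, seed=1):
    rng = random.Random(seed)
    xs = [1.0] * len(betas)            # all replicas start in the right well
    visits_left = 0                    # low-temperature visits to the left well
    for _ in range(n_sweeps):
        # Metropolis move within each replica at its own temperature
        for i, beta in enumerate(betas):
            prop = xs[i] + rng.uniform(-step, step)
            d_e = potential(prop) - potential(xs[i])
            if d_e <= 0.0 or rng.random() < math.exp(-beta * d_e):
                xs[i] = prop
        # attempt a swap between one random adjacent temperature pair
        i = rng.randrange(len(betas) - 1)
        d = (betas[i] - betas[i + 1]) * (potential(xs[i]) - potential(xs[i + 1]))
        if d >= 0.0 or rng.random() < math.exp(d):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
        if xs[0] < 0.0:
            visits_left += 1
    return visits_left / n_sweeps

frac = parallel_tempering([8.0, 4.0, 2.0, 1.0, 0.5])
# By symmetry the cold replica should spend roughly half its time in each well;
# without the swaps it would stay trapped in the starting well.
```

Performance measures of the kind the paper introduces would quantify, for example, how quickly this well-occupancy fraction equilibrates as the ladder is varied.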
Lee, Anthony; Yau, Christopher; Giles, Michael B; Doucet, Arnaud; Holmes, Christopher C
2010-12-01
We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups of 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
Quantum Monte Carlo Methods for First Principles Simulation of Liquid Water
ERIC Educational Resources Information Center
Gergely, John Robert
2009-01-01
Obtaining an accurate microscopic description of water structure and dynamics is of great interest to molecular biology researchers and in the physics and quantum chemistry simulation communities. This dissertation describes efforts to apply quantum Monte Carlo methods to this problem with the goal of making progress toward a fully "ab initio"
Variance reduction methods applied to deep-penetration Monte Carlo problems
Cramer, S.N.; Tang, J.S.
1986-01-01
A review of standard variance reduction methods for deep-penetration Monte Carlo calculations is presented. Comparisons and contrasts are made with methods for nonpenetration and reactor core problems. Difficulties and limitations of the Monte Carlo method for deep-penetration calculations are discussed in terms of transport theory, statistical uncertainty and computing technology. Each aspect of a Monte Carlo code calculation is detailed, including the natural and biased forms of (1) the source description, (2) the transport process, (3) the collision process, and (4) the estimation process. General aspects of cross-section data use and geometry specification are also discussed. Adjoint calculations are examined in the context of both complete calculations and approximate calculations for use as importance functions for forward calculations. The idea of importance and the realization of the importance function are covered in both general and mathematical terms. Various methods of adjoint importance generation and its implementation are covered, including the simultaneous generation of both forward and adjoint fluxes in one calculation. A review of the current literature on mathematical aspects of variance reduction and statistical uncertainty is given. Three widely used Monte Carlo codes - MCNP, MORSE, and TRIPOLI - are compared and contrasted in connection with many of the specific items discussed throughout the presentation. 75 refs., 16 figs.
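One biased transport game of the kind surveyed here, survival biasing (implicit capture) with Russian roulette, can be sketched on a toy 1-D rod model. The slab thickness, scattering ratio, and weight cutoff below are illustrative choices, not values from the review; both estimators are unbiased for the same transmission probability.

```python
import math
import random

def transmission(n_hist, thickness=2.0, c=0.5, implicit=False, seed=2):
    """1-D rod model: estimate leakage through a slab of unit total cross section."""
    rng = random.Random(seed)
    score = 0.0
    for _ in range(n_hist):
        x, mu, w = 0.0, 1.0, 1.0
        while True:
            x += mu * (-math.log(rng.random()))    # sample the free flight length
            if x >= thickness:
                score += w                         # transmitted: tally the weight
                break
            if x < 0.0:
                break                              # leaked out the near side
            if implicit:
                w *= c                             # survival biasing: no analog kill
                if w < 0.05:                       # Russian roulette on low weights
                    if rng.random() < 0.5:
                        w *= 2.0                   # survive with doubled weight
                    else:
                        break
            elif rng.random() >= c:
                break                              # analog absorption
            mu = 1.0 if rng.random() < 0.5 else -1.0   # isotropic 1-D scatter
    return score / n_hist

t_analog = transmission(20000)
t_implicit = transmission(20000, implicit=True)
# Both estimates converge to the same transmission probability; the weighted
# game avoids wasting histories on early absorption events.
```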
An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho
2001-01-01
Examined the accuracy of the Gibbs sampling Markov chain Monte Carlo procedure for estimating item and person (theta) parameters in the one-parameter logistic model. Analyzed four empirical datasets using the Gibbs sampling, conditional maximum likelihood, marginal maximum likelihood, and joint maximum likelihood methods. Discusses the conditions
ERIC Educational Resources Information Center
Kim, Jee-Seon; Bolt, Daniel M.
2007-01-01
The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain
Monte-Carlo methods make Dempster-Shafer formalism feasible
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik YA.; Bernat, Andrew; Borrett, Walter; Mariscal, Yvonne; Villa, Elsa
1991-01-01
One of the main obstacles to the application of the Dempster-Shafer formalism is its computational complexity. If we combine m different pieces of knowledge, then in the general case we have to perform up to 2(sup m) computational steps, which for large m is infeasible. For several important cases, algorithms with smaller running times have been proposed. We prove, however, that if we want to compute the belief bel(Q) in any given query Q, then exponential time is inevitable. It remains inevitable if we only want to compute bel(Q) to a given precision epsilon. This restriction corresponds to the natural idea that, since the initial masses are known only approximately, there is no sense in trying to compute bel(Q) precisely. A further idea is that there is always some doubt in the whole body of knowledge, so there is always a probability p(sub o) that the expert's knowledge is wrong. In view of that, it is sufficient to have an algorithm that gives a correct answer with probability greater than 1-p(sub o). If we use the original Dempster combination rule, this relaxation reduces the running time but still leaves the problem infeasible in the general case. We show that for the alternative combination rules proposed by Smets and Yager, feasible methods exist. We also show how these methods can be parallelized, and which parallelization model fits this problem best.
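A minimal sketch of the Monte Carlo idea - sample one focal element from each piece of evidence, intersect, and condition on consistent samples - is shown below for the original Dempster rule (the feasible Smets and Yager variants differ in how conflict is handled). The mass functions are invented examples, not from the paper.

```python
import random

def mc_belief(masses, query, n_samples=20000, seed=3):
    """Monte Carlo estimate of bel(query) under Dempster's rule of combination.

    Each element of `masses` is a dict mapping a frozenset (focal element)
    to its mass; the masses in each dict sum to one.
    """
    rng = random.Random(seed)
    universe = frozenset().union(*(fs for m in masses for fs in m))
    inside = nonempty = 0
    for _ in range(n_samples):
        focal = universe
        for m in masses:
            r = rng.random()              # sample one focal element per source
            for fs, p in m.items():
                r -= p
                if r <= 0.0:
                    break
            focal = focal & fs
        if focal:                         # Dempster: discard inconsistent samples
            nonempty += 1
            if focal <= query:            # intersection supports the query
                inside += 1
    return inside / nonempty

# Invented example masses over the frame {"a", "b"}.
m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.5}
bel = mc_belief([m1, m2], frozenset({"a"}))   # exact value here is 5/7
```

Each sample costs O(m) set intersections rather than enumerating all 2(sup m) combinations, which is the source of the feasibility gain.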
Linear scaling electronic structure Monte Carlo method for metals
NASA Astrophysics Data System (ADS)
Krajewski, Florian R.; Parrinello, Michele
2007-06-01
We present a method for sampling the Boltzmann distribution of a system in which the interionic interactions are derived from empirical or semiempirical electronic structure calculations within the Born-Oppenheimer approximation. We considerably improve on a scheme presented earlier [F. R. Krajewski and M. Parrinello, Phys. Rev. B 73, 041105(R) (2006)]. To this effect, we use an expression for the partition function in which electronic and ionic degrees of freedom are treated on the same footing. In addition, we introduce an auxiliary set of fields in such a way that the sampling of the partition function scales linearly with system size. We demonstrate the validity of this approach on tight-binding models of carbon nanotubes and silicon in its liquid and crystalline phases.
NASA Astrophysics Data System (ADS)
Nakamoto, Takamichi
Our group has studied an odor sensing system using an array of Quartz Crystal Microbalance (QCM) gas sensors and neural-network pattern recognition. In this odor sensing system, it is important to know the properties of the sensing films coated on the QCM electrodes. These sensing films have not been characterized experimentally well enough to predict the sensor response. We have investigated the prediction of sensor responses using a computational chemistry method, Grand Canonical Monte Carlo (GCMC) simulation, and have successfully predicted the amount of sorption with it. The GCMC method requires no empirical parameters, unlike many other prediction methods used for modeling QCM-based sensor responses. In this chapter, the Grand Canonical Monte Carlo method is reviewed as a means of predicting the response of a QCM gas sensor, and the modeling results are compared with experiments.
NASA Astrophysics Data System (ADS)
Kim, Minho; Lee, Hyounggun; Kim, Hyosim; Park, Hongmin; Lee, Wonho; Park, Sungho
2014-03-01
This study evaluated the Monte Carlo method for dose calculation in fluoroscopy by using a realistic human phantom. The dose was calculated using Monte Carlo N-Particle eXtended (MCNPX) in simulations and was measured using the Korean Typical Man-2 (KTMAN-2) phantom in the experiments. MCNPX is a widely used simulation tool based on the Monte Carlo method and uses random sampling. KTMAN-2 is a virtual phantom written in MCNPX language and based on the typical Korean man. This study was divided into two parts: simulations and experiments. In the former, the spectrum generation program SRS-78 was used to obtain the output energy spectrum for fluoroscopy; then, the dose to each target organ was calculated using KTMAN-2 with MCNPX. In the latter part, the output of the fluoroscope was calibrated first, and TLDs (thermoluminescent dosimeters) were inserted in the ART (Alderson Radiation Therapy) phantom at the same places as in the simulation. The phantom was then exposed to radiation, and the simulated and experimental doses were compared. To convert the simulation unit to the dose unit, we set a normalization factor (NF). Comparing the simulated with the experimental results, we found most of the values to be similar, which demonstrates the effectiveness of the Monte Carlo method in fluoroscopic dose evaluation. The equipment used in this study included a TLD, a TLD reader, an ART phantom, an ionization chamber and a fluoroscope.
Kim, Y.; Shim, H. J.; Noh, T.
2006-07-01
To resolve the double-heterogeneity (DH) problem resulting from the TRISO fuel of high-temperature gas-cooled reactors (HTGRs), a synergistic combination of a deterministic method and the Monte Carlo method has been proposed. As the deterministic approach, the RPT (Reactivity-equivalent Physical Transformation) method is adopted. In the combined methodology, a reference k-infinity value is obtained by the Monte Carlo method for an initial state of a problem; it is used by the RPT method to transform the original DH problem into a conventional single-heterogeneity one, and the transformed problem is analyzed by conventional deterministic methods. The combined methodology has been applied to the depletion analysis of typical HTGR fuels, including both the prismatic block and the pebble. The reference solution is obtained using the Monte Carlo code MCCARD, and the accuracy of the deterministic-only and the combined methods is evaluated. For the deterministic solution, the DRAGON and HELIOS codes were used. It has been shown that the combined method provides an accurate solution although the deterministic-only solution shows noticeable errors. For the pebble, the two deterministic codes cannot handle the DH problem. Nevertheless, we have shown that the solution of the DRAGON-MCCARD combined approach agrees well with the reference. (authors)
NASA Astrophysics Data System (ADS)
Dixon, D. A.; Prinja, A. K.; Franke, B. C.
2015-09-01
This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method, including systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering differential cross section (DCS) model by extending previous work on the MP method beyond analytical DCSs to partial-wave (PW) elastic tabulated DCS data.
A high-order photon Monte Carlo method for radiative transfer in direct numerical simulation
Wu, Y.; Modest, M.F.; Haworth, D.C. E-mail: dch12@psu.edu
2007-05-01
A high-order photon Monte Carlo method is developed to solve the radiative transfer equation. The statistical and discretization errors of the computed radiative heat flux and radiation source term are isolated and quantified. Up to sixth-order spatial accuracy is demonstrated for the radiative heat flux, and up to fourth-order accuracy for the radiation source term. This demonstrates the compatibility of the method with high-fidelity direct numerical simulation (DNS) for chemically reacting flows. The method is applied to address radiative heat transfer in a one-dimensional laminar premixed flame and a statistically one-dimensional turbulent premixed flame. Modifications of the flame structure with radiation are noted in both cases, and the effects of turbulence/radiation interactions on the local reaction zone structure are revealed for the turbulent flame. Computational issues in using a photon Monte Carlo method for DNS of turbulent reacting flows are discussed.
A Monte Carlo implementation of the predictor-corrector Quasi-Static method
Hackemack, M. W.; Ragusa, J. C.; Griesheimer, D. P.; Pounders, J. M.
2013-07-01
The Quasi-Static method (QS) is a useful tool for solving reactor transients since it allows for larger time steps when updating neutron distributions. Because of the beneficial attributes of Monte Carlo (MC) methods (exact geometries and continuous energy treatment), it is desirable to develop a MC implementation for the QS method. In this work, the latest version of the QS method, known as the Predictor-Corrector Quasi-Static method, is implemented. Experiments utilizing two energy groups provide results that show good agreement with analytical and reference solutions. The method as presented can easily be implemented in any continuous energy, arbitrary geometry, MC code. (authors)
Estimation of magnetocaloric properties by using Monte Carlo method for AMRR cycle
NASA Astrophysics Data System (ADS)
Arai, R.; Tamura, R.; Fukuda, H.; Li, J.; Saito, A. T.; Kaji, S.; Nakagome, H.; Numazawa, T.
2015-12-01
In order to achieve a wide refrigerating temperature range in magnetic refrigeration, it is effective to layer multiple materials with different Curie temperatures, and a detailed understanding of the physical properties of the materials is crucial for optimizing the material selection and the layered structure. In the present study, we discuss methods for estimating changes in physical properties, particularly the Curie temperature, when some of the Gd atoms are replaced with non-magnetic elements, taking Gd, a typical ferromagnetic magnetocaloric material, as the basis for material design. For this purpose, alongside calculations using the S=7/2 Ising model and the Monte Carlo method, we made specific heat and magnetization measurements of Gd-R alloys (R = Y, Zr) to compare experimental and calculated values. The results showed that the magnetic entropy change, specific heat, and Curie temperature can be estimated with good accuracy using the Monte Carlo method.
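As a hedged illustration of the modeling side (a spin-1/2 square-lattice toy rather than the S=7/2 model used in the study), the Metropolis sketch below shows how the magnetization collapse that locates the Curie temperature emerges from the sampling; lattice size, sweep counts, and temperatures are illustrative.

```python
import math
import random

def ising_magnetization(T, L=16, sweeps=400, seed=4):
    """Mean |magnetization| per spin of the 2-D Ising model (Metropolis, J = kB = 1)."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]
    m_acc = 0.0
    for sweep in range(2 * sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            d_e = 2.0 * spin[i][j] * nb          # energy cost of flipping this spin
            if d_e <= 0.0 or rng.random() < math.exp(-d_e / T):
                spin[i][j] = -spin[i][j]
        if sweep >= sweeps:                       # discard the first half as burn-in
            m_acc += abs(sum(map(sum, spin))) / (L * L)
    return m_acc / sweeps

m_cold = ising_magnetization(1.5)   # well below Tc (about 2.269 in these units)
m_hot = ising_magnetization(3.5)    # well above Tc: magnetization collapses
```

Scanning T for the steepest drop in magnetization (or the specific-heat peak) gives the Monte Carlo estimate of the Curie temperature, which is the quantity the study tracks under dilution.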
A Monte Carlo method for solving the one-dimensional telegraph equations with boundary conditions
NASA Astrophysics Data System (ADS)
Acebrón, Juan A.; Ribeiro, Marco A.
2016-01-01
A Monte Carlo algorithm is derived to solve the one-dimensional telegraph equations in a bounded domain subject to resistive and non-resistive boundary conditions. The proposed numerical scheme is more efficient than the classical Kac theory because it does not require the discretization of time. The algorithm has been validated by comparing the results obtained with theory and with the finite-difference time-domain (FDTD) method for a typical two-wire transmission line terminated at both ends with general boundary conditions. We have also tested transmission-line heterogeneities to account for wave propagation in multiple media. The algorithm is inherently parallel, since it is based on Monte Carlo simulations, and does not suffer from the numerical dispersion and dissipation issues that arise in finite-difference-based numerical schemes on a lossy medium. This allowed us to develop an efficient numerical method, capable of outperforming the classical FDTD method for large-scale problems and high-frequency signals.
Perfetti, C.; Martin, W.; Rearden, B.; Williams, M.
2012-07-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the Shift Monte Carlo code within the SCALE code package. The methods were used for two small-scale test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods. (authors)
Monte Carlo method of radiative transfer applied to a turbulent flame modeling with LES
NASA Astrophysics Data System (ADS)
Zhang, Jin; Gicquel, Olivier; Veynante, Denis; Taine, Jean
2009-06-01
Radiative transfer plays an important role in the numerical simulation of turbulent combustion. However, because combustion and radiation are characterized by different time scales and different spatial and chemical treatments, the radiation effect is often neglected or roughly modelled. The coupling of a large eddy simulation combustion solver and a radiation solver through a dedicated language, CORBA, is investigated. Two formulations of the Monte Carlo method (the Forward Method and the Emission Reciprocity Method) employed to solve the radiative transfer equation (RTE) have been compared in a one-dimensional flame test case, using three-dimensional calculation grids with absorbing and emitting media, in order to validate the Monte Carlo radiative solver and to choose the most efficient model for coupling. Then the results obtained using two different RTE solvers (the Reciprocity Monte Carlo method and the Discrete Ordinate Method), applied to a three-dimensional flame holder set-up with a correlated-k distribution model describing the real-gas spectral radiative properties, are compared not only in terms of the physical behavior of the flame, but also in computational performance (storage requirement, CPU time and parallelization efficiency). To cite this article: J. Zhang et al., C. R. Mecanique 337 (2009).
On a Monte Carlo method for measurement uncertainty evaluation and its implementation
NASA Astrophysics Data System (ADS)
Harris, P. M.; Cox, M. G.
2014-08-01
The Guide to the Expression of Uncertainty in Measurement (GUM) provides a framework and procedure for evaluating and expressing measurement uncertainty. The procedure has two main limitations. Firstly, the way a coverage interval is constructed to contain values of the measurand with a stipulated coverage probability is approximate. Secondly, insufficient guidance is given for the multivariate case in which there is more than one measurand. In order to address these limitations, two specific guidance documents (or Supplements to the GUM) on, respectively, a Monte Carlo method for uncertainty evaluation (Supplement 1) and extensions to any number of measurands (Supplement 2) have been published. A further document on developing and using measurement models in the context of uncertainty evaluation (Supplement 3) is also planned, but not considered in this paper. An overview is given of these guidance documents. In particular, a Monte Carlo method, which is the focus of Supplements 1 and 2, is described as a numerical approach to implement the propagation of distributions formulated using the change of variables formula. Although applying a Monte Carlo method is conceptually straightforward, some of the practical aspects of using the method are considered, such as the choice of the number of trials and ensuring an implementation is memory-efficient. General comments about the implications of using the method in measurement and calibration services, such as the need to achieve transferability of measurement results, are made.
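A minimal sketch of the Supplement 1 procedure - propagate input distributions through the measurement model, then read the standard uncertainty and a probabilistically symmetric coverage interval off the sorted trials - might look as follows. The measurement model (a resistance R = V/I) and the input distributions are invented for illustration; an adaptive choice of the number of trials, as discussed in the Supplement, is omitted.

```python
import random
import statistics

def propagate(model, samplers, n_trials=100000, coverage=0.95, seed=5):
    """Supplement-1-style propagation of distributions through a measurement model."""
    rng = random.Random(seed)
    ys = sorted(model(*(draw(rng) for draw in samplers)) for _ in range(n_trials))
    y_est = statistics.fmean(ys)              # estimate of the measurand
    u_y = statistics.stdev(ys)                # standard uncertainty
    lo = ys[int(0.5 * (1.0 - coverage) * n_trials)]
    hi = ys[int(0.5 * (1.0 + coverage) * n_trials) - 1]
    return y_est, u_y, (lo, hi)               # probabilistically symmetric interval

# Hypothetical measurement model: resistance R = V / I with Gaussian inputs.
samplers = [lambda r: r.gauss(5.0, 0.02),     # voltage V / volts
            lambda r: r.gauss(0.100, 0.001)]  # current I / amperes
y, u, ci = propagate(lambda v, i: v / i, samplers)
```

Unlike the GUM's law-of-propagation framework, nothing here assumes the output distribution is Gaussian, which is why the coverage interval comes from the empirical quantiles rather than a coverage factor.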
A comparison of generalized hybrid Monte Carlo methods with and without momentum flip
Akhmatskaya, Elena; Bou-Rabee, Nawaf; Reich, Sebastian
2009-04-01
The generalized hybrid Monte Carlo (GHMC) method combines Metropolis corrected constant energy simulations with a partial random refreshment step in the particle momenta. The standard detailed balance condition requires that momenta are negated upon rejection of a molecular dynamics proposal step. The implication is a trajectory reversal upon rejection, which is undesirable when interpreting GHMC as thermostated molecular dynamics. We show that a modified detailed balance condition can be used to implement GHMC without momentum flips. The same modification can be applied to the generalized shadow hybrid Monte Carlo (GSHMC) method. Numerical results indicate that GHMC/GSHMC implementations with momentum flip display a favorable behavior in terms of sampling efficiency, i.e., the traditional implementations with momentum flip achieve a higher acceptance rate and faster decorrelation of Monte Carlo samples. The difference is more pronounced for GHMC. We also numerically investigate the behavior of the GHMC method as a Langevin-type thermostat. We find that the GHMC method without momentum flip interferes less with the underlying stochastic molecular dynamics in terms of autocorrelation functions, and is therefore to be preferred over the GHMC method with momentum flip. The same finding applies to GSHMC.
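A minimal sketch of a GHMC step for a one-dimensional standard-normal target may make the momentum-flip mechanics concrete; the harmonic potential, step size, trajectory length and refreshment angle below are illustrative choices, not parameters from the paper.

```python
import math
import random
import statistics

def ghmc_step(q, p, eps=0.2, n_leap=5, phi=0.3, flip_on_reject=True):
    """One GHMC step for U(q) = q^2/2 (standard-normal target): partial
    momentum refreshment, a leapfrog trajectory, and a Metropolis test
    that negates the momentum on rejection."""
    # Partial refreshment: mix the old momentum with fresh Gaussian noise.
    p = math.cos(phi) * p + math.sin(phi) * random.gauss(0.0, 1.0)
    q_new, p_new = q, p
    # Leapfrog integration of Hamilton's equations (here grad U(q) = q).
    p_new -= 0.5 * eps * q_new
    for step in range(n_leap):
        q_new += eps * p_new
        p_new -= (eps if step < n_leap - 1 else 0.5 * eps) * q_new
    h_old = 0.5 * (q * q + p * p)
    h_new = 0.5 * (q_new * q_new + p_new * p_new)
    if random.random() < math.exp(min(0.0, h_old - h_new)):
        return q_new, p_new, True
    return q, (-p if flip_on_reject else p), False  # rejection: flip momentum

random.seed(2)
q, p, n_accept, samples = 0.0, 0.0, 0, []
for _ in range(20_000):
    q, p, accepted = ghmc_step(q, p)
    n_accept += accepted
    samples.append(q)
q_mean = statistics.fmean(samples)
q_var = statistics.pvariance(samples)
```

With the flip, a rejected trajectory reverses direction, which is what the modified detailed balance condition of the paper is designed to avoid.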
A comparison of the Monte Carlo and the flux gradient method for atmospheric diffusion
Lange, R.
1990-05-01
In order to model the dispersal of atmospheric pollutants in the planetary boundary layer, various methods of parameterizing turbulent diffusion have been employed. The purpose of this paper is to use a three-dimensional particle-in-cell transport and diffusion model to compare the Markov chain (Monte Carlo) method of statistical particle diffusion with the deterministic flux gradient (K-theory) method. The two methods are heavily used in the study of atmospheric diffusion under complex conditions, with the Monte Carlo method gaining in popularity partly because of its more direct application of turbulence parameters. The basis of comparison is a data set from night-time drainage flow tracer experiments performed by the US Department of Energy Atmospheric Studies in Complex Terrain (ASCOT) program at the Geysers geothermal region in northern California. The Atmospheric Diffusion Particle-In-Cell (ADPIC) model used is the main model in the Lawrence Livermore National Laboratory emergency response program: Atmospheric Release Advisory Capability (ARAC). As a particle model, it can simulate diffusion in both the flux gradient and Monte Carlo modes. 9 refs., 6 figs.
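In the idealized case of homogeneous, stationary turbulence the two parameterizations agree: a Monte Carlo particle model whose independent random steps match the eddy diffusivity K reproduces the K-theory plume spread var(x) = 2Kt. The sketch below (with arbitrary parameter values, far simpler than the ADPIC model) illustrates this correspondence.

```python
import random
import statistics

def random_walk_plume(n_particles=5000, k_diff=10.0, dt=1.0, n_steps=100):
    """Markov-chain (Monte Carlo) particle diffusion: each particle takes
    independent Gaussian steps whose variance matches the eddy diffusivity K,
    so the plume spread should reproduce the K-theory result var = 2*K*t."""
    positions = []
    for _ in range(n_particles):
        x = 0.0
        for _ in range(n_steps):
            x += random.gauss(0.0, (2.0 * k_diff * dt) ** 0.5)
        positions.append(x)
    return statistics.pvariance(positions)

random.seed(3)
var = random_walk_plume()
expected = 2.0 * 10.0 * 100.0  # 2*K*T for K = 10, T = 100
```

Under complex terrain and drainage flows, as in the ASCOT experiments, the step statistics become inhomogeneous and the two methods can diverge, which is the point of the comparison.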
Lin, H.; Wu, D. S.; Wu, A. D.
2004-12-01
Surface, build-up and depth dose characteristics of a monoenergetic electron point source simulated by the Monte Carlo code MCNP4c for varying field size and SSD are extensively studied in this paper. MCNP4c (Monte Carlo N-Particle Transport Code System) has been widely used in clinical dose simulation for its versatility and powerful geometrical coding tools. A sharp increase in PDD is seen with the Monte Carlo modelling immediately at the surface, within the first 0.2 mm. This effect cannot easily be measured by experimental instruments for electron contamination, and may lead to a clinical underdosing of the basal cell layer, which is one of the most radiation-sensitive layers and the main target for skin carcinogenesis. A high percentage build-up dose for electron irradiation was shown. No significant effects on surface PDDs were modelled for SSD values from 95 cm to 125 cm. Three depths were studied in detail: 0.05 mm, the lower depth of the basal cell layer; 0.95 mm, the lower depth of the dermal layer; and 0.95 cm, a position within the subcutaneous tissue. Results showed that only small surface PDD differences were modelled for SSD variations from 95 cm to 125 cm and for square field sizes from between 5 and 10 cm up to 25 cm; when the field side length is smaller than this, the surface dose shows an increasing trend, by about 7% at 5 x 5 cm2. PMID:15712590
Monte Carlo method for photon heating using temperature-dependent optical properties.
Slade, Adam Broadbent; Aguilar, Guillermo
2015-02-01
The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to error in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems where the temperature will vary greatly, such as in the case of laser-thawing of frozen tissues. A numerical simulation was developed that utilizes the Monte Carlo method for photon transport to simulate the thermal response of a system with temperature-dependent optical and thermal properties. This was done by combining traditional Monte Carlo photon transport with a heat transfer simulation to provide a feedback loop that selects local properties based on current temperatures, for each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogeneous (but not isothermal) material. Validation of the simulation was done using comparisons to established Monte Carlo simulations with constant properties, and a comparison to the Beer-Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties vary with temperature. The difference in results between the variable-property and constant-property methods for the representative system of laser-heated silicon can become larger than 100 K. This simulation returns more accurate results of optical irradiation absorption in a material which undergoes a large change in temperature. This increased accuracy in simulated results leads to better thermal predictions in living tissues and can provide enhanced planning and improved experimental and procedural outcomes. PMID:25488656
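The segmented stepping and the Beer-Lambert check can be illustrated for a purely absorbing slab whose absorption coefficient varies along the depth, standing in for a temperature-dependent profile; the linear profile and all parameter values below are made-up examples, not the study's materials.

```python
import math
import random

def transmit_fraction(mu_a_of_x, length=1.0, n_segments=50, n_photons=100_000):
    """Segmented photon Monte Carlo through a purely absorbing slab whose
    absorption coefficient varies along x. Each photon is tested for
    absorption segment by segment, so the transmitted fraction should match
    the generalized Beer-Lambert law exp(-integral of mu_a dx)."""
    dx = length / n_segments
    mu = [mu_a_of_x((i + 0.5) * dx) for i in range(n_segments)]
    transmitted = 0
    for _ in range(n_photons):
        for mu_a in mu:
            if random.random() > math.exp(-mu_a * dx):  # absorbed in segment
                break
        else:
            transmitted += 1
    return transmitted / n_photons, math.exp(-sum(m * dx for m in mu))

random.seed(4)
# Hypothetical profile: absorption rises linearly across the slab.
mc, analytic = transmit_fraction(lambda x: 0.5 + 1.0 * x)
```

In the full method the local mu_a would be looked up from the current temperature field at each segment, which is what the feedback loop in the paper provides.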
Computing the principal eigenelements of some linear operators using a branching Monte Carlo method
Lejay, Antoine; Maire, Sylvain
2008-12-01
In earlier work, we developed a Monte Carlo method to compute the principal eigenvalue of linear operators, which was based on the simulation of exit times. In this paper, we generalize this approach by showing how to use a branching method to improve the efficacy of simulating large exit times for the purpose of computing eigenvalues. Furthermore, we show that this new method provides a natural estimation of the first eigenfunction of the adjoint operator. Numerical examples of this method are given for the Laplace operator and a homogeneous neutron transport operator.
High-order Path Integral Monte Carlo methods for solving strongly correlated fermion problems
NASA Astrophysics Data System (ADS)
Chin, Siu A.
2015-03-01
In solving for the ground state of a strongly correlated many-fermion system, the conventional second-order Path Integral Monte Carlo method is plagued with the sign problem. This is due to the large number of anti-symmetric free fermion propagators that are needed to extract the square of the ground state wave function at large imaginary time. In this work, I show that optimized fourth-order Path Integral Monte Carlo methods, which use no more than 5 free-fermion propagators, in conjunction with the use of the Hamiltonian energy estimator, can yield accurate ground state energies for quantum dots with up to 20 polarized electrons. The correlations are directly built-in and no explicit wave functions are needed. This work is supported by the Qatar National Research Fund NPRP GRANT #5-674-1-114.
Path-integral Monte Carlo method for the local Z2 Berry phase.
Motoyama, Yuichi; Todo, Synge
2013-02-01
We present a loop cluster algorithm Monte Carlo method for calculating the local Z(2) Berry phase of the quantum spin models. The Berry connection, which is given as the inner product of two ground states with different local twist angles, is expressed as a Monte Carlo average on the worldlines with fixed spin configurations at the imaginary-time boundaries. The "complex weight problem" caused by the local twist is solved by adopting the meron cluster algorithm. We present the results of simulation on the antiferromagnetic Heisenberg model on an out-of-phase bond-alternating ladder to demonstrate that our method successfully detects the change in the valence bond pattern at the quantum phase transition point. We also propose that the gauge-fixed local Berry connection can be an effective tool to estimate precisely the quantum critical point. PMID:23496453
A step beyond the Monte Carlo method in economics: Application of multivariate normal distribution
NASA Astrophysics Data System (ADS)
Kabaivanov, S.; Malechkova, A.; Marchev, A.; Milev, M.; Markovska, V.; Nikolova, K.
2015-11-01
In this paper we discuss the numerical algorithm of Milev-Tagliani [25] used for pricing of discrete double barrier options. The problem can be reduced to the accurate valuation of an n-dimensional path integral with the probability density function of a multivariate normal distribution. The efficient solution of this problem with the Milev-Tagliani algorithm is a step beyond the classical application of Monte Carlo for option pricing. We explore continuous and discrete monitoring of asset path pricing, compare the error of frequently applied quantitative methods such as the Monte Carlo method, and finally analyze the accuracy of the Milev-Tagliani algorithm by drawing on the research and results of Hong, S. Lee and T. Li [16].
Quantum World-line Monte Carlo Method with Non-binary Loops and Its Application
NASA Astrophysics Data System (ADS)
Harada, K.
A quantum world-line Monte Carlo method for highly symmetric quantum models is proposed. First, based on a representation of the partition function using the Matsubara formula, the principle of quantum world-line Monte Carlo methods is briefly outlined, and a new algorithm using non-binary loops is given for quantum models with high symmetry, such as SU(N). The algorithm is called the non-binary loop algorithm because of its non-binary loop updates. Second, one example of our numerical studies using the non-binary loop update is shown: the problem of the ground state of two-dimensional SU(N) antiferromagnets. Our numerical study confirms that the ground state in the small-N (N <= 4) case is a magnetically ordered Néel state, but the one in the large-N (N >= 5) case has no magnetic order and becomes a dimer state.
NASA Astrophysics Data System (ADS)
Jing, Hui; Li, Cong; Kuang, Bing; Huang, Meifa; Zhong, Yanru
2012-09-01
Measuring the bow height and chord length is a commonly adopted way of determining the diameter of large workpieces. In the process of computing the diameter of a large workpiece, measurement uncertainty is an important parameter and is always employed to evaluate the reliability of the measurement results. Therefore, it is essential to present reliable methods to evaluate the measurement uncertainty, especially in precise measurement. Because of the limitations of low convergence and unstable results of the Monte-Carlo (MC) method, the quasi-Monte-Carlo (QMC) method is used to estimate the measurement uncertainty. The QMC method is an improvement of the ordinary MC method which employs highly uniform quasi-random numbers to replace MC's pseudo-random numbers. In the process of evaluation, first, more homogeneous random numbers (quasi-random numbers) are generated based on the Halton sequence. Then these random numbers are transformed into random numbers of the desired distribution, which are used to simulate the measurement errors. By computing the simulation results, the measurement uncertainty can be obtained. An experiment of cylinder diameter measurement and its uncertainty evaluation are given. In the experiment, the guide to the expression of uncertainty in measurement method, the MC method, and the QMC method are validated. The result shows that the QMC method has a higher convergence rate and more stable evaluation results than the MC method. Therefore, the QMC method can be applied effectively to evaluate the measurement uncertainty.
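The generation step can be sketched as follows: the van der Corput sequence (the one-dimensional building block of the Halton sequence) supplies highly uniform points in (0, 1), which are then mapped to the desired distribution through its inverse CDF. This is a generic illustration of the technique, not the authors' implementation.

```python
import statistics

def van_der_corput(n, base=2):
    """First n points of the van der Corput low-discrepancy sequence:
    the digits of i in the given base, mirrored about the radix point."""
    seq = []
    for i in range(1, n + 1):
        f, r, x = 1.0, i, 0.0
        while r > 0:
            f /= base
            x += f * (r % base)
            r //= base
        seq.append(x)
    return seq

# Quasi-random standard-normal variates: push the low-discrepancy points
# through the inverse normal CDF (here n = 4095 = 2^12 - 1, so the points
# are the symmetric grid k/4096 and the sample mean cancels almost exactly).
nd = statistics.NormalDist()
points = van_der_corput(4095)
normals = [nd.inv_cdf(u) for u in points]
mean_est = statistics.fmean(normals)
```

Replacing the pseudo-random draws of an ordinary MC uncertainty evaluation with such points is what gives the QMC method its faster convergence.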
Monte Carlo Methods in Materials Science Based on FLUKA and ROOT
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor
2003-01-01
A comprehensive understanding of mitigation measures for space radiation protection necessarily involves the relevant fields of nuclear physics and particle transport modeling. One method of modeling the interaction of radiation traversing matter is Monte Carlo analysis, a subject that has been evolving since the very advent of nuclear reactors and particle accelerators in experimental physics. Countermeasures for radiation protection from neutrons near nuclear reactors, for example, were an early application, and Monte Carlo methods were quickly adapted to this general field of investigation. The project discussed here is concerned with taking the latest tools and technology in Monte Carlo analysis and adapting them to space applications such as radiation shielding design for spacecraft, as well as investigating how next-generation Monte Carlos can complement the existing analytical methods currently used by NASA. We have chosen to employ the Monte Carlo program known as FLUKA (a legacy acronym based on the German for FLUctuating KAscade) to simulate all of the particle transport, and the CERN-developed graphical-interface object-oriented analysis software called ROOT. One aspect of space radiation analysis for which Monte Carlo methods are particularly suited is the study of secondary radiation produced as albedo in the vicinity of the structural geometry involved. This broad goal of simulating space radiation transport through the relevant materials employing the FLUKA code necessarily requires the addition of the capability to simulate all heavy-ion interactions from 10 MeV/A up to the highest conceivable energies. For all energies above 3 GeV/A the Dual Parton Model (DPM) is currently used, although the possible improvement of the DPMJET event generator for energies 3-30 GeV/A is being considered. One of the major tasks still facing us is the provision for heavy ion interactions below 3 GeV/A.
The ROOT interface is being developed in conjunction with the CERN ALICE (A Large Ion Collisions Experiment) software team through an adaptation of their existing AliROOT (ALICE Using ROOT) architecture. In order to check our progress against actual data, we have chosen to simulate the ATIC14 (Advanced Thin Ionization Calorimeter) cosmic-ray astrophysics balloon payload as well as neutron fluences in the Mir spacecraft. This paper contains a summary of status of this project, and a roadmap to its successful completion.
Methods for Monte Carlo simulation of the exospheres of the moon and Mercury
NASA Technical Reports Server (NTRS)
Hodges, R. R., Jr.
1980-01-01
A general form of the integral equation of exospheric transport on moon-like bodies is derived in a form that permits arbitrary specification of time varying physical processes affecting atom creation and annihilation, atom-regolith collisions, adsorption and desorption, and nonplanetocentric acceleration. Because these processes usually defy analytic representation, the Monte Carlo method of solution of the transport equation, the only viable alternative, is described in detail, with separate discussions of the methods of specification of physical processes as probabilistic functions. Proof of the validity of the Monte Carlo exosphere simulation method is provided in the form of a comparison of analytic and Monte Carlo solutions to three classical, and analytically tractable, exosphere problems. One of the key phenomena in moonlike exosphere simulations, the distribution of velocities of the atoms leaving a regolith, depends mainly on the nature of collisions of free atoms with rocks. It is shown that on the moon and Mercury, elastic collisions of helium atoms with a Maxwellian distribution of vibrating, bound atoms produce a nearly Maxwellian distribution of helium velocities, despite the absence of speeds in excess of escape in the impinging helium velocity distribution.
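The Maxwellian velocity distribution appearing in the regolith-collision result can be sampled in the standard way, by drawing each Cartesian velocity component from a Gaussian. In the dimensionless sketch below (sigma = 1, purely illustrative, not the lunar parameters) the mean speed should approach sqrt(8/pi).

```python
import math
import random
import statistics

def maxwellian_speeds(n, sigma=1.0):
    """Sample speeds from a Maxwellian distribution by drawing each of the
    three Cartesian velocity components from a Gaussian, as is done for
    thermalized atoms leaving a regolith."""
    return [
        math.sqrt(sum(random.gauss(0.0, sigma) ** 2 for _ in range(3)))
        for _ in range(n)
    ]

random.seed(5)
speeds = maxwellian_speeds(50_000)
mean_speed = statistics.fmean(speeds)
expected = math.sqrt(8.0 / math.pi)  # mean speed for sigma = 1
```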
Inverse Monte Carlo method in a multilayered tissue model for diffuse reflectance spectroscopy
NASA Astrophysics Data System (ADS)
Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas
2012-04-01
Model based data analysis of diffuse reflectance spectroscopy data enables the estimation of optical and structural tissue parameters. The aim of this study was to present an inverse Monte Carlo method based on spectra from two source-detector distances (0.4 and 1.2 mm), using a multilayered tissue model. The tissue model variables include geometrical properties, light scattering properties, tissue chromophores such as melanin and hemoglobin, oxygen saturation and average vessel diameter. The method utilizes a small set of presimulated Monte Carlo data for combinations of different levels of epidermal thickness and tissue scattering. The path length distributions in the different layers are stored and the effect of the other parameters is added in the post-processing. The accuracy of the method was evaluated using Monte Carlo simulations of tissue-like models containing discrete blood vessels, evaluating blood tissue fraction and oxygenation. It was also compared to a homogeneous model. The multilayer model performed better than the homogeneous model and all tissue parameters significantly improved spectral fitting. Recorded in vivo spectra were fitted well at both distances, which we previously found was not possible with a homogeneous model. No absolute intensity calibration is needed and the algorithm is fast enough for real-time processing.
Lattice-switching Monte Carlo method for crystals of flexible molecules
NASA Astrophysics Data System (ADS)
Bridgwater, Sally; Quigley, David
2014-12-01
We discuss implementation of the lattice-switching Monte Carlo method (LSMC) as a binary sampling between two synchronized Markov chains exploring separated minima in the potential energy landscape. When expressed in this fashion, the generalization to more complex crystals is straightforward. We extend the LSMC method to a flexible model of linear alkanes, incorporating bond length and angle constraints. Within this model, we accurately locate a transition between two polymorphs of n-butane with increasing density, and suggest this as a benchmark problem for other free-energy methods.
NASA Astrophysics Data System (ADS)
Ebru Ermis, Elif; Celiktas, Cuneyt
2015-07-01
Calculations of gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of the FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen for the calculations. The calculated coefficients were also compared with the National Institute of Standards and Technology (NIST) values. The results obtained with this method were in close agreement with the NIST values. It was concluded from the study that the FLUKA MC method can be an alternative way to calculate the gamma-ray mass attenuation coefficients of detector materials.
NASA Astrophysics Data System (ADS)
Zhong, Zhaopeng; Talamo, Alberto; Gohar, Yousry
2013-07-01
The effective delayed neutron fraction βeff plays an important role in the kinetics and static analysis of reactor physics experiments. It is used as a reactivity unit referred to as the "dollar". Usually, it is obtained by computer simulation due to the difficulty in measuring it experimentally. In 1965, Keepin proposed a method, widely used in the literature, for the calculation of the effective delayed neutron fraction βeff. This method requires calculation of the adjoint neutron flux as a weighting function of the phase space inner products and is easy to implement by deterministic codes. With Monte Carlo codes, the solution of the adjoint neutron transport equation is much more difficult because of the continuous-energy treatment of nuclear data. Consequently, alternative methods, which do not require the explicit calculation of the adjoint neutron flux, have been proposed. In 1997, Bretscher introduced the k-ratio method for calculating the effective delayed neutron fraction; this method is based on calculating the multiplication factor of a nuclear reactor core with and without the contribution of delayed neutrons. The multiplication factor set by the delayed neutrons (the delayed multiplication factor) is obtained as the difference between the total and the prompt multiplication factors. Using Monte Carlo calculations, Bretscher evaluated βeff as the ratio between the delayed and total multiplication factors (therefore the method is often referred to as the k-ratio method). In the present work, the k-ratio method is applied by Monte Carlo (MCNPX) and deterministic (PARTISN) codes. In the latter case, the ENDF/B nuclear data library of the fuel isotopes (235U and 238U) has been processed by the NJOY code with and without the delayed neutron data to prepare multi-group WIMSD neutron libraries for the lattice physics code DRAGON, which was used to generate the PARTISN macroscopic cross sections.
More recently, Meulekamp and van der Marck in 2006 and Nauchi and Kameyama in 2005 proposed new methods that require only one Monte Carlo computer simulation for the effective delayed neutron fraction calculation, compared with the k-ratio method, which requires two criticality calculations. In this paper, the Meulekamp/Marck and Nauchi/Kameyama methods are applied for the first time by the MCNPX computer code, and the results obtained by all the different methods are compared.
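Once the two criticality calculations are done, the k-ratio estimate itself is a one-line computation; the multiplication factors below are made-up round numbers for illustration, not results from the paper.

```python
def beta_eff_k_ratio(k_total, k_prompt):
    """Bretscher's k-ratio estimate of the effective delayed neutron
    fraction: the delayed multiplication factor (total minus prompt)
    relative to the total multiplication factor."""
    return (k_total - k_prompt) / k_total

# Illustrative values: one run with delayed neutrons, one prompt-only run.
beta = beta_eff_k_ratio(k_total=1.00000, k_prompt=0.99300)
```

The single-simulation methods of Meulekamp/van der Marck and Nauchi/Kameyama avoid the second criticality run entirely, which is what motivates the comparison in the paper.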
Isospin-projected nuclear level densities by the shell model Monte Carlo method
Nakada, H.; Alhassid, Y.
2008-11-15
We have developed an efficient isospin projection method in the shell model Monte Carlo approach for isospin-conserving Hamiltonians. For isoscalar observables this method has the advantage of being exact sample by sample. It allows us to take into account the proper isospin dependence of the nuclear interaction, thus avoiding a sign problem that such an interaction introduces in unprojected calculations. We apply the method to calculate the isospin dependence of level densities in the complete pf+g9/2 shell. We find that isospin-dependent corrections to the total level density are particularly important for N ≈ Z nuclei.
NASA Astrophysics Data System (ADS)
Plotnikov, M. Yu.; Shkarupa, E. V.
2015-11-01
Presently, the direct simulation Monte Carlo (DSMC) method is widely used for solving rarefied gas dynamics problems. As applied to steady-state problems, a feature of this method is the use of dependent sample values of random variables for the calculation of the macroparameters of gas flows. A new combined approach to estimating the statistical error of the method is proposed that requires practically no additional computations, and it is applicable for any degree of probabilistic dependence of the sample values. Features of the proposed approach are analyzed theoretically and numerically. The approach is tested using the classical Fourier problem and the problem of supersonic flow of rarefied gas through a permeable obstacle.
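One standard way to handle the dependence of sample values when estimating a statistical error is the batch-means construction sketched below; it is shown here as generic background, not as the combined approach of the paper, and the AR(1) time series merely stands in for a correlated DSMC macroparameter record.

```python
import random
import statistics

def batch_means_stderr(samples, batch_size):
    """Estimate the standard error of the mean of (possibly correlated)
    samples from the spread of non-overlapping batch averages; batches much
    longer than the correlation time are approximately independent."""
    n_batches = len(samples) // batch_size
    means = [
        statistics.fmean(samples[i * batch_size:(i + 1) * batch_size])
        for i in range(n_batches)
    ]
    return statistics.stdev(means) / n_batches ** 0.5

random.seed(6)
# AR(1) surrogate for a correlated steady-state macroparameter time series.
phi, x, series = 0.9, 0.0, []
for _ in range(50_000):
    x = phi * x + random.gauss(0.0, 1.0)
    series.append(x)

naive = statistics.stdev(series) / len(series) ** 0.5  # assumes independence
batched = batch_means_stderr(series, batch_size=500)
```

For positively correlated samples the naive independent-sample formula underestimates the error, which is exactly the pitfall the paper's combined estimator addresses.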
Quasi-Monte Carlo methods for lattice systems: A first look
NASA Astrophysics Data System (ADS)
Jansen, K.; Leovey, H.; Ammon, A.; Griewank, A.; Müller-Preussker, M.
2014-03-01
We investigate the applicability of quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this behavior for certain problems to N^(-1), or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling. Catalogue identifier: AERJ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERJ_v1_0.html Program obtainable from: CPC Program Library, Queens University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence version 3 No. of lines in distributed program, including test data, etc.: 67759 No. of bytes in distributed program, including test data, etc.: 2165365 Distribution format: tar.gz Programming language: C and C++. Computer: PC. Operating system: Tested on GNU/Linux, should be portable to other operating systems with minimal effort. Has the code been vectorized or parallelized?: No RAM: The memory usage scales directly with the number of samples and dimensions: bytes used = (number of samples) x (number of dimensions) x 8 bytes (double precision). Classification: 4.13, 11.5, 23. External routines: FFTW 3 library (http://www.fftw.org) Nature of problem: Certain physical models formulated as a quantum field theory through the Feynman path integral, such as quantum chromodynamics, require a non-perturbative treatment of the path integral. The only known approach that achieves this is the lattice regularization. In this formulation the path integral is discretized to a finite, but very high dimensional integral.
So far only Monte Carlo methods, and especially Markov chain Monte Carlo methods like the Metropolis or the hybrid Monte Carlo algorithm, have been used to calculate approximate solutions of the path integral. These algorithms often lead to the undesired effect of autocorrelation in the samples of observables and suffer in any case from the slow asymptotic error behavior proportional to N^(-1/2), if N is the number of samples. Solution method: This program applies the quasi-Monte Carlo approach and the reweighting technique (respectively the weighted uniform sampling method) to generate uncorrelated samples of observables of the anharmonic oscillator with an improved asymptotic error behavior. Unusual features: The application of the quasi-Monte Carlo approach is quite revolutionary in the field of lattice field theories. Running time: The running time depends directly on the number of samples N and dimensions d. On modern computers a run with up to N = 2^16 = 65536 (including 9 replica runs) and d = 100 should not take much longer than one minute.
Prediction of excited state energies for molecular nitrogen using quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Fayton, Floyd A., Jr.; Gibson, Ainsley A.; Harkless, John A. W.
Using the quantum Monte Carlo (QMC) method, we estimated electronic excitation energies for four low-lying Franck-Condon states of dinitrogen. QMC trial function forms were examined, with single- and multideterminant wave functions derived from configuration interaction (CI) and complete active space self-consistent field (CASSCF) calculations. Variational Monte Carlo (VMC) and fixed-node diffusion Monte Carlo (FN-DMC) results compare favorably with TDHF, MRDCI, TDDFT, MR-CCSD, EOM-CCSD, and CASSCF, which demonstrates the accuracy of QMC for these excited electronic states. The CASSCF-constructed trial functions for QMC (QMC-CAS) and MR-CCSD deviate the least from the experimental values presented. The mean absolute deviation (MAD), i.e., the sum of the absolute differences between the calculated results and the experimental values divided by the total number of calculated results, is provided for each method. The least favorable MADs are obtained by the multideterminant and single-determinant CISD trial functions used for VMC calculations, at 1.67 and 1.32, respectively. The lowest three MADs are provided by the multideterminant CASSCF-VMC (0.23), the multideterminant CASSCF-DMC (0.14), and the MR-CCSD (0.13) calculations.
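The MAD figure quoted for each method is simple to reproduce once the calculated and experimental excitation energies are tabulated; the three values below are hypothetical placeholders for illustration, not the paper's data.

```python
def mean_absolute_deviation(calculated, experimental):
    """Mean absolute deviation between calculated and reference values:
    the sum of the absolute differences divided by the number of values."""
    return sum(abs(c - e) for c, e in zip(calculated, experimental)) / len(calculated)

# Hypothetical excitation energies, for illustration only.
mad = mean_absolute_deviation([7.5, 8.6, 9.0], [7.4, 8.5, 9.3])
```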
Visual improvement for bad handwriting based on Monte-Carlo method
NASA Astrophysics Data System (ADS)
Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua
2014-03-01
A visual improvement algorithm based on Monte Carlo simulation is proposed in this paper to enhance the visual effect of bad handwriting. The improvement process uses a well-designed typeface to optimize the bad handwriting image. In this process, a series of linear operators for image transformation is defined to transform the typeface image toward the handwriting image, and the specific parameters of the linear operators are estimated by the Monte Carlo method. Visual improvement experiments illustrate that the proposed algorithm can effectively enhance the visual effect of a handwriting image while maintaining the original handwriting features, such as tilt, stroke order and drawing direction. The proposed visual improvement algorithm has considerable potential for application on tablet computers and the mobile Internet to improve the user experience of handwriting.
Beyond the Born-Oppenheimer approximation with quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Tubman, Norm M.; Kylänpää, Ilkka; Hammes-Schiffer, Sharon; Ceperley, David M.
2014-10-01
In this work we develop tools that enable the study of nonadiabatic effects with variational and diffusion Monte Carlo methods. We introduce a highly accurate wave-function ansatz for electron-ion systems that can involve a combination of both clamped ions and quantum nuclei. We explicitly calculate the ground-state energies of H2, LiH, H2O, and FHF- using fixed-node quantum Monte Carlo with wave-function nodes that explicitly depend on the ion positions. The obtained energies implicitly include the effects arising from quantum nuclei and electron-nucleus coupling. We compare our results to the best theoretical and experimental results available and find excellent agreement.
Analysis of single Monte Carlo methods for prediction of reflectance from turbid media
Martinelli, Michele; Gardner, Adam; Cuccia, David; Hayakawa, Carole; Spanier, Jerome; Venugopalan, Vasan
2011-01-01
Starting from the radiative transport equation we derive the scaling relationships that enable a single Monte Carlo (MC) simulation to predict the spatially- and temporally-resolved reflectance from homogeneous semi-infinite media with arbitrary scattering and absorption coefficients. This derivation shows that a rigorous application of this single Monte Carlo (sMC) approach requires the rescaling to be done individually for each photon biography. We examine the accuracy of the sMC method when processing simulations on an individual photon basis and also demonstrate the use of adaptive binning and interpolation using non-uniform rational B-splines (NURBS) to achieve order of magnitude reductions in the relative error as compared to the use of uniform binning and linear interpolation. This improved implementation for sMC simulation serves as a fast and accurate solver to address both forward and inverse problems and is available for use at http://www.virtualphotonics.org/. PMID:21996904
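The core of the single-simulation idea is that a photon biography generated for one set of optical properties can be rescaled to another: a step length drawn from an exponential distribution with total attenuation mu_t, multiplied by mu_t/mu_t', follows the exponential distribution for mu_t'. The sketch below demonstrates only this rescaling identity, not the paper's full sMC reflectance machinery or its per-biography weighting.

```python
import random
import statistics

def rescale_paths(steps, mu_t_base, mu_t_new):
    """Single-Monte-Carlo-style rescaling: step lengths sampled with total
    attenuation mu_t_base are rescaled so that they follow the exponential
    step-length distribution of a medium with mu_t_new instead."""
    scale = mu_t_base / mu_t_new
    return [s * scale for s in steps]

random.seed(7)
mu_base, mu_new = 1.0, 4.0
base_steps = [random.expovariate(mu_base) for _ in range(100_000)]
rescaled = rescale_paths(base_steps, mu_base, mu_new)
mean_rescaled = statistics.fmean(rescaled)  # should approach 1/mu_new
```

Applying such relations biography by biography, rather than to binned averages, is the rigor requirement the derivation in the paper establishes.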
Bahreyni Toossi, Mohammad Taghi; Ghorbani, Mahdi; Mowlavi, Ali Asghar; Meigooni, Ali Soleimani
2012-01-01
Background Dosimetric characteristics of a high dose rate (HDR) GZP6 Co-60 brachytherapy source have been evaluated following American Association of Physicists in Medicine Task Group 43U1 (AAPM TG-43U1) recommendations for their clinical applications. Materials and methods MCNP-4C and MCNPX Monte Carlo codes were utilized to calculate the dose rate constant, two-dimensional (2D) dose distribution, radial dose function and 2D anisotropy function of the source. These parameters of this source are compared with the available data for the Ralstron 60Co and microSelectron 192Ir sources. In addition, a superimposition method was developed to extend the obtained results for GZP6 source No. 3 to the other GZP6 sources. Results The simulated value of the dose rate constant for the GZP6 source was 1.104 ± 0.03 cGy h-1 U-1. The graphical and tabulated radial dose function and 2D anisotropy function of this source are presented here. The results of these investigations show that the dosimetric parameters of the GZP6 source are comparable to those of the Ralstron source. While the dose rate constants of the two 60Co sources are similar to that of the microSelectron 192Ir source, there are differences between their radial dose functions and anisotropy functions. The radial dose function of the 192Ir source is less steep than those of both 60Co source models. In addition, the 60Co sources show a more isotropic dose distribution than the 192Ir source. Conclusions The superimposition method is applicable to produce dose distributions for other source arrangements from the dose distribution of a single source. The calculated dosimetric quantities of this new source can be introduced as input data to the GZP6 treatment planning system (TPS) and used to validate the performance of the TPS. PMID:23077455
Temperature-extrapolation method for Implicit Monte Carlo - Radiation hydrodynamics calculations
McClarren, R. G.; Urbatsch, T. J.
2013-07-01
We present a method for implementing temperature extrapolation in Implicit Monte Carlo solutions to radiation hydrodynamics problems. The method is based on a BDF-2 type integration to estimate a change in material temperature over a time step. We present results for radiation only problems in an infinite medium and for a 2-D Cartesian hohlraum problem. Additionally, radiation hydrodynamics simulations are presented for an RZ hohlraum problem and a related 3D problem. Our results indicate that improvements in noise and general behavior are possible. We present considerations for future investigations and implementations. (authors)
NASA Astrophysics Data System (ADS)
Sharma, Anupam; Long, Lyle N.
2004-10-01
A particle approach using the Direct Simulation Monte Carlo (DSMC) method is used to solve the problem of blast impact with structures. A novel approach to modelling the solid boundary condition for particle methods is presented. The solver is validated against an analytical solution of the Riemann shock tube problem and against experiments on the interaction of a planar shock with a square cavity. Blast impact simulations are performed for two model shapes, a box and an I-shaped beam, assuming that the solid body does not deform. The solver uses a domain decomposition technique to run in parallel. The parallel performance of the solver on two Beowulf clusters is also presented.
Markov chain Monte Carlo methods for statistical analysis of RF photonic devices.
Piels, Molly; Zibar, Darko
2016-02-01
The microwave reflection coefficient is commonly used to characterize the impedance of high-speed optoelectronic devices. Error and uncertainty in equivalent circuit parameters measured using this data are systematically evaluated. The commonly used nonlinear least-squares method for estimating uncertainty is shown to give unsatisfactory and incorrect results due to the nonlinear relationship between the circuit parameters and the measured data. Markov chain Monte Carlo methods are shown to provide superior results, both for individual devices and for assessing within-die variation. PMID:26906783
A high order method for orbital conjunctions analysis: Monte Carlo collision probability computation
NASA Astrophysics Data System (ADS)
Morselli, Alessandro; Armellin, Roberto; Di Lizia, Pierluigi; Bernelli Zazzera, Franco
2015-01-01
Three methods for the computation of the probability of collision between two space objects are presented. These methods are based on the high order Taylor expansion of the time of closest approach (TCA) and distance of closest approach (DCA) of the two orbiting objects with respect to their initial conditions. The identification of close approaches is first addressed using the nominal objects states. When a close approach is identified, the dependence of the TCA and DCA on the uncertainties in the initial states is efficiently computed with differential algebra (DA) techniques. In the first method the collision probability is estimated via fast DA-based Monte Carlo simulation, in which, for each pair of virtual objects, the DCA is obtained via the fast evaluation of its Taylor expansion. The second and the third methods are the DA version of Line Sampling and Subset Simulation algorithms, respectively. These are introduced to further improve the efficiency and accuracy of Monte Carlo collision probability computation, in particular for cases of very low collision probabilities. The performances of the methods are assessed on orbital conjunctions occurring in different orbital regimes and dynamical models. The probabilities obtained and the associated computational times are compared against standard (i.e., not DA-based) versions of the algorithms and analytical methods. The dependence of the collision probability on the initial orbital state covariance is investigated as well.
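A plain (non-DA) Monte Carlo baseline of the kind these methods accelerate can be sketched as follows, assuming straight-line relative motion near the encounter and isotropic Gaussian position errors on each object. All names and parameter choices are illustrative, not the paper's models.

```python
import math
import random

def collision_probability(r1, v1, r2, v2, sigma, hard_radius, n=20000, seed=7):
    """Monte Carlo baseline: perturb the relative position with isotropic
    Gaussian errors (std dev sigma per axis and per object), compute the
    straight-line distance of closest approach (DCA) for each virtual
    pair, and count the fraction below the combined hard-body radius."""
    rng = random.Random(seed)
    dv = [a - b for a, b in zip(v1, v2)]          # relative velocity
    dv2 = sum(c * c for c in dv)
    hits = 0
    for _ in range(n):
        dr = [a - b + rng.gauss(0.0, sigma) - rng.gauss(0.0, sigma)
              for a, b in zip(r1, r2)]            # perturbed relative position
        t = -sum(a * b for a, b in zip(dr, dv)) / dv2   # time of closest approach
        dca = math.sqrt(sum((a + t * b) ** 2 for a, b in zip(dr, dv)))
        hits += dca < hard_radius
    return hits / n
```

The DA-based variants in the paper replace the inner propagation with a fast Taylor-polynomial evaluation, and Line Sampling / Subset Simulation replace the naive hit counting for very low probabilities.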
A method based on Monte Carlo simulation for the determination of the G(E) function.
Chen, Wei; Feng, Tiancheng; Liu, Jun; Su, Chuanying; Tian, Yanjie
2015-02-01
The G(E) function method is a spectrometric method for exposure dose estimation; this paper describes a method based on the Monte Carlo method to determine the G(E) function of a 4″ × 4″ × 16″ NaI(Tl) detector. Simulated spectra of various monoenergetic gamma rays in the region of 40-3200 keV, and the corresponding deposited energy in an air ball in the energy region of the full-energy peak, were obtained using the Monte Carlo N-Particle Transport Code. The absorbed dose rate in air was obtained from the deposited energy and divided by the counts of the corresponding full-energy peak to get the G(E) function value at energy E in the spectra. The curve-fitting software 1stOpt was used to determine the coefficients of the G(E) function. Experimental results show that the calculated dose rates using the G(E) function determined by the authors' method agree well with the values obtained by an ionisation chamber, with a maximum deviation of 6.31%. PMID:24795395
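Once the G(E) function is determined, applying it is a channel-by-channel weighting of the measured pulse-height spectrum; a minimal sketch (names illustrative, with G(E) supplied as a precomputed per-channel table):

```python
def dose_rate_from_spectrum(count_rates, g_values):
    """Spectrum-dose conversion: weight the count rate in each energy
    channel by the G(E) operator value for that channel and sum, giving
    the dose rate for an arbitrary incident spectrum."""
    if len(count_rates) != len(g_values):
        raise ValueError("spectrum and G(E) table must align channel by channel")
    return sum(n * g for n, g in zip(count_rates, g_values))
```

The hard part, which the paper addresses with Monte Carlo simulation, is obtaining the g_values table itself; the evaluation above is then cheap enough to run in real time on the detector.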
NASA Astrophysics Data System (ADS)
Zhang, G.; Lu, D.; Webster, C.
2014-12-01
The rational management of oil and gas reservoirs requires an understanding of their response to existing and planned schemes of exploitation and operation. Such understanding requires analyzing and quantifying the influence of the subsurface uncertainties on predictions of oil and gas production. As the subsurface properties are typically heterogeneous, leading to a large number of model parameters, the dimension-independent Monte Carlo (MC) method is usually used for uncertainty quantification (UQ). Recently, multilevel Monte Carlo (MLMC) methods were proposed as a variance reduction technique to improve the computational efficiency of MC methods in UQ. In this effort, we propose a new acceleration approach for the MLMC method to further reduce the total computational cost by exploiting model hierarchies. Specifically, for each model simulation on a newly added level of MLMC, we take advantage of an approximation of the model outputs constructed from simulations on previous levels to provide better initial states for the new simulations, which improves efficiency by, e.g., reducing the number of iterations in linear system solves or the number of needed time steps. This is achieved by using mesh-free interpolation methods, such as Shepard interpolation and radial basis approximation. Our approach is applied to a highly heterogeneous reservoir model from the tenth SPE project. The results indicate that the accelerated MLMC can achieve the same accuracy as standard MLMC with a significantly reduced cost.
Mcclarren, Ryan G; Urbatsch, Todd J
2008-01-01
In this note we develop a robust implicit Monte Carlo (IMC) algorithm based on more accurately updating the linearized equilibrium radiation energy density. The method does not introduce oscillations in the solution and has the same limit as {Delta}t{yields}{infinity} as the standard Fleck and Cummings IMC method. Moreover, the approach we introduce can be trivially added to current implementations of IMC by changing the definition of the Fleck factor. Using this new method we develop an adaptive scheme that uses either standard IMC or the modified method basing the adaptation on a zero-dimensional problem solved in each cell. Numerical results demonstrate that the new method alleviates both the nonphysical overheating that occurs in standard IMC when the time step is large and significantly diminishes the statistical noise in the solution.
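For reference, the standard Fleck and Cummings factor has the form below; the note's modification amounts to redefining this factor via a more accurate update of the linearized equilibrium radiation energy density. This sketch shows only the standard definition, with symbols in the usual IMC notation, not the authors' modified one.

```python
def fleck_factor(beta, c, dt, sigma_planck, alpha=1.0):
    """Standard IMC Fleck factor f = 1 / (1 + alpha*beta*c*dt*sigma_P).

    alpha is the user-chosen implicitness parameter, beta = 4aT^3/(rho*c_v)
    relates radiation and material energy densities, c is the speed of
    light, and sigma_planck is the Planck-mean opacity. f interpolates
    between fully explicit (f=1 at dt=0) and absorption-dominated (f->0
    as dt -> infinity) behavior.
    """
    return 1.0 / (1.0 + alpha * beta * c * dt * sigma_planck)
```

Because the modified method only changes how this scalar is defined, it can be dropped into an existing IMC code without restructuring the particle transport loop, which is the implementation advantage the note highlights.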
Parsons, Tom
2008-01-01
Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques [e.g., Ellsworth et al., 1999]. In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means [e.g., NIST/SEMATECH, 2006]. For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
Parsons, T.
2008-01-01
Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques (e.g., Ellsworth et al., 1999). In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means (e.g., NIST/SEMATECH, 2006). For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
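The core idea, drawing wide ranges of candidate recurrence parameters and ranking them by how well they explain a short interval series, can be sketched with a lognormal recurrence model. The paper considers several PDF families; the density and parameter ranges below are illustrative only.

```python
import math
import random

def rank_recurrence_params(intervals, n_draws=10000, seed=1):
    """Randomly draw (mean recurrence, coefficient of variation) pairs,
    score each by the lognormal log-likelihood of the observed
    inter-event times, and return parameter sets ranked best-first.
    The draw ranges are hypothetical, chosen only to bracket the data."""
    rng = random.Random(seed)
    scored = []
    for _ in range(n_draws):
        mean = rng.uniform(0.5 * min(intervals), 2.0 * max(intervals))
        cov = rng.uniform(0.1, 1.0)           # coefficient of variation
        sigma2 = math.log(1.0 + cov ** 2)     # lognormal shape from (mean, cov)
        mu = math.log(mean) - 0.5 * sigma2
        loglike = sum(
            -math.log(t) - 0.5 * math.log(2.0 * math.pi * sigma2)
            - (math.log(t) - mu) ** 2 / (2.0 * sigma2)
            for t in intervals)
        scored.append((loglike, mean, cov))
    scored.sort(reverse=True)
    return scored
```

The full ranked list, rather than just the best draw, is what feeds the uncertainty assessment: the spread of high-likelihood (mean, cov) pairs shows how poorly a short series constrains the recurrence PDF.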
Inverse trishear modeling of bedding dip data using Markov chain Monte Carlo methods
NASA Astrophysics Data System (ADS)
Oakley, David O. S.; Fisher, Donald M.
2015-11-01
We present a method for fitting trishear models to surface profile data, by restoring bedding dip data and inverting for model parameters using a Markov chain Monte Carlo method. Trishear is a widely-used kinematic model for fault-propagation folds. It lacks an analytic solution, but a variety of data inversion techniques can be used to fit trishear models to data. Where the geometry of an entire folded bed is known, models can be tested by restoring the bed to its pre-folding orientation. When data include bedding attitudes, however, previous approaches have relied on computationally-intensive forward modeling. This paper presents an equation for the rate of change of dip in the trishear zone, which can be used to restore dips directly to their pre-folding values. The resulting error can be used to calculate a probability for each model, which allows solution by Markov chain Monte Carlo methods and inversion of datasets that combine dips and contact locations. These methods are tested using synthetic and real datasets. Results are used to approximate multimodal probability density functions and to estimate uncertainty in model parameters. The relative value of dips and contacts in constraining parameters and the effects of uncertainty in the data are investigated.
Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian Inverse Problems
NASA Astrophysics Data System (ADS)
Lan, Shiwei; Bui-Thanh, Tan; Christie, Mike; Girolami, Mark
2016-03-01
The Bayesian approach to Inverse Problems relies predominantly on Markov Chain Monte Carlo methods for posterior inference. The typical nonlinear concentration of posterior measure observed in many such Inverse Problems presents severe challenges to existing simulation based inference methods. Motivated by these challenges, the exploitation of local geometric information in the form of covariant gradients, metric tensors, Levi-Civita connections, and local geodesic flows has been introduced to more effectively locally explore the configuration space of the posterior measure. However, obtaining such geometric quantities usually requires extensive computational effort, which, despite their effectiveness, limits the applicability of these geometrically-based Monte Carlo methods. In this paper we explore one way to address this issue by the construction of an emulator of the model from which all geometric objects can be obtained in a much more computationally feasible manner. The main concept is to approximate the geometric quantities using a Gaussian Process emulator which is conditioned on a carefully chosen design set of configuration points, which also determines the quality of the emulator. To this end we propose the use of statistical experiment design methods to refine a potentially arbitrarily initialized design online without destroying the convergence of the resulting Markov chain to the desired invariant measure. The practical examples considered in this paper provide a demonstration of the significant improvement possible in terms of computational loading, suggesting this is a promising avenue of further development.
Green, P. L.; Worden, K.
2015-01-01
In this paper, the authors outline the general principles behind an approach to Bayesian system identification and highlight the benefits of adopting a Bayesian framework when attempting to identify models of nonlinear dynamical systems in the presence of uncertainty. It is then described how, through a summary of some key algorithms, many of the potential difficulties associated with a Bayesian approach can be overcome through the use of Markov chain Monte Carlo (MCMC) methods. The paper concludes with a case study, where an MCMC algorithm is used to facilitate the Bayesian system identification of a nonlinear dynamical system from experimentally observed acceleration time histories. PMID:26303916
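The workhorse behind such studies is the random-walk Metropolis sampler; a minimal, self-contained sketch targeting a toy one-parameter posterior is given below. The algorithms surveyed in the paper (e.g. for full nonlinear system identification) are more elaborate, but share this accept/reject core.

```python
import math
import random

def metropolis(log_post, x0, n_steps=20000, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step**2) and accept
    with probability min(1, post(x')/post(x)), computed in log space.
    Returns the full chain (including burn-in)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_post(cand)
        if math.log(rng.random()) < lp_cand - lp:   # Metropolis acceptance
            x, lp = cand, lp_cand
        chain.append(x)
    return chain

# Toy target: a standard normal posterior, started well away from the mode
chain = metropolis(lambda x: -0.5 * x * x, x0=3.0)
```

In a real identification problem, log_post would evaluate the model against observed response data (e.g. acceleration time histories) plus a prior over the physical parameters; only that function changes.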
A numerical study of rays in random media. [Monte Carlo method simulation
NASA Technical Reports Server (NTRS)
Youakim, M. Y.; Liu, C. H.; Yeh, K. C.
1973-01-01
Statistics of electromagnetic rays in a random medium are studied numerically by the Monte Carlo method. Two dimensional random surfaces with prescribed correlation functions are used to simulate the random media. Rays are then traced in these sample media. Statistics of the ray properties such as the ray positions and directions are computed. Histograms showing the distributions of the ray positions and directions at different points along the ray path as well as at given points in space are given. The numerical experiment is repeated for different cases corresponding to weakly and strongly random media with isotropic and anisotropic irregularities. Results are compared with those derived from theoretical investigations whenever possible.
Refinement of overlapping local/global iteration method based on Monte Carlo/p-CMFD calculations
Jo, Y.; Yun, S.; Cho, N. Z.
2013-07-01
In this paper, the overlapping local/global (OLG) iteration method based on Monte Carlo/p-CMFD calculations is refined in two aspects. One is the consistent use of estimators to generate homogenized scattering cross sections. Another is that the incident or exiting angular interval is divided into multi-angular bins to modulate albedo boundary conditions for local problems. Numerical tests show that, compared to the one angle bin case in a previous study, the four angle bin case shows significantly improved results. (authors)
Stability of Staggered Flux State in d-p Model Studied Using Variational Monte Carlo Method
NASA Astrophysics Data System (ADS)
Tamura, Shun; Yokoyama, Hisatoshi
The stability of a staggered flux (SF) or d-density wave state is studied in a d-p model using a variational Monte Carlo (VMC) method. This state possesses a local circular current and possibly causes the pseudogap phase in high-Tc cuprate superconductors. We introduce into the trial function a configuration dependent phase factor, which was recently shown to be indispensable to stabilize current-carrying states. We pay attention to the energy gain as a function of adjacent O-O hopping
Simulation of light-field camera imaging based on ray splitting Monte Carlo method
NASA Astrophysics Data System (ADS)
Liu, Bin; Yuan, Yuan; Li, Sai; Shuai, Yong; Tan, He-Ping
2015-11-01
As microlens technology matures, studies of structural design and reconstruction algorithm optimization for light-field cameras are increasing. However, few of these studies address numerical physical simulation of the camera, and it is difficult to track lighting technology for forward simulations because of its low efficiency. In this paper, we develop a Monte Carlo method (MCM) based on ray splitting and build a physical model of a light-field camera with a microlens array to simulate its imaging and refocusing processes. The model enables simulation of different imaging modalities, and will be useful for camera structural design and error analysis system construction.
Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy
NASA Astrophysics Data System (ADS)
Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui
2014-06-01
The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application to clinical accurate radiotherapy is hindered by its slow convergence and long computation time. In MC dose calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to enhance the calculation speed of the MC method for electron-photon transport with high precision, and ultimately to reduce the accurate radiotherapy dose calculation time on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for electron-photon coupled transport was presented, with focus on two aspects: firstly, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed was increased with only a slight reduction in calculation accuracy; secondly, a variety of MC acceleration techniques were used, for example, reusing information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the MC convergence rate. The fast MC method was tested on a number of simple physical models and on clinical cases including nasopharyngeal carcinoma, peripheral lung tumor and cervical carcinoma. The results show that the fast MC method for electron-photon transport was fast enough to meet the requirement of clinical accurate radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System ARTS as an MC dose verification module.
Figueira, C; Di Maria, S; Baptista, M; Mendes, M; Madeira, P; Vaz, P
2015-07-01
Computed tomography (CT) is one of the most used techniques in medical diagnosis, and its use has become one of the main sources of exposure of the population to ionising radiation. This work concentrates on paediatric patients, since children exhibit higher radiosensitivity than adults. Nowadays, patient doses are estimated using two standard CT dose index (CTDI) phantoms as a reference to calculate CTDI volume (CTDI vol) values. This study aims at improving knowledge about the radiation exposure of children and better assessing the accuracy of the CTDI vol method. The effectiveness of the CTDI vol method for patient dose estimation was investigated through a sensitivity study, taking into account the doses obtained by three methods: measured CTDI vol, CTDI vol values simulated with the Monte Carlo (MC) code MCNPX, and the recently proposed Size-Specific Dose Estimate (SSDE) method. In order to assess organ doses, MC simulations were executed with paediatric voxel phantoms. PMID:25883302
A steady-state convergence detection method for Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Karchani, Abolfazl; Ejtehadi, Omid; Myong, Rho Shin
2014-12-01
In the direct simulation Monte Carlo (DSMC) method, exclusion of microscopic data sampled in the unsteady phase can accelerate convergence and lead to more accurate results in steady-state problems. In this study, a new method for detection of the steady-state onset, called Probabilistic Automatic Reset Sampling (PARS), is introduced. The new method can detect the steady state automatically and reset sampling once statistics-based reset criteria are satisfied. The method is simple and does not need any user-specified inputs. The simulation results show that the proposed strategy works well even in conditions with a constant number of particles inside the domain, which was the main drawback of previous methods.
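A generic windowed criterion illustrates the kind of automatic detection involved. Note this two-window mean comparison is only an illustrative stand-in; the paper's PARS reset criterion is its own statistics-based test, and the tolerance and window length here are arbitrary.

```python
import math

def detect_steady_state(signal, window=50, tol=0.01):
    """Return the index at which sampling could be reset: the first point
    where the means of two consecutive windows of a monitored quantity
    agree to within relative tolerance tol. Returns None if the signal
    never settles within the available samples."""
    for i in range(2 * window, len(signal) + 1):
        a = sum(signal[i - 2 * window: i - window]) / window
        b = sum(signal[i - window: i]) / window
        if a != 0.0 and abs(b - a) / abs(a) < tol:
            return i - window      # discard everything before this index
    return None

# Exponential relaxation toward a steady value of 1.0, as a stand-in for a
# monitored flow quantity in a DSMC run
signal = [1.0 - math.exp(-t / 20.0) for t in range(400)]
onset = detect_steady_state(signal)
```

Resetting the sample accumulators at the detected onset is what removes the transient bias from the time-averaged outputs.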
NASA Astrophysics Data System (ADS)
Iakovidis, S.; Apostolidis, C.; Samaras, T.
2015-04-01
The objective of the present work is the application of the Monte Carlo method of GUM Supplement 1 (GUM S1) for evaluating uncertainty in electromagnetic field measurements and the comparison of the results with the ones obtained using the 'standard' method (GUM). In particular, the two methods are applied in order to evaluate the field measurement uncertainty using a frequency-selective radiation meter and the Total Exposure Quotient (TEQ) uncertainty. Comparative results are presented in order to highlight cases where GUM S1 results deviate significantly from the ones obtained using GUM, such as the presence of a non-linear mathematical model connecting the inputs with the output quantity (case of the TEQ model) or the presence of a dominant non-normal distribution of an input quantity (case of U-shaped mismatch uncertainty). The deviation of the results obtained from the two methods can even lead to different decisions regarding conformance with the exposure reference levels.
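The contrast between the two approaches is easiest to see in code: GUM Supplement 1 propagates whole distributions by sampling rather than by first-order sensitivity coefficients, so non-linear models and U-shaped inputs are handled naturally. The measurement model and distributions below are illustrative, not the paper's.

```python
import math
import random

def gum_s1_mc(model, input_samplers, n=50000, seed=42):
    """GUM Supplement 1 style propagation: draw joint input samples, push
    them through the measurement model, and summarize the output with its
    mean, standard uncertainty and a 95 % coverage interval."""
    rng = random.Random(seed)
    ys = sorted(model(*(draw(rng) for draw in input_samplers)) for _ in range(n))
    mean = sum(ys) / n
    std = math.sqrt(sum((y - mean) ** 2 for y in ys) / (n - 1))
    return mean, std, (ys[int(0.025 * n)], ys[int(0.975 * n)])

# Example: two normal inputs plus a U-shaped (arcsine) mismatch term, the
# kind of input distribution for which GUM S1 and GUM diverge
mismatch = lambda rng: 0.05 * math.sin(2.0 * math.pi * rng.random())
mean, std, ci = gum_s1_mc(lambda a, b, m: a + b + m,
                          [lambda r: r.gauss(1.0, 0.1),
                           lambda r: r.gauss(2.0, 0.2),
                           mismatch])
```

For this additive model GUM and GUM S1 agree; replacing the model with a non-linear one (e.g. a ratio or a squared sum, as in the TEQ case) is where the sampled output distribution starts to differ from the GUM first-order result.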
Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method
NASA Astrophysics Data System (ADS)
Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin
2015-12-01
The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used and a linear approximation of fission source distributions during a macro-time step is introduced to provide delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, the results on a continuous-energy problem are presented.
Chen, I.J.; Gelbard, E.M.
1988-07-01
The narrow resonance (NR) approximation has, in the past, been applied to regular lattices with fairly simple unit cells. Attempts to use the NR approximation to deal with fine details of the lattice structure, or with complicated lattice cells, have generally been based on assumptions and approximations that are rather difficult to evaluate. A benchmark method is developed in which slowing down is still treated in the NR approximation, but spatial neutron transport is handled by Monte Carlo. This benchmark method is used to evaluate older methods for analyzing the double-heterogeneity effect in fast reactors, and for computing resonance integrals in the PROTEUS lattices. New methods for treating the PROTEUS lattices are proposed.
Hunter, J. L.; Sutton, T. M.
2013-07-01
In Monte Carlo iterated-fission-source calculations relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially-uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)
Geostatistical approach to bayesian inversion of geophysical data: Markov chain Monte Carlo method
NASA Astrophysics Data System (ADS)
Oh, S.-H.; Kwon, B.-D.
2001-08-01
This paper presents a practical and objective procedure for Bayesian inversion of geophysical data. We have applied geostatistical techniques such as kriging and simulation algorithms to acquire prior model information. Then the Markov chain Monte Carlo (MCMC) method is adopted to infer the characteristics of the marginal distributions of the model parameters. Geostatistics, which is based upon a variogram model, provides a means to analyze and interpret spatially distributed data. For Bayesian inversion of dipole-dipole resistivity data, we have used indicator kriging and simulation techniques to generate cumulative density functions from Schlumberger and well logging data, obtaining prior information by cokriging and simulations from covariogram models. Indicator approaches make it possible to incorporate non-parametric information into the probabilistic density function. We have also adopted the Markov chain Monte Carlo approach, based on Gibbs sampling, to examine the characteristics of the posterior probability density function and the marginal distributions of each parameter. The MCMC technique provides a robust result from which the information given by the indicator method, which is fundamentally non-parametric, is fully extracted. We have used the prior information proposed by the geostatistical method as the full conditional distribution for Gibbs sampling. To implement the Gibbs sampler, we have applied a modified Simulated Annealing (SA) algorithm which effectively searches the global model space. This scheme provides a more effective and robust global sampling algorithm compared to the previous study.
Multilevel Monte Carlo Method with Application to Uncertainty Quantification in Reservoir Simulation
NASA Astrophysics Data System (ADS)
Lu, D.; Zhang, G.; Webster, C.; Barbier, C. N.
2014-12-01
The rational management of oil and gas reservoirs requires understanding of their response to existing and planned schemes of exploitation and operation. Such understanding requires analyzing and quantifying the influence of the subsurface uncertainty on predictions of oil and gas production. As the subsurface properties are typically heterogeneous causing a large number of model parameters, the dimension independent Monte Carlo (MC) method is usually used for uncertainty quantification (UQ). However, the standard MC simulation is computationally expensive because a large number of model executions are required and each model execution is costly simulated on a fine scale spatial grid to ensure accuracy. This study describes a multilevel Monte Carlo (MLMC) method for UQ in reservoir simulation. MLMC is a variance reduction technique for the standard MC. It improves computational efficiency by conducting simulations on a geometric sequence of grids, a larger number of simulations on coarse grids and fewer simulations on fine grids. In this study, we applied the MLMC method to a highly heterogeneous reservoir model from the tenth SPE project. We estimated both the expectation and the probability distribution of oil productions to quantify the influence of subsurface uncertainty. The results indicate that MLMC can achieve the same accuracy as standard MC with a significantly reduced cost, e.g., about 80-90% and 70-90% computational savings in estimating expectations and approximating probability distributions, respectively.
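The telescoping estimator at the heart of MLMC can be sketched in a few lines. The toy "model" below just adds a level-dependent discretization bias to an exact functional, which is enough to show the structure: many cheap samples on the coarse level, few expensive ones on the fine levels, with each correction term evaluated on the same random input at both levels so its variance is small. All names are illustrative.

```python
import random

def mlmc_estimate(payoff, n_samples, rng):
    """Multilevel MC: E[P_L] = E[P_0] + sum_{l>=1} E[P_l - P_{l-1}].

    payoff(omega, level) must evaluate the SAME random input omega at the
    fine and coarse levels so the correction terms have small variance;
    n_samples[l] is the sample count for level l (typically decreasing)."""
    estimate = 0.0
    for level, n in enumerate(n_samples):
        acc = 0.0
        for _ in range(n):
            omega = rng.random()
            fine = payoff(omega, level)
            coarse = payoff(omega, level - 1) if level > 0 else 0.0
            acc += fine - coarse
        estimate += acc / n
    return estimate

# Toy model: exact value omega**2 plus a discretization bias that halves
# with each refinement level (mimicking grid-coarseness error)
payoff = lambda omega, level: omega * omega + 0.1 * 0.5 ** level
est = mlmc_estimate(payoff, [4000, 1000, 250, 60], random.Random(3))
```

In the reservoir setting, payoff would run the flow simulator on a grid whose resolution is set by level; the cost savings reported in the abstract come from pushing most of the sampling onto the coarse grids.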
Dynamic Load Balancing for Petascale Quantum Monte Carlo Applications: The Alias Method
Sudheer, C. D.; Krishnan, S.; Srinivasan, Ashok; Kent, Paul R
2013-01-01
Diffusion Monte Carlo is the most accurate widely used Quantum Monte Carlo method for the electronic structure of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency and avoid accumulation of systematic errors on parallel machines. The load balancing step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near-identical computational tasks that require load balancing.
Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method
NASA Astrophysics Data System (ADS)
Sudheer, C. D.; Krishnan, S.; Srinivasan, A.; Kent, P. R. C.
2013-02-01
Diffusion Monte Carlo is a highly accurate Quantum Monte Carlo method for electronic structure calculations of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency on parallel machines. This step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm: a simple renumbering of the MPI ranks based on proximity and a space filling curve significantly improves the MPI Allgather performance. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near identical computational tasks that require load balancing.
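The load-balancing scheme takes its name from Walker's alias method for sampling a discrete distribution, whose table construction pairs each under-full ("deficit") slot with exactly one over-full ("surplus") slot; that pairing is the origin of the at-most-one-message property. A minimal sketch of the classic construction (not the paper's parallel variant):

```python
import random

def build_alias(weights):
    """Walker alias table: each slot keeps a probability `prob[i]` of
    returning itself and otherwise defers to a single `alias[i]`."""
    n = len(weights)
    total = sum(weights)
    prob = [w * n / total for w in weights]  # scaled so the mean is 1
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                  # s is topped up from exactly one donor
        prob[l] -= 1.0 - prob[s]      # l's surplus shrinks by what it gave
        (small if prob[l] < 1.0 else large).append(l)
    return prob, alias

def sample(prob, alias, u1, u2):
    """Draw one index in O(1) from two uniforms in [0, 1)."""
    i = int(u1 * len(prob))
    return i if u2 < prob[i] else alias[i]

rng = random.Random(1)
prob, alias = build_alias([1, 2, 3])
draw = sample(prob, alias, rng.random(), rng.random())
```

In the load-balancing reading, `weights` play the role of per-process walker counts and the single `alias[s]` donor corresponds to the single incoming message per process.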
Determination of the spatial response of neutron based analysers using a Monte Carlo based method
Tickner
2000-10-01
One of the principal advantages of using thermal neutron capture (TNC, also called prompt gamma neutron activation analysis or PGNAA) or neutron inelastic scattering (NIS) techniques for measuring elemental composition is the high penetrating power of both the incident neutrons and the resultant gamma-rays, which means that large sample volumes can be interrogated. Gauges based on these techniques are widely used in the mineral industry for on-line determination of the composition of bulk samples. However, attenuation of both neutrons and gamma-rays in the sample and geometric (source/detector distance) effects typically result in certain parts of the sample contributing more to the measured composition than others. In turn, this introduces errors in the determination of the composition of inhomogeneous samples. This paper discusses a combined Monte Carlo/analytical method for estimating the spatial response of a neutron gauge. Neutron propagation is handled using a Monte Carlo technique which allows an arbitrarily complex neutron source and gauge geometry to be specified. Gamma-ray production and detection is calculated analytically which leads to a dramatic increase in the efficiency of the method. As an example, the method is used to study ways of reducing the spatial sensitivity of on-belt composition measurements of cement raw meal. PMID:11003485
The applicability of certain Monte Carlo methods to the analysis of interacting polymers
Krapp, D.M. Jr.
1998-05-01
The authors consider polymers, modeled as self-avoiding walks with interactions on a hexagonal lattice, and examine the applicability of certain Monte Carlo methods for estimating their mean properties at equilibrium. Specifically, the authors use the pivoting algorithm of Madras and Sokal and Metropolis rejection to locate the phase transition, which is known to occur at {beta}{sub crit} {approx} 0.99, and to recalculate the known value of the critical exponent {nu} {approx} 0.58 of the system for {beta} = {beta}{sub crit}. Although the pivoting-Metropolis algorithm works well for short walks (N < 300), for larger N the Metropolis criterion combined with the self-avoidance constraint leads to an unacceptably small acceptance fraction. In addition, the algorithm becomes effectively non-ergodic, getting trapped in valleys whose centers are local energy minima in phase space, leading to convergence towards different values of {nu}. The authors use a variety of tools, e.g. entropy estimation and histograms, to improve the results for large N, but they are only of limited effectiveness. Their estimate of {beta}{sub crit} using smaller values of N is 1.01 {+-} 0.01, and the estimate for {nu} at this value of {beta} is 0.59 {+-} 0.005. They conclude that even a seemingly simple system and a Monte Carlo algorithm which satisfies, in principle, ergodicity and detailed balance conditions can in practice fail to sample phase space accurately and thus not allow accurate estimation of thermal averages. This should serve as a warning to those who use Monte Carlo methods in complicated polymer folding calculations. The structure of the phase space combined with the algorithm itself can lead to surprising behavior, and simply increasing the number of samples in the calculation does not necessarily lead to more accurate results.
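The pivot-plus-Metropolis combination the abstract describes can be illustrated compactly. This sketch uses a square lattice rather than the paper's hexagonal one, and a toy Boltzmann weight exp({beta} m) where m counts non-bonded nearest-neighbour contacts; both the small acceptance fraction and the self-avoidance rejection it mentions show up directly in `pivot_step`.

```python
import math
import random

# 90/180/270-degree lattice rotations about the origin
ROTATIONS = [
    lambda x, y: (-y, x),
    lambda x, y: (-x, -y),
    lambda x, y: (y, -x),
]

def contacts(walk):
    """Non-bonded nearest-neighbour contacts of a square-lattice walk."""
    occ = set(walk)
    pairs = 0
    for x, y in walk:
        pairs += (x + 1, y) in occ
        pairs += (x, y + 1) in occ
    return pairs - (len(walk) - 1)  # remove the bonded (consecutive) pairs

def pivot_step(walk, beta, rng):
    """One pivot move with Metropolis acceptance min(1, exp(beta*dm))."""
    k = rng.randrange(1, len(walk) - 1)
    rot = ROTATIONS[rng.randrange(3)]
    px, py = walk[k]
    tail = []
    for x, y in walk[k + 1:]:
        rx, ry = rot(x - px, y - py)
        tail.append((px + rx, py + ry))
    cand = walk[:k + 1] + tail
    if len(set(cand)) != len(cand):
        return walk  # self-avoidance violated: reject outright
    dm = contacts(cand) - contacts(walk)
    if dm >= 0 or rng.random() < math.exp(beta * dm):
        return cand
    return walk

rng = random.Random(4)
walk = [(i, 0) for i in range(12)]
for _ in range(500):
    walk = pivot_step(walk, 0.5, rng)
```

For large N, counting how often `pivot_step` returns its input unchanged reproduces the acceptance-fraction collapse the authors report.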
A Bayesian Monte Carlo Markov Chain Method for the Analysis of GPS Position Time Series
NASA Astrophysics Data System (ADS)
Olivares, German; Teferle, Norman
2013-04-01
Position time series from continuous GPS are an essential tool in many areas of the geosciences and are, for example, used to quantify long-term movements due to processes such as plate tectonics or glacial isostatic adjustment. It is now widely established that the stochastic properties of the time series do not follow a random behavior and this affects parameter estimates and associated uncertainties. Consequently, a comprehensive knowledge of the stochastic character of the position time series is crucial in order to obtain realistic error bounds and for this a number of methods have already been applied successfully. We present a new Bayesian Monte Carlo Markov Chain (MCMC) method to simultaneously estimate the model and the stochastic parameters of the noise in GPS position time series. This method provides a sample of the likelihood function and thereby, using Monte Carlo integration, all parameters and their uncertainties are estimated simultaneously. One advantage of the MCMC method is that the computational time increases linearly with the number of parameters, hence being very suitable for dealing with a high number of parameters. A second advantage is that the properties of the estimator used in this method do not depend on the stationarity of the time series. At least on a theoretical level, no other estimator has been shown to have this feature. Furthermore, the MCMC method provides a means to detect multi-modality of the parameter estimates. We present an evaluation of the new MCMC method through comparison with widely used optimization and empirical methods for the analysis of GPS position time series.
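The core of any such MCMC analysis is a random-walk Metropolis sampler over the model and noise parameters. The sketch below fits only a toy linear-trend model with white noise, whereas the abstract's whole point is that GPS noise is not white; it is meant to show the sampler mechanics, not the paper's noise model, and the step sizes and starting point are illustrative choices.

```python
import math
import random

def log_posterior(params, t, y):
    # Toy model: y = a + b*t + N(0, sigma^2) white noise, flat priors on
    # a and b and a 1/sigma prior on sigma > 0.
    a, b, sigma = params
    if sigma <= 0:
        return -math.inf
    rss = sum((yi - a - b * ti) ** 2 for ti, yi in zip(t, y))
    return -(len(y) + 1) * math.log(sigma) - rss / (2 * sigma * sigma)

def metropolis(t, y, start, steps, n_iter, rng):
    """Random-walk Metropolis: propose a Gaussian jump in all parameters,
    accept with probability min(1, posterior ratio)."""
    cur, cur_lp = list(start), log_posterior(start, t, y)
    chain = []
    for _ in range(n_iter):
        prop = [c + rng.gauss(0, s) for c, s in zip(cur, steps)]
        lp = log_posterior(prop, t, y)
        if lp - cur_lp > math.log(rng.random()):
            cur, cur_lp = prop, lp
        chain.append(list(cur))
    return chain

rng = random.Random(7)
t = [float(k) for k in range(50)]
y = [1.0 + 0.5 * tk + rng.gauss(0, 0.1) for tk in t]  # synthetic series
chain = metropolis(t, y, [1.0, 0.5, 0.2], [0.02, 0.001, 0.02], 20000, rng)
```

Posterior means and credible intervals then come from Monte Carlo integration over the chain, which is the simultaneous model-plus-noise estimation the abstract describes.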
Uncertainty analysis using Monte Carlo method in the measurement of phase by ESPI
Anguiano Morales, Marcelino; Martinez, Amalia; Rayas, J. A.; Cordero, Raul R.
2008-04-15
A method for simultaneously measuring whole-field in-plane displacements by using optical fibers, based on the dual-beam illumination principle of electronic speckle pattern interferometry (ESPI), is presented in this paper. A set of single-mode optical fibers and a beamsplitter are employed to split the laser beam into four beams of equal intensity. One pair of fibers is utilized to illuminate the sample in the horizontal plane, so it is sensitive only to horizontal in-plane displacement. Another pair of optical fibers is set to be sensitive only to vertical in-plane displacement. Each pair of optical fibers differs in length to avoid unwanted interference. By means of a Fourier-transform method of fringe-pattern analysis (Takeda method), we can obtain quantitative data on whole-field displacements. We found the uncertainty associated with the phases by means of a Monte Carlo-based technique.
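Monte Carlo uncertainty evaluation of this kind resamples every input quantity from its assumed distribution and reads the output spread directly. For illustration only, assume the in-plane displacement is obtained from the measured phase as u = {lambda}{phi}/(4{pi} sin {theta}), a hypothetical dual-beam sensitivity relation; the numbers below (HeNe wavelength, 30-degree illumination) are likewise illustrative assumptions, not values from the paper.

```python
import math
import random

def displacement(phi, lam, theta):
    # Hypothetical sensitivity relation: u = lam * phi / (4*pi*sin(theta))
    return lam * phi / (4.0 * math.pi * math.sin(theta))

def mc_uncertainty(n, rng):
    """Propagate assumed input uncertainties by resampling."""
    samples = []
    for _ in range(n):
        phi = rng.gauss(6.28, 0.05)                     # measured phase [rad]
        lam = rng.gauss(632.8e-9, 0.1e-9)               # wavelength [m]
        theta = rng.gauss(math.radians(30), math.radians(0.2))
        samples.append(displacement(phi, lam, theta))
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var)

rng = random.Random(5)
mean_u, std_u = mc_uncertainty(20000, rng)
```

The standard deviation of the samples is the combined standard uncertainty; no linearization of the measurement equation is needed, which is the usual argument for the Monte Carlo route over the analytic propagation law.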
Monte Carlo based statistical power analysis for mediation models: methods and software.
Zhang, Zhiyong
2014-12-01
The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method. PMID:24338601
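The proposed power analysis nests a bootstrap test inside a Monte Carlo loop: simulate many datasets from the mediation model, run the bootstrap test on each, and take the rejection rate as power. The sketch below is a deliberately stripped-down version (simple x -> m -> y chain, no direct x -> y path, percentile bootstrap, small replication counts); the bmem package fits the full models.

```python
import random

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def simulate_dataset(n, a, b, rng):
    # x -> m -> y mediation chain with unit-variance normal errors
    x = [rng.gauss(0, 1) for _ in range(n)]
    m = [a * xi + rng.gauss(0, 1) for xi in x]
    y = [b * mi + rng.gauss(0, 1) for mi in m]
    return x, m, y

def bootstrap_significant(x, m, y, n_boot, rng):
    """Percentile bootstrap: does the 95% CI of a*b exclude zero?"""
    n = len(x)
    ab = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        bx = [x[i] for i in idx]
        bm = [m[i] for i in idx]
        by = [y[i] for i in idx]
        ab.append(slope(bx, bm) * slope(bm, by))
    ab.sort()
    lo, hi = ab[int(0.025 * n_boot)], ab[int(0.975 * n_boot) - 1]
    return lo > 0.0 or hi < 0.0

def mc_power(n, a, b, n_rep, n_boot, seed):
    """Power = fraction of simulated datasets whose bootstrap test rejects."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_rep):
        x, m, y = simulate_dataset(n, a, b, rng)
        hits += bootstrap_significant(x, m, y, n_boot, rng)
    return hits / n_rep

power = mc_power(n=50, a=0.5, b=0.5, n_rep=60, n_boot=99, seed=3)
```

Replacing `rng.gauss` in `simulate_dataset` with a skewed or heavy-tailed generator gives the nonnormal-data case the abstract emphasizes, with no change to the rest of the loop.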
An asymptotic preserving Monte Carlo method for the multispecies Boltzmann equation
NASA Astrophysics Data System (ADS)
Zhang, Bin; Liu, Hong; Jin, Shi
2016-01-01
An asymptotic preserving (AP) scheme is efficient in solving multiscale kinetic equations with a wide range of the Knudsen number. In this paper, we generalize the asymptotic preserving Monte Carlo method (AP-DSMC) developed in [25] to the multispecies Boltzmann equation. This method is based on the successive penalty method [26] originated from the BGK-penalization-based AP scheme developed in [7]. For the multispecies Boltzmann equation, the penalizing Maxwellian should use the unified Maxwellian as suggested in [12]. We give the details of AP-DSMC for multispecies Boltzmann equation, show its AP property, and verify through several numerical examples that the scheme can allow time step much larger than the mean free time, thus making it much more efficient for flows with possibly small Knudsen numbers than the classical DSMC.
Torsional diffusion Monte Carlo: A method for quantum simulations of proteins
NASA Astrophysics Data System (ADS)
Clary, David C.
2001-06-01
The quantum diffusion Monte Carlo (DMC) method is extended to the treatment of coupled torsional motions in proteins. A general algorithm and computer program have been developed by interfacing this torsional-DMC method with all-atom force-fields for proteins. The method gives the zero-point energy and atomic coordinates averaged over the coupled torsional motions in the quantum ground state of the protein. Application of the new algorithm is made to the proteins gelsolin (356 atoms and 142 torsions) and gp41-HIV (1101 atoms and 452 torsions). The results indicate that quantum-dynamical effects are important for the energies and geometries of typical proteins such as these.
Adapting phase-switch Monte Carlo method for flexible organic molecules
NASA Astrophysics Data System (ADS)
Bridgwater, Sally; Quigley, David
2014-03-01
The role of cholesterol in lipid bilayers has been widely studied via molecular simulation; however, there has been relatively little work on crystalline cholesterol in biological environments. Recent work has linked the crystallisation of cholesterol in the body with heart attacks and strokes. Any attempt to model this process will require new models and advanced sampling methods to capture and quantify the subtle polymorphism of solid cholesterol, in which two crystalline phases are separated by a phase transition close to body temperature. To this end, we have adapted phase-switch Monte Carlo for use with flexible molecules, to calculate the free energy difference between crystal polymorphs to a high degree of accuracy. The method samples an order parameter which divides the displacement space of the N molecules into regions energetically favourable for each polymorph; this space is traversed using biased Monte Carlo. Results for a simple model of butane will be presented, demonstrating that conformational flexibility can be correctly incorporated within a phase-switching scheme. Extension to a coarse-grained model of cholesterol and the resulting free energies will be discussed.
Efficient continuous-time quantum Monte Carlo method for the ground state of correlated fermions
NASA Astrophysics Data System (ADS)
Wang, Lei; Iazzi, Mauro; Corboz, Philippe; Troyer, Matthias
2015-06-01
We present the ground state extension of the efficient continuous-time quantum Monte Carlo algorithm for lattice fermions of M. Iazzi and M. Troyer, Phys. Rev. B 91, 241118 (2015), 10.1103/PhysRevB.91.241118. Based on continuous-time expansion of an imaginary-time projection operator, the algorithm is free of systematic error and scales linearly with projection time and interaction strength. Compared to the conventional quantum Monte Carlo methods for lattice fermions, this approach has greater flexibility and is easier to combine with powerful machinery such as histogram reweighting and extended ensemble simulation techniques. We discuss the implementation of the continuous-time projection in detail using the spinless t -V model as an example and compare the numerical results with exact diagonalization, density matrix renormalization group, and infinite projected entangled-pair states calculations. Finally we use the method to study the fermionic quantum critical point of spinless fermions on a honeycomb lattice and confirm previous results concerning its critical exponents.
KERR, REX A.; BARTOL, THOMAS M.; KAMINSKY, BORIS; DITTRICH, MARKUS; CHANG, JEN-CHIEN JACK; BADEN, SCOTT B.; SEJNOWSKI, TERRENCE J.; STILES, JOEL R.
2010-01-01
Many important physiological processes operate at time and space scales far beyond those accessible to atom-realistic simulations, and yet discrete stochastic rather than continuum methods may best represent finite numbers of molecules interacting in complex cellular spaces. We describe and validate new tools and algorithms developed for a new version of the MCell simulation program (MCell3), which supports generalized Monte Carlo modeling of diffusion and chemical reaction in solution, on surfaces representing membranes, and combinations thereof. A new syntax for describing the spatial directionality of surface reactions is introduced, along with optimizations and algorithms that can substantially reduce computational costs (e.g., event scheduling, variable time and space steps). Examples for simple reactions in simple spaces are validated by comparison to analytic solutions. Thus we show how spatially realistic Monte Carlo simulations of biological systems can be far more cost-effective than often is assumed, and provide a level of accuracy and insight beyond that of continuum methods. PMID:20151023
Efficient Markov Chain Monte Carlo Methods for Decoding Neural Spike Trains
Ahmadian, Yashar; Pillow, Jonathan W.; Paninski, Liam
2016-01-01
Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed into the spike trains of a group of neurons. The form of the GLM likelihood ensures that the posterior distribution over the stimuli that caused an observed set of spike trains is log-concave so long as the prior is. This allows the maximum a posteriori (MAP) stimulus estimate to be obtained using efficient optimization algorithms. Unfortunately, the MAP estimate can have a relatively large average error when the posterior is highly non-Gaussian. Here we compare several Markov chain Monte Carlo (MCMC) algorithms that allow for the calculation of general Bayesian estimators involving posterior expectations (conditional on model parameters). An efficient version of the hybrid Monte Carlo (HMC) algorithm was significantly superior to other MCMC methods for Gaussian priors. When the prior distribution has sharp edges and corners, on the other hand, the hit-and-run algorithm performed better than other MCMC methods. Using these algorithms we show that for this latter class of priors the posterior mean estimate can have a considerably lower average error than MAP, whereas for Gaussian priors the two estimators have roughly equal efficiency. We also address the application of MCMC methods for extracting non-marginal properties of the posterior distribution. 
For example, by using MCMC to calculate the mutual information between the stimulus and response, we verify the validity of a computationally efficient Laplace approximation to this quantity for Gaussian priors in a wide range of model parameters; this makes direct model-based computation of the mutual information tractable even in the case of large observed neural populations, where methods based on binning the spike train fail. Finally, we consider the effect of uncertainty in the GLM parameters on the posterior estimators. PMID:20964539
NASA Astrophysics Data System (ADS)
Hipes, Paul G.
2011-05-01
Fixed-node diffusion Monte Carlo (FN-DMC) is an accurate and useful method for estimating the wave function and the energy of the quantum ground state of a many-fermion system. However, it has been shown that difficulties with the method may occur when it is applied to a degenerate excited state because the nodal surface of the degenerate trial function is generally insufficient to impose the complete symmetry properties of the trial function on the FN-DMC wave function. As a result, the tiling theorem and the symmetry-constrained variational principle may be violated by FN-DMC when the excited state is degenerate. There are two practical consequences for the study of degenerate excited states: The FN-DMC energy may lie below the energy of the lowest stationary state that transforms according to the same degenerate irreducible representation as the trial function; and the convergence of the FN-DMC energy with improvements in the trial function may not be quadratic. In this paper a diffusion Monte Carlo method for degenerate excited states is presented. It provides a direct generalization of the FN-DMC method, and when applied to the study of degenerate excited states, it has the support of the tiling theorem and the symmetry-constrained variational principle. The method is applied to the lowest degenerate state of a simple test problem in which FN-DMC has been shown to violate both the tiling theorem and the symmetry-constrained variational principle. The numerical results support the assertion that this method for degenerate excited states satisfies both the tiling theorem and the symmetry-constrained variational principle.
NASA Astrophysics Data System (ADS)
Hull, Anthony B.; Ambruster, C.; Jewell, E.
2012-01-01
Simple Monte Carlo simulations can assist both the cultural astronomy researcher while the Research Design is developed and the eventual evaluators of research products. Following the method we describe allows assessment of the probability of false positives associated with a site. Even seemingly evocative alignments may be meaningless, depending on the site characteristics and the number of degrees of freedom the researcher allows. In many cases, an observer may have to limit comments to "it is nice and it might be culturally meaningful," rather than saying "it is impressive so it must mean something." We describe a basic language with an associated set of attributes to be cataloged. These can be used to set up simple Monte Carlo simulations for a site. Without corroborating cultural evidence, or trends with similar attributes (for example, a number of sites showing the same anticipatory date), the Monte Carlo simulation can be used as a filter to establish the likelihood that the observed alignment phenomenon is the result of random factors. Such analysis may temper any eagerness to prematurely attribute cultural meaning to an observation. For the most complete description of an archaeological site, we urge researchers to capture the site attributes in a manner which permits statistical analysis. We also encourage cultural astronomers to record that which does not work, and that which may seem to align but has no discernible meaning. Properly reporting situational information as tenets of the research design will reduce the subjective nature of archaeoastronomical interpretation. Examples from field work will be discussed.
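The false-positive filter the abstract describes reduces to a short simulation: orient the site's sight-lines at random and count how often at least one lands within tolerance of a proposed astronomical target. The target azimuths, tolerance, and feature count below are illustrative assumptions.

```python
import random

def alignment_false_positive_rate(n_features, targets, tol_deg, n_trials, rng):
    """Probability that at least one of n_features randomly oriented
    sight-lines falls within tol_deg of some proposed target azimuth,
    under the null hypothesis of random orientation."""
    hits = 0
    for _ in range(n_trials):
        for _ in range(n_features):
            az = rng.uniform(0.0, 360.0)
            # circular distance to the nearest target
            if any(min(abs(az - t), 360.0 - abs(az - t)) <= tol_deg
                   for t in targets):
                hits += 1
                break  # one matching feature makes the whole trial a "hit"
    return hits / n_trials

rng = random.Random(6)
# e.g. five architectural features tested against equinox rise/set (90, 270)
rate = alignment_false_positive_rate(5, [90.0, 270.0], 2.0, 20000, rng)
```

A rate of roughly ten percent for even this modest setup illustrates the abstract's caution: with enough features and generous tolerances, "evocative" alignments arise by chance alone.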
Monte Carlo method for polarized radiative transfer in gradient-index media
NASA Astrophysics Data System (ADS)
Zhao, J. M.; Tan, J. Y.; Liu, L. H.
2015-02-01
Light transfer in gradient-index media generally follows curved ray trajectories, which will cause a light beam to converge or diverge during transfer and induce rotation of the polarization ellipse even when the medium is transparent. Furthermore, the combined process of scattering and transfer along curved ray paths makes the problem more complex. In this paper, a Monte Carlo method is presented to simulate polarized radiative transfer in gradient-index media that only support planar ray trajectories. The ray equation is solved to the second order to address the effect induced by curved ray trajectories. Three types of test cases are presented to verify the performance of the method, which include a transparent medium, a Mie scattering medium with an assumed gradient-index distribution, and Rayleigh scattering with a realistic atmospheric refractive index profile. It is demonstrated that atmospheric refraction has a significant effect on long-distance polarized light transfer.
Analysis of vibrational-translational energy transfer using the direct simulation Monte Carlo method
NASA Technical Reports Server (NTRS)
Boyd, Iain D.
1991-01-01
A new model is proposed for energy transfer between the vibrational and translational modes for use in the direct simulation Monte Carlo method (DSMC). The model modifies the Landau-Teller theory for a harmonic oscillator, and the transition rate is related to an experimental correlation for the vibrational relaxation time. Assessment of the model is made with respect to three different computations: relaxation in a heat bath, a one-dimensional shock wave, and hypersonic flow over a two-dimensional wedge. These studies verify that the model achieves detailed balance, and excellent agreement with experimental data is obtained in the shock wave calculation. The wedge flow computation reveals that the usual phenomenological method for simulating vibrational nonequilibrium in the DSMC technique predicts much higher vibrational temperatures in the wake region.
The Linked Neighbour List (LNL) method for fast off-lattice Monte Carlo simulations of fluids
NASA Astrophysics Data System (ADS)
Mazzeo, M. D.; Ricci, M.; Zannoni, C.
2010-03-01
We present a new algorithm, called the linked neighbour list (LNL), which substantially speeds up off-lattice Monte Carlo simulations of fluids by avoiding the computation of the molecular energy before every attempted move. We introduce a few variants of the LNL method targeted to minimise memory footprint or augment memory coherence and cache utilisation. Additionally, we present a few algorithms which drastically accelerate neighbour finding. We test our methods on the simulation of a dense off-lattice Gay-Berne fluid subjected to periodic boundary conditions, observing a speedup factor of about 2.5 with respect to a well-coded implementation based on a conventional link-cell. We provide several implementation details of the different key data structures and algorithms used in this work.
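As a baseline for what neighbour-list structures accomplish, here is a minimal periodic link-cell neighbour search, the conventional approach the authors benchmark against, sketched in Python rather than a compiled language. It assumes a square box of side at least three times the cutoff so the 3x3 cell stencil is valid.

```python
import random

def build_cell_list(positions, box, rc):
    """Bin particle indices into square cells of side >= rc (periodic box).
    Assumes box >= 3 * rc so the 3x3 stencil visits distinct cells."""
    ncell = int(box / rc)
    side = box / ncell
    cells = {}
    for i, (x, y) in enumerate(positions):
        key = (int(x / side) % ncell, int(y / side) % ncell)
        cells.setdefault(key, []).append(i)
    return cells, ncell

def neighbours(i, positions, cells, ncell, box, rc):
    """Indices within rc of particle i (minimum-image convention), found
    by scanning only the 3x3 block of cells around particle i."""
    side = box / ncell
    x, y = positions[i]
    cx, cy = int(x / side) % ncell, int(y / side) % ncell
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in cells.get(((cx + dx) % ncell, (cy + dy) % ncell), ()):
                if j == i:
                    continue
                ux = positions[j][0] - x
                uy = positions[j][1] - y
                ux -= box * round(ux / box)  # minimum image
                uy -= box * round(uy / box)
                if ux * ux + uy * uy < rc * rc:
                    found.append(j)
    return sorted(found)

rng = random.Random(9)
box, rc = 10.0, 2.0
pts = [(rng.uniform(0, box), rng.uniform(0, box)) for _ in range(60)]
cells, ncell = build_cell_list(pts, box, rc)
```

The LNL idea replaces the repeated cell scan with per-particle neighbour lists that are kept up to date as moves are accepted, which is where the reported factor-2.5 speedup comes from.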
Brandão, Eric; Flesch, Rodolfo C C; Lenzi, Arcanjo; Flesch, Carlos A
2011-07-01
The pressure-particle velocity (PU) impedance measurement technique is an experimental method used to measure the surface impedance and the absorption coefficient of acoustic samples in situ or under free-field conditions. In this paper, the measurement uncertainty of the absorption coefficient determined using the PU technique is explored applying the Monte Carlo method. It is shown that because of the uncertainty, it is particularly difficult to measure samples with low absorption, and that difficulties associated with the localization of the acoustic centers of the sound source and the PU sensor affect the quality of the measurement roughly to the same extent as the errors in the transfer function between pressure and particle velocity do. PMID:21786864
Gong, Xingchu; Li, Yao; Chen, Huali; Qu, Haibin
2015-01-01
A design space approach was applied to optimize the extraction process of Danhong injection. Dry matter yield and the yields of five active ingredients were selected as process critical quality attributes (CQAs). Extraction number, extraction time, and the mass ratio of water and material (W/M ratio) were selected as critical process parameters (CPPs). Quadratic models between CPPs and CQAs were developed with determination coefficients higher than 0.94. Active ingredient yields and dry matter yield increased as the extraction number increased. Monte Carlo simulation with models established using a stepwise regression method was applied to calculate the probability-based design space. Step length showed little effect on the calculation results. Higher simulation number led to results with lower dispersion. Data generated in a Monte Carlo simulation following a normal distribution led to a design space with a smaller size. An optimized calculation condition was obtained with 10000 simulation times, 0.01 calculation step length, a significance level value of 0.35 for adding or removing terms in a stepwise regression, and a normal distribution for data generation. The design space with a probability higher than 0.95 to attain the CQA criteria was calculated and verified successfully. Normal operating ranges of 8.2-10 g/g of W/M ratio, 1.25-1.63 h of extraction time, and two extractions were recommended. The optimized calculation conditions can conveniently be used in design space development for other pharmaceutical processes. PMID:26020778
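A probability-based design space of this kind is computed pointwise: at each candidate process-parameter setting, sample the fitted model's coefficients from their uncertainty distributions and record the fraction of draws that meet the CQA criterion. The quadratic model, coefficients, and threshold below are illustrative assumptions, not the paper's fitted values.

```python
import random

def attainment_probability(cpp, coeffs_mean, coeffs_sd, threshold, n_sim, rng):
    """P(CQA >= threshold) at one process-parameter setting, where the
    quadratic model y = b0 + b1*x + b2*x^2 has normally distributed
    coefficient uncertainty (a toy stand-in for the fitted models)."""
    hits = 0
    for _ in range(n_sim):
        b = [rng.gauss(m, s) for m, s in zip(coeffs_mean, coeffs_sd)]
        y = b[0] + b[1] * cpp + b[2] * cpp * cpp
        if y >= threshold:
            hits += 1
    return hits / n_sim

rng = random.Random(8)
# at x = 2 the mean prediction is 1 + 2*2 - 0.5*4 = 3.0
p_easy = attainment_probability(2.0, [1.0, 2.0, -0.5], [0.1, 0.1, 0.05],
                                2.0, 5000, rng)
p_hard = attainment_probability(2.0, [1.0, 2.0, -0.5], [0.1, 0.1, 0.05],
                                4.0, 5000, rng)
```

Sweeping `cpp` over a grid and keeping the settings where the probability exceeds 0.95 yields the design space, matching the acceptance criterion used in the abstract.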
Bianchini, G.; Burgio, N.; Carta, M.; Peluso, V.; Fabrizio, V.; Ricci, L.
2012-07-01
The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of the ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French body CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium-target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic Uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation by deterministic (ERANOS French code) and Monte Carlo (MCNPX US code) calculations of three reactivity measurement techniques, Slope ({alpha}-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the two deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)
Calculations of alloy phases with a direct Monte-Carlo method
Faulkner, J.S.; Wang, Yang; Horvath, E.A.; Stocks, G.M.
1994-09-01
A method for calculating the boundaries that describe solid-solid phase transformations in the phase diagrams of alloys is described. The method is first-principles in the sense that the only input is the atomic numbers of the constituents. It proceeds from the observation that the crux of the Monte Carlo method for obtaining the equilibrium distribution of atoms in an alloy is a calculation of the energy required to replace an A atom on site i with a B atom when the configuration of the atoms on the neighboring sites, {kappa}, is specified, {delta}H{sub {kappa}}(A{yields}B) = E{sub B}{kappa} - E{sub A}{kappa}. Normally, this energy difference is obtained by introducing interatomic potentials, v{sub ij}, into an Ising Hamiltonian, but the authors calculate it using the embedded cluster method (ECM). In the ECM an A or B atom is placed at the center of a cluster of atoms with the specified configuration {kappa}, and the atoms on all the other sites in the alloy are simulated by the effective scattering matrix obtained from the coherent potential approximation. The interchange energy is calculated directly from the electronic structure of the cluster. The table of {delta}H{sub {kappa}}(A{yields}B)'s for all configurations {kappa} and several alloy concentrations is used in a Monte Carlo calculation that predicts the phase of the alloy at any temperature and concentration. The detailed shapes of the miscibility gaps in the palladium-rhodium and copper-nickel alloy systems are shown.
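Once the table of interchange energies {delta}H{sub {kappa}}(A{yields}B) is in hand, the Monte Carlo step itself is a standard Metropolis sweep over lattice sites. In the sketch below the table is generated from illustrative pair energies (the paper computes it from first principles with the ECM), and the neighbour configuration {kappa} is reduced to the count of A neighbours on a square lattice.

```python
import math
import random

def delta_h(site_is_a, n_a_neighbours, v_aa=-0.1, v_ab=0.0, v_bb=-0.1):
    """Energy change for swapping the atom type on a site, given its
    neighbour configuration. These pair energies are purely illustrative;
    the paper tabulates this quantity from electronic-structure theory."""
    n_b = 4 - n_a_neighbours
    e_a = n_a_neighbours * v_aa + n_b * v_ab   # energy with an A on the site
    e_b = n_a_neighbours * v_ab + n_b * v_bb   # energy with a B on the site
    return e_b - e_a if site_is_a else e_a - e_b

def metropolis_sweep(lattice, size, beta, rng):
    for _ in range(size * size):
        i, j = rng.randrange(size), rng.randrange(size)
        n_a = sum(lattice[(i + di) % size][(j + dj) % size]
                  for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))
        dh = delta_h(lattice[i][j] == 1, n_a)
        if dh <= 0 or rng.random() < math.exp(-beta * dh):
            lattice[i][j] = 1 - lattice[i][j]   # accept the interchange

def like_bond_fraction(lattice, size):
    like = total = 0
    for i in range(size):
        for j in range(size):
            for di, dj in ((1, 0), (0, 1)):
                total += 1
                like += lattice[i][j] == lattice[(i + di) % size][(j + dj) % size]
    return like / total

rng = random.Random(11)
size = 10
lat = [[rng.randrange(2) for _ in range(size)] for _ in range(size)]  # 1=A, 0=B
for _ in range(100):
    metropolis_sweep(lat, size, 20.0, rng)
f_like = like_bond_fraction(lat, size)
```

With attractive like-pair energies and a low temperature, the sweeps drive phase separation, which is the mechanism behind the miscibility gaps the paper maps out.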
Electronic structure of transition metal and f-electron oxides by quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Mitas, L.; Hu, S.; Kolorenc, J.
2012-12-01
We report on many-body quantum Monte Carlo (QMC) calculations of the electronic structure of systems with strong correlation effects. These methods have been applied to ambient and high-pressure transition metal oxides and, very recently, to selected f-electron oxides such as the mineral thorianite (ThO2). QMC methods enabled us to calculate equilibrium characteristics such as cohesion, equilibrium lattice constants, bulk moduli, and electronic gaps in excellent agreement with experiment without any non-variational parameters. In addition, for selected cases, the equations of state were calculated as well. The calculations were carried out using state-of-the-art twist-averaged sampling of the Brillouin zone, small-core Dirac-Fock pseudopotentials, and one-particle orbitals from hybrid DFT functionals with varying weight of the exact exchange. This enabled us to build high-accuracy Slater-Jastrow explicitly correlated wavefunctions. In particular, we have employed optimization of the weight of the exact exchange in the B3LYP and PBE0 functionals to minimize the fixed-node error in the diffusion Monte Carlo calculations. Instead of empirical fitting, we therefore use the variational and explicitly many-body QMC method to find the value of the optimal weight, which falls between 15 and 30%. This finding is further supported by recent calculations of transition metal-organic systems such as transition metal-porphyrins and others, showing a very wide range of its applicability. The calculations of ThO2 appear to follow the same pattern and enabled us to reproduce very well the experimental cohesion and the very large electronic gap. In addition, we have made important progress in the explicit treatment of the spin-orbit interaction, which has so far been neglected in QMC calculations. Our studies illustrate the remarkable capabilities of QMC methods for strongly correlated solid systems.
Quantum Monte Carlo method for the ground state of many-boson systems
Purwanto, Wirawan; Zhang Shiwei
2004-11-01
We formulate a quantum Monte Carlo (QMC) method for calculating the ground state of many-boson systems. The method is based on a field-theoretical approach, and is closely related to existing fermion auxiliary-field QMC methods which are applied in several fields of physics. The ground-state projection is implemented as a branching random walk in the space of permanents consisting of identical single-particle orbitals. Any single-particle basis can be used, and the method is in principle exact. We illustrate this method with a trapped atomic boson gas, where the atoms interact via an attractive or repulsive contact two-body potential. We choose as the single-particle basis a real-space grid. We compare with exact results in small systems and arbitrarily sized systems of untrapped bosons with attractive interactions in one dimension, where analytical solutions exist. We also compare with the corresponding Gross-Pitaevskii (GP) mean-field calculations for trapped atoms, and discuss the close formal relation between our method and the GP approach. Our method provides a way to systematically improve upon GP while using the same framework, capturing interaction and correlation effects with a stochastic, coherent ensemble of noninteracting solutions. We discuss various algorithmic issues, including importance sampling and the back-propagation technique for computing observables, and illustrate them with numerical studies. We show results for systems with up to N{approx}400 bosons.
A modular method to handle multiple time-dependent quantities in Monte Carlo simulations
Shin, J; Perl, J; Schümann, J; Paganetti, H; Faddegon, BA
2015-01-01
A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call Time Features. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc., takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel (RMW) accompanied by beam current modulation produces a spread-out Bragg Peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method. PMID:22572201
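The separation described above can be sketched as a tiny "grammar of functions": each quantity reads its value from a time-dependent function, and a sequencer samples times either sequentially or uniformly at random. The class and function names here are illustrative stand-ins, not the actual TOPAS Time Feature syntax.

```python
import math
import random

class TimeFeature:
    """Base class: a simulation quantity reads its value as a function of time."""
    def value(self, t):
        raise NotImplementedError

class Linear(TimeFeature):
    """For example, a water column whose depth grows at a constant rate."""
    def __init__(self, rate, start=0.0):
        self.rate, self.start = rate, start
    def value(self, t):
        return self.start + self.rate * t

class Sinusoid(TimeFeature):
    """For example, the angle of a spinning modulator wheel."""
    def __init__(self, amplitude, period, phase=0.0):
        self.amplitude, self.period, self.phase = amplitude, period, phase
    def value(self, t):
        return self.amplitude * math.sin(2 * math.pi * t / self.period + self.phase)

def sequence(t_end, n, mode="sequential", seed=0):
    """Sample time values sequentially at equal increments or uniformly at random."""
    if mode == "sequential":
        return [i * t_end / n for i in range(n)]
    rng = random.Random(seed)
    return [rng.uniform(0.0, t_end) for _ in range(n)]

# Each time-dependent quantity evaluates its own Time Feature at the sampled time:
wheel_angle = Sinusoid(amplitude=180.0, period=0.1)
water_depth = Linear(rate=2.0)
states = [(wheel_angle.value(t), water_depth.value(t)) for t in sequence(1.0, 4)]
```

Because each quantity is evaluated independently at every sampled time, any number of time-dependent quantities can coexist in a single simulation at any time resolution.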
Forwards and Backwards Modelling of Ashfall Hazards in New Zealand by Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Hurst, T.; Smith, W. D.; Bibby, H. M.
2003-12-01
We have developed a technique for quantifying the probability of particular thicknesses of airfall ash from a volcanic eruption at any given site, using Monte Carlo methods, for hazards planning and insurance purposes. We use an established program (ASHFALL) to model individual eruptions, where the likely thickness of ash deposited at selected sites depends on the location of the volcano, eruptive volume, column height and ash size, and the wind conditions. A Monte Carlo formulation then allows us to simulate the variations in eruptive volume and in wind conditions by analysing repeat eruptions, each time allowing the parameters to vary randomly according to known or assumed distributions. Actual wind velocity profiles are used, with randomness included by selection of a starting date. We show how this method can handle the effects of multiple volcanic sources by aggregation, each source with its own characteristics. This follows a similar procedure which we have used for earthquake hazard assessment. The result is estimates of the frequency with which any given depth of ash is likely to be deposited at the selected site, accounting for all volcanoes that might affect it. These numbers are expressed as annual probabilities or as mean return periods. We can also use this method for obtaining an estimate of how often and how large the eruptions from a particular volcano have been. Results from ash cores in Auckland can give useful bounds for the likely total volumes erupted from the volcano Mt Egmont/Mt Taranaki, 280 km away, during the last 140,000 years, information difficult to obtain from local tephra stratigraphy.
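The aggregation over sources and conversion to annual probabilities can be sketched as follows. The thickness model here is a deliberately crude stand-in for a full ASHFALL run, and all parameter names (`rate_per_yr`, `log_vol_mean`, etc.) are assumptions for illustration.

```python
import numpy as np

def exceedance_probability(sources, threshold_mm, n_sims=20000, seed=0):
    """Monte Carlo estimate of the annual probability that ash depth at a
    site exceeds `threshold_mm`, aggregated over several volcanic sources.

    Each source has an annual eruption rate, a lognormal eruptive-volume
    distribution, and a toy attenuation-with-distance thickness model
    (a stand-in for a full dispersal simulation with real wind profiles).
    """
    rng = np.random.default_rng(seed)
    exceedance_rate = 0.0
    for src in sources:
        volume = rng.lognormal(src["log_vol_mean"], src["log_vol_sigma"], n_sims)
        wind = rng.uniform(0.0, 1.0, n_sims)   # fraction of plume blown toward the site
        thickness = 1000.0 * wind * volume / (1.0 + src["distance_km"]) ** 2
        p = np.mean(thickness > threshold_mm)  # per-eruption exceedance probability
        exceedance_rate += src["rate_per_yr"] * p
    # annual probability that at least one exceedance occurs (Poisson aggregation)
    return 1.0 - np.exp(-exceedance_rate)
```

The mean return period is then approximately 1 divided by the annual probability.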
Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
NASA Astrophysics Data System (ADS)
Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.
2013-12-01
We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.
Finite-Temperature Variational Monte Carlo Method for Strongly Correlated Electron Systems
NASA Astrophysics Data System (ADS)
Takai, Kensaku; Ido, Kota; Misawa, Takahiro; Yamaji, Youhei; Imada, Masatoshi
2016-03-01
A new computational method for finite-temperature properties of strongly correlated electrons is proposed by extending the variational Monte Carlo method originally developed for the ground state. The method is based on the path integral in the imaginary-time formulation, starting from the infinite-temperature state that is well approximated by a small number of certain random initial states. Lower temperatures are progressively reached by the imaginary-time evolution. The algorithm follows the framework of the quantum transfer matrix and finite-temperature Lanczos methods, but we extend them to treat much larger system sizes without the negative sign problem by optimizing the truncated Hilbert space on the basis of the time-dependent variational principle (TDVP). This optimization algorithm is equivalent to the stochastic reconfiguration (SR) method that has been frequently used for the ground state to optimally truncate the Hilbert space. The obtained finite-temperature states allow an interpretation based on the thermal pure quantum (TPQ) state instead of the conventional canonical-ensemble average. Our method is tested for the one- and two-dimensional Hubbard models and its accuracy and efficiency are demonstrated.
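The thermal-pure-quantum idea behind the method can be illustrated in a dense toy version: starting from random vectors (the infinite-temperature state), imaginary-time evolution e^(-βH/2)|r⟩ yields ⟨E⟩(β) ≈ ⟨r|e^(-βH)H|r⟩ / ⟨r|e^(-βH)|r⟩, averaged over a few random states. This sketch diagonalizes a small Hamiltonian exactly, whereas the paper's method works variationally in a truncated Hilbert space.

```python
import numpy as np

def thermal_energy_tpq(H, betas, n_states=200, seed=0):
    """Finite-temperature energy from imaginary-time evolution of random
    states (thermal-pure-quantum flavor), for a small dense Hamiltonian.
    Lower temperatures (larger beta) are reached progressively from the
    infinite-temperature (beta = 0) random-state ensemble.
    """
    rng = np.random.default_rng(seed)
    w, U = np.linalg.eigh(H)                       # exact spectrum of the toy H
    energies = []
    for beta in betas:
        num = den = 0.0
        for _ in range(n_states):
            r = rng.normal(size=H.shape[0])        # random "infinite-T" state
            c = U.T @ r                            # expand in the eigenbasis
            boltz = np.exp(-beta * (w - w.min()))  # shifted for numerical stability
            num += np.sum(c**2 * boltz * w)
            den += np.sum(c**2 * boltz)
        energies.append(float(num / den))
    return energies
```

At β = 0 the estimate is the infinite-temperature average Tr(H)/dim; as β grows it cools toward the ground-state energy.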
Investigation of a V{sub 15} magnetic molecular nanocluster by the Monte Carlo method
Khizriev, K. Sh.; Dzhamalutdinova, I. S.; Taaev, T. A.
2013-06-15
Exchange interactions in a V{sub 15} magnetic molecular nanocluster are considered, and the process of magnetization reversal for various values of the set of exchange constants is analyzed by the Monte Carlo method. It is shown that the best agreement between the field dependence of susceptibility and experimental results is observed for the following set of exchange interaction constants in a V{sub 15} magnetic molecular nanocluster: J = 500 K, J′ = 150 K, J″ = 225 K, J{sub 1} = 50 K, and J{sub 2} = 50 K. It is observed for the first time that, in a strong magnetic field, for each of the three transitions from low-spin to high-spin states, the heat capacity exhibits two closely spaced maxima.
An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks
NASA Technical Reports Server (NTRS)
Kim, Stacy
2011-01-01
Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observation and for understanding the conditions for planet formation and migration. However, certain areas of the disk are under-sampled, such as the optically thick disk interior, or are of particular interest, such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet. To improve the sampling, photon packets can be preferentially scattered and reemitted toward the preferred locations, at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracies and calculation speeds.
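The weighting trick can be sketched in one dimension: packets are emitted from a biased direction distribution, and each carries the ratio of the true to the biased density as a weight, so the expected energy flux is unchanged. The bias parameterization below is an illustrative assumption, not the paper's scheme.

```python
import random

def emit_packet(bias, rng):
    """Emit a photon packet with direction cosine mu in [-1, 1].

    Physically mu is uniform (density 0.5). To oversample the preferred
    hemisphere (mu > 0) we draw from a biased density and carry the ratio
    p_true / p_biased as a packet weight, conserving the average flux.
    """
    p_up = 0.5 + bias               # biased probability of the preferred hemisphere
    if rng.random() < p_up:
        mu = rng.random()           # mu in (0, 1), biased density p_up per unit mu
        weight = 0.5 / p_up
    else:
        mu = -rng.random()          # mu in (-1, 0), biased density (1 - p_up)
        weight = 0.5 / (1.0 - p_up)
    return mu, weight
```

The mean weight is exactly 1, so totals are conserved while the preferred hemisphere receives more packets (at lower weight each).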
A Monte Carlo Method for Projecting Uncertainty in 2D Lagrangian Trajectories
NASA Astrophysics Data System (ADS)
Robel, A.; Lozier, S.; Gary, S. F.
2009-12-01
In this study, a novel method is proposed for modeling the propagation of uncertainty due to subgrid-scale processes through a Lagrangian trajectory advected by ocean surface velocities. The primary motivation and application is differentiating between active and passive trajectories for sea turtles as observed through satellite telemetry. A spatiotemporal launch box is centered on the time and place of actual launch and populated with launch points. Synthetic drifters are launched at each of these locations, adding, at each time step along the trajectory, Monte Carlo perturbations in velocity scaled to the natural variability of the velocity field. The resulting trajectory cloud provides a dynamically evolving density field of synthetic drifter locations that represent the projection of subgrid-scale uncertainty out in time. Subsequently, by relaunching synthetic drifters at points along the trajectory, plots are generated in a daisy chain configuration of the “most likely passive pathways” for the drifter.
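The trajectory-cloud construction can be sketched as follows, with a user-supplied deterministic velocity field and Gaussian perturbations scaled to the field's natural variability. The interface (`launch_box`, `velocity_field`, `sigma`) is an illustrative assumption.

```python
import numpy as np

def trajectory_cloud(launch_box, velocity_field, sigma, n_drifters=500,
                     n_steps=100, dt=1.0, seed=0):
    """Advect a cloud of synthetic drifters launched inside a spatial box,
    perturbing the deterministic surface velocity at each time step with
    Gaussian noise of scale `sigma` (the subgrid-scale uncertainty).
    Returns the full position history, a dynamically evolving density
    field of drifter locations.
    """
    rng = np.random.default_rng(seed)
    lo, hi = launch_box                      # ((x0, y0), (x1, y1))
    pos = rng.uniform(lo, hi, size=(n_drifters, 2))
    history = [pos.copy()]
    for _ in range(n_steps):
        v = velocity_field(pos)              # (n_drifters, 2) deterministic velocities
        v = v + rng.normal(scale=sigma, size=v.shape)   # Monte Carlo perturbation
        pos = pos + dt * v
        history.append(pos.copy())
    return np.stack(history)                 # (n_steps + 1, n_drifters, 2)
```

Relaunching drifters at points along an observed track, as the abstract describes, amounts to calling this with a launch box centered on each track point in turn.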
On Choosing Effective Elasticity Tensors Using a Monte-Carlo Method
NASA Astrophysics Data System (ADS)
Danek, Tomasz; Slawinski, Michael A.
2015-02-01
A generally anisotropic elasticity tensor can be related to its closest counterparts in various symmetry classes. We refer to these counterparts as effective tensors in these classes. In finding effective tensors, we do not assume a priori orientations of their symmetry planes and axes. Knowledge of orientations of Hookean solids allows us to infer properties of materials represented by these solids. Obtaining orientations and parameter values of effective tensors is a highly nonlinear process involving finding absolute minima for orthogonal projections under all three-dimensional rotations. Given the standard deviations of the components of a generally anisotropic tensor, we examine the influence of measurement errors on the properties of effective tensors. We use a global optimization method to generate thousands of realizations of a generally anisotropic tensor, subject to errors. Using this optimization, we perform a Monte Carlo analysis of distances between that tensor and its counterparts in different symmetry classes, as well as of their orientations and elasticity parameters
A spectral analysis of the domain decomposed Monte Carlo method for linear systems
Slattery, S. R.; Wilson, P. P. H.; Evans, T. M.
2013-07-01
The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of stochastic histories from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem to test the models for symmetric operators. In general, the derived approximations show good agreement with measured computational results. (authors)
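A minimal single-domain sketch of the adjoint Neumann-Ulam solver that the analysis concerns: to solve x = Hx + b, walks start from a component sampled proportionally to |b|, step with probabilities built from the columns of H, and tally their weight into every component they visit. The 2x2 system in the usage below is illustrative.

```python
import numpy as np

def adjoint_neumann_ulam(H, b, n_walks=20000, seed=0):
    """Estimate the solution of x = H x + b by adjoint random walks.

    Assumes the 1-norm of every column of H is <= 1; the remainder of
    each column norm is the absorption probability that terminates a
    walk. The average walk length is governed by the spectrum of H,
    which is the link to convergence speed discussed in the paper.
    """
    rng = np.random.default_rng(seed)
    n = len(b)
    col_norm = np.abs(H).sum(axis=0)             # c_i = sum_j |H_ji|
    assert np.all(col_norm <= 1.0)
    b_norm = np.abs(b).sum()
    p0 = np.abs(b) / b_norm                      # start distribution ~ |b|
    x = np.zeros(n)
    for _ in range(n_walks):
        i = rng.choice(n, p=p0)
        w = np.sign(b[i]) * b_norm
        while True:
            x[i] += w                            # tally every visited state
            if rng.random() >= col_norm[i]:      # absorbed
                break
            j = rng.choice(n, p=np.abs(H[:, i]) / col_norm[i])
            w *= np.sign(H[j, i])
            i = j
    return x / n_walks
```

Each walk's visit sequence reproduces one term of the Neumann series sum_k H^k b in expectation, so the tally is an unbiased estimate of the full solution vector.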
Pozzi, Sara A; Downar, Thomas J; Padovani, Enrico; Clarke, Shaun D
2006-01-01
This work illustrates a methodology based on photon interrogation and coincidence counting for determining the characteristics of fissile material. The feasibility of the proposed methods was demonstrated using a Monte Carlo code system to simulate the full statistics of the neutron and photon field generated by the photon interrogation of fissile and non-fissile materials. Time correlation functions between detectors were simulated for photon beam-on and photon beam-off operation. In the latter case, the correlation signal is obtained via delayed neutrons from photofission, which induce further fission chains in the nuclear material. An analysis methodology was demonstrated based on features selected from the simulated correlation functions and on the use of artificial neural networks. We show that the methodology can reliably differentiate between highly enriched uranium and plutonium. Furthermore, the mass of the material can be determined with a relative error of about 12%. Keywords: MCNP, MCNP-PoliMi, Artificial neural network, Correlation measurement, Photofission
NASA Technical Reports Server (NTRS)
Haviland, J. K.
1974-01-01
The results of two unrelated studies are reported. The first was an investigation of the formulation of the equations for non-uniform unsteady flows, by perturbation of an irrotational flow to obtain the linear Green's equation. The resulting integral equation was found to contain a kernel which could be expressed as the solution of the adjoint flow equation, a linear equation for small perturbations, but with non-constant coefficients determined by the steady flow conditions. It is believed that the non-uniform flow effects may prove important in transonic flutter, and that in such cases, the use of doublet-type solutions of the wave equation would then prove to be erroneous. The second task covered an initial investigation into the use of the Monte Carlo method for the solution of acoustical field problems. Computed results are given for a rectangular room problem, and for a problem involving a circular duct with a source located at the closed end.
NASA Astrophysics Data System (ADS)
King, Julian; Mortlock, Daniel; Webb, John; Murphy, Michael
2010-11-01
Recent attempts to constrain cosmological variation in the fine structure constant, α, using quasar absorption lines have yielded two statistical samples which initially appear to be inconsistent. One of these samples was subsequently demonstrated to not pass consistency tests; it appears that the optimisation algorithm used to fit the model to the spectra failed. Nevertheless, the results of the other hinge on the robustness of the spectral fitting program VPFIT, which has been tested through simulation but not through direct exploration of the likelihood function. We present the application of Markov Chain Monte Carlo (MCMC) methods to this problem, and demonstrate that VPFIT produces similar values and uncertainties for Δα/α, the fractional change in the fine structure constant, as our MCMC algorithm, and thus that VPFIT is reliable.
Determinant Quantum Monte Carlo method applied to the t-J model
NASA Astrophysics Data System (ADS)
Zujev, Aleksander; Fye, Richard; Scalettar, Richard
2009-03-01
The usual approach to simulating the t-J model with the Determinant Quantum Monte Carlo (DQMC) method starts with the Hubbard model with a finite on-site interaction U which is then increased to ``almost'' infinity. This approach, however, has considerable difficulties with large round-off errors (stability) and variances, and also a very bad fermion sign problem. In this talk, I will describe a different approach which starts with (almost) infinite U by means of a projector operator and further prohibiting double occupancy by using a modified creation operator. The new technique will be shown to solve some of these difficulties. Unfortunately, the sign problem remains significant. I will discuss the different attempts we have made to reduce it.
Markov Chain Monte Carlo Sampling Methods for 1D Seismic and EM Data Inversion
Energy Science and Technology Software Center (ESTSC)
2008-09-22
This software provides several Markov chain Monte Carlo sampling methods for the Bayesian model developed for inverting 1D marine seismic and controlled source electromagnetic (CSEM) data. The current software can be used for individual inversion of seismic AVO and CSEM data and for joint inversion of both seismic and EM data sets. The structure of the software is very general and flexible, and it allows users to incorporate their own forward simulation codes and rock physics model codes easily into this software. Although the software was developed using the C and C++ computer languages, the user-supplied codes can be written in C, C++, or various versions of Fortran. The software provides clear interfaces for users to plug in their own codes. The output of this software is in a format that the free R package CODA can directly read to build MCMC objects.
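The plug-in structure described above can be sketched with a generic random-walk Metropolis sampler: the user-supplied forward simulation and rock-physics codes would live inside the log-posterior callable. This is a generic sketch, not the package's actual interface.

```python
import math
import random

def metropolis(log_post, x0, proposal_sd, n_samples, seed=0):
    """Random-walk Metropolis sampler over a scalar parameter.

    `log_post` is the user-supplied log-posterior; in an inversion it
    would combine the forward model, the rock-physics model, and the
    data misfit. Returns the chain as a list of samples.
    """
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_samples):
        cand = x + rng.gauss(0.0, proposal_sd)       # symmetric proposal
        lp_cand = log_post(cand)
        if math.log(rng.random()) < lp_cand - lp:    # Metropolis acceptance
            x, lp = cand, lp_cand
        chain.append(x)
    return chain
```

The resulting chain is exactly the kind of object a diagnostics package such as CODA consumes: a sequence of correlated posterior samples to be checked for convergence.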
Studies of materials from simple metal atoms by quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Rasch, Kevin; Mitas, Lubos
2015-03-01
We carry out quantum Monte Carlo (QMC) calculations of systems of simple metal elements such as Li and Na, with the goal of studying the cohesive/binding energies and structural characteristics, as well as the accuracy of QMC methods for these elements. For Na we test small-core pseudopotentials vs. large-core pseudopotentials with core polarization and relaxation correction potentials. We test orbital sets from several DFT functionals in order to assess the accuracy of the corresponding wave functions and fixed-node biases. It turns out that the valence correlation energies are very accurate, typically 97% or higher, in most of the tested systems. This provides a validation framework for further QMC studies of these systems in non-equilibrium conformations and at high pressures.
An analysis of the convergence of the direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Galitzine, Cyril; Boyd, Iain D.
2015-05-01
In this article, a rigorous framework for the analysis of the convergence of the direct simulation Monte Carlo (DSMC) method is presented. It is applied to the simulation of two test cases: an axisymmetric jet at a Knudsen number of 0.01 and a Mach number of 1, and a two-dimensional cylinder flow at a Knudsen number of 0.05 and a Mach number of 10. The rate of convergence of sampled quantities is found to be well predicted by an extended form of the Central Limit Theorem that takes into account the correlation of samples but requires the calculation of correlation spectra. A simplified analytical model that does not require correlation spectra is then constructed to model the effect of sample correlation. It is then used to obtain an a priori estimate of the convergence error.
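The extended Central Limit Theorem referred to above replaces the naive variance of the mean, sigma²/N, by (sigma²/N)(1 + 2·sum of autocorrelations). A minimal estimator of that corrected standard error, with the autocorrelation sum truncated at its first non-positive term (a common heuristic, not the paper's spectral procedure), looks like this:

```python
import numpy as np

def correlated_stderr(samples, max_lag=None):
    """Standard error of the mean for correlated samples:
    Var(mean) = (sigma^2 / N) * (1 + 2 * sum_k rho_k),
    truncating the autocorrelation sum at its first non-positive term.
    """
    x = np.asarray(samples, dtype=float)
    n = x.size
    x = x - x.mean()
    var = np.dot(x, x) / n
    if max_lag is None:
        max_lag = n // 2
    corr_sum = 0.0
    for k in range(1, max_lag):
        rho = np.dot(x[:-k], x[k:]) / ((n - k) * var)   # lag-k autocorrelation
        if rho <= 0.0:
            break
        corr_sum += rho
    return float(np.sqrt(var * (1.0 + 2.0 * corr_sum) / n))
```

For an AR(1) process with coefficient phi, the correction factor is (1 + phi)/(1 - phi), so strongly correlated samples (phi = 0.9) carry a 19-fold variance penalty relative to the naive estimate.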
NASA Astrophysics Data System (ADS)
Armas-Pérez, Julio C.; Hernández-Ortiz, Juan P.; de Pablo, Juan J.
2015-12-01
A theoretically informed Monte Carlo method is proposed for Monte Carlo simulation of liquid crystals on the basis of theoretical representations in terms of coarse-grained free energy functionals. The free energy functional is described in the framework of the Landau-de Gennes formalism. A piecewise finite element discretization is used to approximate the alignment field, thereby providing an excellent geometrical representation of curved interfaces and accurate integration of the free energy. The method is suitable for situations where the free energy functional includes highly non-linear terms, including chirality or high-order deformation modes. The validity of the method is established by comparing the results of Monte Carlo simulations to traditional Ginzburg-Landau minimizations of the free energy using a finite difference scheme, and its usefulness is demonstrated in the context of simulations of chiral liquid crystal droplets with and without nanoparticle inclusions.
Thermal studies of a superconducting current limiter using Monte-Carlo method
NASA Astrophysics Data System (ADS)
Lévêque, J.; Rezzoug, A.
1999-07-01
Considering the increase of fault current levels in electrical networks, current limiters have become very attractive. Superconducting limiters are based on the quasi-instantaneous intrinsic transition from the superconducting state to the normal resistive one. Without fault detection or an external trigger, they reduce the stresses supported by electrical installations upstream of the fault. To avoid the destruction of the superconducting coil, the temperature must not exceed a certain value. Therefore, the design of a superconducting coil requires the simultaneous resolution of an electrical equation and a thermal one. This paper deals with the resolution of this coupled problem by the Monte Carlo method. This method allows us to calculate the evolution of the resistance of the coil as well as the limiting current. Experimental results are compared with theoretical ones.
Multi-Physics Markov Chain Monte Carlo Methods for Subsurface Flows
NASA Astrophysics Data System (ADS)
Rigelo, J.; Ginting, V.; Rahunanthan, A.; Pereira, F.
2014-12-01
For CO2 sequestration in deep saline aquifers, contaminant transport in the subsurface, and oil or gas recovery, we often need to forecast flow patterns. Subsurface characterization is a critical and challenging step in flow forecasting. To characterize subsurface properties, we establish a statistical description of the subsurface properties that is conditioned to existing dynamic and static data. A Markov Chain Monte Carlo (MCMC) algorithm is used in a Bayesian statistical description to reconstruct the spatial distribution of rock permeability and porosity. The MCMC algorithm requires repeatedly solving a set of nonlinear partial differential equations describing the displacement of fluids in porous media for different values of permeability and porosity. The time needed for the generation of a reliable MCMC chain using the algorithm can be too long to be practical for flow forecasting. In this work we develop fast and effective computational methods for generating MCMC chains in the Bayesian framework for subsurface characterization. Our strategy consists of constructing a family of computationally inexpensive preconditioners based on simpler physics as well as on surrogate models, such that the number of fine-grid simulations is drastically reduced in the generated MCMC chains. In particular, we introduce a huff-puff technique as a screening step in a three-stage multi-physics MCMC algorithm to reduce the number of expensive final-stage simulations. The huff-puff technique in the algorithm enables a better characterization of the subsurface near wells. We assess the quality of the proposed multi-physics MCMC methods by considering Monte Carlo simulations for forecasting oil production in an oil reservoir.
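The multi-stage screening idea can be sketched as a generic two-stage (delayed-acceptance) Metropolis sampler: a cheap coarse model filters proposals so the expensive fine-grid simulation runs only for candidates that pass stage one, while the stage-two ratio corrects for the screening and leaves the fine posterior invariant. The coarse/fine log-posteriors below are arbitrary stand-ins for the simpler-physics preconditioner and the fine-grid simulator.

```python
import math
import random

def two_stage_mcmc(coarse_logpost, fine_logpost, x0, step, n, seed=0):
    """Two-stage (delayed-acceptance) Metropolis over a scalar parameter.

    Stage 1 accepts/rejects with the cheap coarse model; only survivors
    trigger the expensive fine evaluation. The stage-2 acceptance ratio
    (fine ratio divided by coarse ratio) restores exact sampling from
    the fine posterior. Returns the chain and the number of fine evals.
    """
    rng = random.Random(seed)
    x = x0
    lc, lf = coarse_logpost(x), fine_logpost(x)
    chain, fine_evals = [], 0
    for _ in range(n):
        cand = x + rng.gauss(0.0, step)
        lc_cand = coarse_logpost(cand)
        if math.log(rng.random()) < lc_cand - lc:            # stage 1: cheap screen
            fine_evals += 1
            lf_cand = fine_logpost(cand)                     # stage 2: expensive model
            if math.log(rng.random()) < (lf_cand - lf) - (lc_cand - lc):
                x, lc, lf = cand, lc_cand, lf_cand
        chain.append(x)
    return chain, fine_evals
```

Every stage-1 rejection is a fine-grid simulation saved, which is exactly the economy the multi-physics preconditioning strategy is after.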
Asselineau, Charles-Alexis; Zapata, Jose; Pye, John
2015-06-01
A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified at moderate computational cost. PMID:26072868
A First-Passage Kinetic Monte Carlo method for reaction-drift-diffusion processes
Mauro, Ava J.; Sigurdsson, Jon Karl; Shrake, Justin; Atzberger, Paul J.; Isaacson, Samuel A.
2014-02-15
Stochastic reaction-diffusion models are now a popular tool for studying physical systems in which both the explicit diffusion of molecules and noise in the chemical reaction process play important roles. The Smoluchowski diffusion-limited reaction model (SDLR) is one of several that have been used to study biological systems. Exact realizations of the underlying stochastic processes described by the SDLR model can be generated by the recently proposed First-Passage Kinetic Monte Carlo (FPKMC) method. This exactness relies on sampling analytical solutions to one- and two-body diffusion equations in simplified protective domains. In this work we extend the FPKMC to allow for drift arising from fixed, background potentials. As the corresponding Fokker-Planck equations that describe the motion of each molecule can no longer be solved analytically, we develop a hybrid method that discretizes the protective domains. The discretization is chosen so that the drift-diffusion of each molecule within its protective domain is approximated by a continuous-time random walk on a lattice. New lattices are defined dynamically as the protective domains are updated, hence we will refer to our method as Dynamic Lattice FPKMC or DL-FPKMC. We focus primarily on the one-dimensional case in this manuscript, and demonstrate the numerical convergence and accuracy of our method in this case for both smooth and discontinuous potentials. We also present applications of our method, which illustrate the impact of drift on reaction kinetics.
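The lattice approximation of drift-diffusion can be illustrated with a toy 1D continuous-time random walk. The hop rates r(x → x±h) = (D/h²)·exp(-(V(x±h) - V(x))/(2kT)) satisfy detailed balance with respect to exp(-V/kT); this is a minimal stand-in for the DL-FPKMC discretization, without protective domains or first-passage sampling.

```python
import math
import random

def simulate_ctrw(V, D, h, kT, x0, t_end, seed=0):
    """Continuous-time random walk on a 1D lattice approximating
    drift-diffusion in a potential V. Waiting times are exponential in
    the total hop rate (Gillespie-style), and the hop direction is
    chosen in proportion to the two rates. Returns the position at t_end.
    """
    rng = random.Random(seed)
    x, t = x0, 0.0
    while True:
        r_up = (D / h**2) * math.exp(-(V(x + h) - V(x)) / (2.0 * kT))
        r_dn = (D / h**2) * math.exp(-(V(x - h) - V(x)) / (2.0 * kT))
        total = r_up + r_dn
        t += rng.expovariate(total)       # exponential waiting time
        if t > t_end:
            return x
        x = x + h if rng.random() < r_up / total else x - h
```

For a harmonic potential V(x) = x²/2 with kT = 1, long runs sample the Boltzmann distribution exp(-x²/2) on the lattice, i.e. mean 0 and variance close to 1.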
ATR WG-MOX Fuel Pellet Burnup Measurement by Monte Carlo - Mass Spectrometric Method
Chang, Gray Sen I
2002-10-01
This paper presents a new method for calculating the burnup of nuclear reactor fuel, the MCWO-MS method, and describes its application to an experiment currently in progress to assess the suitability for use in light-water reactors of Mixed-OXide (MOX) fuel that contains plutonium derived from excess nuclear weapons material. To demonstrate that the available experience base with Reactor-Grade Mixed uranium-plutonium OXide (RG-MOX) can be applied to Weapons-Grade (WG) MOX in light water reactors, and to support potential licensing of MOX fuel made from weapons-grade plutonium and depleted uranium for use in United States reactors, an experiment containing WG-MOX fuel is being irradiated in the Advanced Test Reactor (ATR) at the Idaho National Engineering and Environmental Laboratory. Fuel burnup is an important parameter needed for fuel performance evaluation. For Post-Irradiation Examination of the irradiated MOX fuels, the 148Nd method is used to measure the burnup. The fission product 148Nd is an ideal burnup indicator when appropriate correction factors are applied. In the ATR test environment, the spectrum-dependent and burnup-dependent correction factors (see Section 5 for detailed discussion) can be substantial at high fuel burnup. The validated Monte Carlo depletion tool (MCWO) used in this study can provide burnup-dependent correction factors for reactor parameters, such as capture-to-fission ratios, isotopic concentrations and compositions, fission power, and spectrum, in a straightforward fashion. Furthermore, the correlation curve generated by MCWO can be coupled with the 239Pu/Pu ratio measured by a Mass Spectrometer (in the new MCWO-MS method) to obtain a best-estimate MOX fuel burnup. The Monte Carlo MCWO method eliminates the need to generate few-group cross sections. The MCWO depletion tool can analyze the detailed spatial and spectral self-shielding effects in UO2, WG-MOX, and reactor-grade mixed oxide (RG-MOX) fuel pins.
The MCWO-MS tool needs only the MS-measured 239Pu/Pu ratio, without measured 148Nd concentration data, to determine the burnup accurately. MCWO-MS not only provided the linear heat generation rate, Pu isotopic composition versus burnup, and burnup distributions within the WG-MOX fuel capsules, but also correctly identified the inconsistency indicated by the large difference in burnups obtained by the 148Nd method.
The many-body Wigner Monte Carlo method for time-dependent ab-initio quantum simulations
Sellier, J. M.; Dimov, I.
2014-09-15
The aim of ab-initio approaches is the simulation of many-body quantum systems from the first principles of quantum mechanics. These methods are traditionally based on the many-body Schrödinger equation which represents an incredible mathematical challenge. In this paper, we introduce the many-body Wigner Monte Carlo method in the context of distinguishable particles and in the absence of spin-dependent effects. Despite these restrictions, the method has several advantages. First of all, the Wigner formalism is intuitive, as it is based on the concept of a quasi-distribution function. Secondly, the Monte Carlo numerical approach allows scalability on parallel machines that is practically unachievable by means of other techniques based on finite difference or finite element methods. Finally, this method allows time-dependent ab-initio simulations of strongly correlated quantum systems. In order to validate our many-body Wigner Monte Carlo method, as a case study we simulate a relatively simple system consisting of two particles in several different situations. We first start from two non-interacting free Gaussian wave packets. We, then, proceed with the inclusion of an external potential barrier, and we conclude by simulating two entangled (i.e. correlated) particles. The results show how, in the case of negligible spin-dependent effects, the many-body Wigner Monte Carlo method provides an efficient and reliable tool to study the time-dependent evolution of quantum systems composed of distinguishable particles.
Uncertainty Quantification of Prompt Fission Neutron Spectra Using the Unified Monte Carlo Method
NASA Astrophysics Data System (ADS)
Rising, M. E.; Talou, P.; Prinja, A. K.
2014-04-01
In the ENDF/B-VII.1 nuclear data library, the existing covariance evaluations of the prompt fission neutron spectra (PFNS) were computed by combining the available experimental differential data with theoretical model calculations, relying on the use of a first-order linear Bayesian approach, the Kalman filter. This approach assumes that the theoretical model response to changes in input model parameters is linear about the a priori central values. While the Unified Monte Carlo (UMC) method remains a Bayesian approach, like the Kalman filter, it does not make any assumption about the linearity of the model response or the shape of the a posteriori distribution of the parameters. By sampling from a distribution centered about the a priori model parameters, the UMC method computes the moments of the a posteriori parameter distribution. As the number of samples increases, the statistical noise in the computed a posteriori moments decreases and an appropriately converged solution corresponding to the true mean of the a posteriori PDF results. The UMC method has been successfully implemented using both a uniform and a Gaussian sampling distribution and has been used for the evaluation of the PFNS and its associated uncertainties. While many of the UMC results are similar to the first-order Kalman filter results, significant differences appear when experimental data are excluded from the evaluation process. When experimental data are included, a few small nonlinearities are present in the high outgoing energy tail of the PFNS.
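The sampling scheme can be illustrated with a small importance-sampling sketch: draw parameter sets from a Gaussian centered on the prior, weight each by its likelihood against the data, and accumulate weighted posterior moments. All names and the toy linear model are illustrative stand-ins, not the PFNS evaluation itself:

```python
import numpy as np

def umc_moments(model, prior_mean, prior_cov, data, data_cov, n_samples, seed=0):
    """UMC-style posterior moments: sample parameter sets from a Gaussian
    centered on the prior, weight each by its likelihood against the data,
    and accumulate weighted first and second moments. The model argument
    stands in for the theoretical (e.g. PFNS) model calculation."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(prior_mean, prior_cov, size=n_samples)
    inv_dc = np.linalg.inv(data_cov)
    resid = np.array([model(p) - data for p in samples])
    chi2 = np.einsum('ij,jk,ik->i', resid, inv_dc, resid)
    w = np.exp(-0.5 * (chi2 - chi2.min()))   # likelihood weights, stabilized
    w /= w.sum()
    mean = w @ samples
    centered = samples - mean
    cov = centered.T @ (centered * w[:, None])
    return mean, cov
```

Because no linearization of `model` is involved, the weighted moments remain valid for nonlinear model responses, which is the point of UMC over the Kalman filter.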
Investigation of the uniqueness of the reverse Monte Carlo method: Studies on liquid water
NASA Astrophysics Data System (ADS)
Jedlovszky, P.; Bakó, I.; Pálinkás, G.; Radnai, T.; Soper, A. K.
1996-07-01
Reverse Monte Carlo (RMC) simulation of liquid water has been performed on the basis of experimental partial pair correlation functions. The resulting configurations were analyzed in various respects: the hydrogen bond angle distribution, three-body correlation, and orientational correlation were calculated. The question of the uniqueness of the RMC method was also examined. To do this, two conventional computer simulations of liquid water with different potential models were performed, and the resulting pair correlation function sets were fitted by RMC simulations. The resulting configurations were then compared to the original configurations to study how well the RMC method can reproduce the original structure. We showed that the configurations produced by the RMC method are not uniquely related to the pair correlation functions, even if the interactions in the original system were pairwise additive. Therefore the difference between the original simulated and the RMC configurations can serve as a measure of the uncertainty of the RMC results on real water. We found that RMC produces a less ordered structure than the original one in various respects. However, the orientational correlations were reproduced rather successfully. The RMC method exaggerates the amount of close-packed patches in the structure, although these patches certainly exist in liquid water.
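The core accept/reject loop of an RMC fit can be sketched in one dimension: perturb a particle, recompute a pair-distance histogram, and accept the move according to how the misfit to a target histogram (standing in for the experimental pair correlation functions) changes. This is a minimal illustration under simplifying assumptions, not the three-dimensional water model of the study:

```python
import numpy as np

def pair_hist(x, box, bins):
    """Histogram of minimum-image pair distances in a 1D periodic box."""
    d = np.abs(x[:, None] - x[None, :])[np.triu_indices(len(x), k=1)]
    d = np.minimum(d, box - d)
    return np.histogram(d, bins=bins)[0].astype(float)

def rmc_fit(target, x0, box, bins, n_steps, step=0.1, sigma=1.0, seed=0):
    """RMC accept/reject loop: move one particle at random and accept with
    probability exp(-(chi2_new - chi2_old) / (2 sigma^2)), where chi2
    measures the misfit to the target histogram."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    chi2 = np.sum((pair_hist(x, box, bins) - target) ** 2)
    for _ in range(n_steps):
        i = rng.integers(len(x))
        old = x[i]
        x[i] = (x[i] + rng.normal(0.0, step)) % box
        new_chi2 = np.sum((pair_hist(x, box, bins) - target) ** 2)
        if new_chi2 <= chi2 or rng.random() < np.exp(-(new_chi2 - chi2) / (2.0 * sigma**2)):
            chi2 = new_chi2
        else:
            x[i] = old   # reject the move
    return x, chi2
```

The non-uniqueness discussed in the abstract is visible even here: many particle arrangements reproduce the same histogram equally well, and the loop makes no distinction between them.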
Heath, Emily; Seuntjens, Jan
2006-02-15
In this work we present a method of calculating dose in deforming anatomy where the position and shape of each dose voxel is tracked as the anatomy changes. The EGSnrc/DOSXYZnrc Monte Carlo code was modified to calculate dose in voxels that are deformed according to deformation vectors obtained from a nonlinear image registration algorithm. The defDOSXYZ code was validated by consistency checks and by comparing calculations against DOSXYZnrc calculations. Calculations in deforming phantoms were compared with a dose remapping method employing trilinear interpolation. Dose calculations with the deforming voxels agree with DOSXYZnrc calculations within 1%. In simple deforming rectangular phantoms the trilinear dose remapping method was found to underestimate the dose by up to 29% for a 1.0 cm voxel size within the field, with larger discrepancies in regions of steep dose gradients. The agreement between the two calculation methods improved with smaller voxel size and deformation magnitude. A comparison of dose remapping from Inhale to Exhale in an anatomical breathing phantom demonstrated that dose deformations are underestimated by up to 16% in the penumbra and 8% near the surface with trilinear interpolation.
Low-Density Nozzle Flow by the Direct Simulation Monte Carlo and Continuum Methods
NASA Technical Reports Server (NTRS)
Chung, Chang-Hong; Kim, Suk C.; Stubbs, Robert M.; DeWitt, Kenneth J.
1994-01-01
Two different approaches, the direct simulation Monte Carlo (DSMC) method based on molecular gasdynamics, and a finite-volume approximation of the Navier-Stokes equations, which are based on continuum gasdynamics, are employed in the analysis of a low-density gas flow in a small converging-diverging nozzle. The fluid experiences various kinds of flow regimes including continuum, slip, transition, and free-molecular. Results from the two numerical methods are compared with Rothe's experimental data, in which density and rotational temperature variations along the centerline and at various locations inside a low-density nozzle were measured by the electron-beam fluorescence technique. The continuum approach showed good agreement with the experimental data as far as density is concerned. The results from the DSMC method showed good agreement with the experimental data, both in the density and the rotational temperature. It is also shown that the simulation parameters, such as the gas/surface interaction model, the energy exchange model between rotational and translational modes, and the viscosity-temperature exponent, have substantial effects on the results of the DSMC method.
Adatom Density Kinetic Monte Carlo (AD-KMC): A new method for fast growth simulation
NASA Astrophysics Data System (ADS)
Mandreoli, Lorenzo; Neugebauer, Joerg
2002-03-01
The main approach to performing growth simulations on an atomistic level is kinetic Monte Carlo (KMC). A problem with this method is that the CPU time increases exponentially with the growth temperature, making simulations exceedingly expensive. An analysis of typical KMC runs showed two characteristic time scales: t_ad, the characteristic time for an adatom jump, and t_surf, the characteristic time before the surface morphology changes. We have developed a new method, called adatom density KMC (AD-KMC), which eliminates the fast time scale t_ad. This is achieved by directly calculating the adatom distribution rather than following the trajectory of each adatom explicitly, as in KMC. Statistical checks were done on AD-KMC to test the method. The density of islands and the island-size distribution as a function of temperature and flux showed excellent agreement with KMC results and nucleation theory. Finally, we apply the method to study complex systems such as self-organization in V-grooves and lateral overgrowth.
An energy transfer method for 4D Monte Carlo dose calculation
Siebers, Jeffrey V.; Zhong, Hualiang
2008-01-01
This article presents a new method for four-dimensional Monte Carlo dose calculations which properly addresses dose mapping for deforming anatomy. The method, called the energy transfer method (ETM), separates the particle transport and particle scoring geometries: particle transport takes place in the typical rectilinear coordinate system of the source image, while energy deposition scoring takes place in a desired reference image via use of deformable image registration. Dose is the energy deposited per unit mass in the reference image. ETM has been implemented into DOSXYZnrc and compared with a conventional dose interpolation method (DIM) on deformable phantoms. For voxels whose contents merge in the deforming phantom, the doses calculated by ETM are exactly the same as an analytical solution, in contrast to the DIM, which has an average 1.1% dose discrepancy in the beam direction with a maximum error of 24.9% found in the penumbra of a 6 MV beam. The observed DIM error persists even if voxel subdivision is used. The ETM is computationally efficient and will be useful for 4D dose addition and for benchmarking alternative 4D dose addition algorithms. PMID:18841862
Cullen, D.E.; Perkins, S.T.; Plechaty, E.F.; Rathkopf, J.A.
1988-06-01
At the present time a Monte Carlo transport computer code is being designed and implemented at Lawrence Livermore National Laboratory to include the transport of neutrons, photons, electrons and light charged particles, as well as the coupling between all species of particles, e.g., photon-induced electron emission. Since this code is being designed to handle all particles, this approach is called the "All Particle Method". The code is designed as a test bed code to include as many different methods as possible (e.g., electron single or multiple scattering) and will be data driven to minimize the number of methods and models "hard wired" into the code. This approach will allow changes in the Livermore nuclear and atomic data bases, used to describe the interaction and production of particles, to directly control the execution of the program. In addition this approach will allow the code to be used at various levels of complexity to balance computer running time against the accuracy requirements of specific applications. This paper describes the current design philosophy and status of the code. Since the treatment of neutrons and photons used by the All Particle Method code is more or less conventional, emphasis in this paper is placed on the treatment of electron, and to a lesser degree charged particle, transport. An example is presented in order to illustrate an application in which the ability to accurately transport electrons is important. 21 refs., 1 fig.
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2014-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
Exploration of the use of the kinetic Monte Carlo method in simulation of quantum dot growth
NASA Astrophysics Data System (ADS)
Ramsey, James J.
2011-12-01
The use of Kinetic Monte Carlo (KMC) simulations in modeling growth of quantum dots (QDs) on semiconductor surfaces is explored. The underlying theory of the KMC method and the algorithms used in KMC implementations are explained, and the merits and shortcomings of previous KMC simulations on QD growth are discussed. Exploratory research has determined that on the one hand, quantitative KMC simulation of InAs/GaAs QD growth would need to be off-lattice, but that on the other hand, the available empirical interatomic potentials needed to make such off-lattice simulation tractable are not reliable for modeling semiconductor surfaces. A qualitative Kinetic Monte Carlo model is then developed for QD growth on a (001) substrate of tetrahedrally coordinated semiconductor material. It takes into account three different kinds of anisotropy: elastic anisotropy of the substrate, anisotropy in diffusion of isolated particles deposited onto the substrate (or single-particle diffusional anisotropy), and anisotropy in the interactions amongst nearest-neighboring deposited particles. Elastic effects are taken into account through a phenomenological repulsive ring model. The results of the qualitative simulation are as follows: (1) Effects of elastic anisotropy appear more pronounced in some experiments than others, with an anisotropic model needed to reproduce the order seen in some experimental results, while an isotropic model better explains the results from other experiments. (2) The single-particle diffusional anisotropy appears to explain the disorder in arrangement of quantum dots that has been seen in several experiments. (3) Anisotropy in interactions among nearest-neighboring particles appears to explain the oblong shapes of quantum dots seen in experiments of growth of InGaAs dots on GaAs(001), and to partially explain the presence of chains of dots as well. 
It is concluded that while the prospects of quantitative KMC simulations of quantum dot growth face difficulties, qualitative KMC simulations can lend some physical insights and lead to new questions that may be addressed by future research.
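For readers unfamiliar with the inner loop of the lattice simulations discussed above, one rejection-free KMC step (the BKL or n-fold-way scheme) can be sketched as follows; the event list and rates are placeholders for whatever hop and attachment processes a growth model defines:

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free (BKL / n-fold way) KMC step: choose event i with
    probability rates[i] / R and advance the clock by an exponential
    waiting time of mean 1/R, where R = sum(rates)."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:          # event i selected in proportion to its rate
            break
    dt = -math.log(1.0 - rng.random()) / total   # exponential waiting time
    return i, dt
```

In an on-lattice growth code the `rates` list is rebuilt (or incrementally updated) after every executed event, since each hop changes the neighborhoods, and hence the rates, of nearby adatoms.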
Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods
NASA Astrophysics Data System (ADS)
Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.
2011-12-01
Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential of mitigating the curse of dimensionality by reducing the total number of unknowns while still describing complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA methods with geometric sampling (referred to as 'Method 1'), (2) PCA methods with MCMC sampling (referred to as 'Method 2'), and (3) PCA methods with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with added noise and inverted them under two different situations: (1) the noisy data and the covariance matrix for PCA analysis are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient.
In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters, while Methods 1 and 2 provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for PCA analysis is inconsistent with the true models, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.
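A 'Method 2'-style reduction can be sketched as Metropolis sampling over the coefficients of the leading principal components of a prior covariance matrix. The identity forward model and all names below are illustrative stand-ins, not the convolution model of the study:

```python
import numpy as np

def pca_mcmc(forward, data, noise_std, prior_cov, k, n_steps, step=0.1, seed=0):
    """PCA-reduced MCMC sketch: express the model as a combination of the
    k leading principal components of the prior covariance, then run
    Metropolis sampling over the k reduced coefficients. Returns the
    posterior-mean model mapped back to the full parameter space."""
    rng = np.random.default_rng(seed)
    _, vecs = np.linalg.eigh(prior_cov)      # eigenvalues in ascending order
    Vk = vecs[:, ::-1][:, :k]                # k leading principal components
    def log_like(c):
        r = forward(Vk @ c) - data
        return -0.5 * float(r @ r) / noise_std**2
    c = np.zeros(k)
    ll = log_like(c)
    chain = np.empty((n_steps, k))
    for t in range(n_steps):
        prop = c + rng.normal(0.0, step, size=k)
        ll_prop = log_like(prop)
        if np.log(rng.random()) < ll_prop - ll:   # Metropolis accept/reject
            c, ll = prop, ll_prop
        chain[t] = c
    coeff_mean = chain[n_steps // 2:].mean(axis=0)   # discard burn-in half
    return Vk @ coeff_mean                           # back to full space
```

The biased-case failure mode described above corresponds to `prior_cov` whose leading components cannot represent the true model, in which case no amount of sampling over `c` recovers it.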
Calculation of the Entropy of random coil polymers with the hypothetical scanning Monte Carlo Method
White, Ronald P.; Meirovitch, Hagai
2006-01-01
Hypothetical scanning Monte Carlo (HSMC) is a recently developed method for calculating the absolute entropy, S, and free energy, F, from a given MC trajectory; it has been applied to liquid argon, TIP3P water, and peptides. In this paper HSMC is extended to random coil polymers by applying it to self-avoiding walks on a square lattice, a simple but difficult model due to strong excluded volume interactions. With HSMC the probability of a given chain is obtained as a product of transition probabilities calculated for each bond by MC simulations and a counting formula. This probability is exact in the sense that it is based on all the interactions of the system, and the only approximation is due to finite sampling. The method provides rigorous upper and lower bounds for F, which can be obtained from a very small sample and even from a single chain conformation. HSMC is independent of existing techniques and thus constitutes an independent research tool. The HSMC results are compared to those obtained by other methods, and its application to complex lattice chain models is discussed; we emphasize its ability to treat any type of boundary conditions for which a reference state (with known free energy) might be difficult to define for a thermodynamic integration process. Finally, we stress that the capability of HSMC to extract the absolute entropy from a given sample is important for studying relaxation processes, such as protein folding. PMID:16356071
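The product-of-transition-probabilities idea can be illustrated for a square-lattice self-avoiding walk. In this crude sketch each bond's transition probability is simply taken as uniform over the currently free neighbors (a one-step 'scanning'); HSMC instead estimates these probabilities with MC simulations of future chain growth, which is what makes its free-energy bounds rigorous:

```python
import math

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def chain_log_prob(chain):
    """Log-probability of a given square-lattice self-avoiding walk as a
    sum of per-bond log transition probabilities. Each probability is
    crudely taken as uniform over the free neighbors of the current end;
    HSMC estimates it by MC simulation of future chain growth instead."""
    occupied = {chain[0]}
    log_p = 0.0
    for prev, cur in zip(chain, chain[1:]):
        free = [m for m in MOVES
                if (prev[0] + m[0], prev[1] + m[1]) not in occupied]
        log_p += math.log(1.0 / len(free))  # uniform over free neighbors
        occupied.add(cur)
    return log_p
```

For a single chain, -chain_log_prob(chain) then serves as a rough entropy estimate in units of k_B; the crude one-step version is biased, which is precisely the gap the MC-estimated transition probabilities of HSMC close.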
Calculation of photon pulse height distribution using deterministic and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Akhavan, Azadeh; Vosoughi, Naser
2015-12-01
Radiation transport techniques used in radiation detection systems fall into one of two categories, probabilistic and deterministic. While probabilistic methods are typically used in pulse height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solution of the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work the linear transport equation is solved using two methods: a collided-components-of-the-scalar-flux algorithm, applied by iterating on the scattering source, and the ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, the multi-group gamma cross-section library required for this numerical transport simulation is generated in an appropriate discrete form. Finally, photon pulse height distributions are indirectly calculated by deterministic methods and compare favorably with those from the Monte Carlo based codes MCNPX and FLUKA.
Bishop, Joseph E.; Strack, O. E.
2011-03-22
A novel method is presented for assessing the convergence of a sequence of statistical distributions generated by direct Monte Carlo sampling. The primary application is to assess the mesh or grid convergence, and possibly divergence, of stochastic outputs from non-linear continuum systems. Example systems include those from fluid or solid mechanics, particularly those with instabilities and sensitive dependence on initial conditions or system parameters. The convergence assessment is based on demonstrating empirically that a sequence of cumulative distribution functions converges in the L∞ norm. The effect of finite sample sizes is quantified using confidence levels from the Kolmogorov-Smirnov statistic. The statistical method is independent of the underlying distributions. The statistical method is demonstrated using two examples: (1) the logistic map in the chaotic regime, and (2) a fragmenting ductile ring modeled with an explicit-dynamics finite element code. In the fragmenting ring example the convergence of the distribution describing neck spacing is investigated. The initial yield strength is treated as a random field. Two different random fields are considered, one with spatial correlation and the other without. Both cases converged, albeit to different distributions. The case with spatial correlation exhibited a significantly higher convergence rate compared with the one without spatial correlation.
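The empirical test can be sketched with the paper's first example, the logistic map in the chaotic regime: generate ensembles of final states from random initial conditions and compare their empirical CDFs with the two-sample Kolmogorov-Smirnov statistic. Sample sizes, seeds, and iteration counts below are arbitrary choices for illustration:

```python
import numpy as np

def logistic_ensemble(n, steps=100, r=4.0, seed=0):
    """Final states of the chaotic logistic map x -> r x (1 - x) for n
    random initial conditions (a direct Monte Carlo sample)."""
    rng = np.random.default_rng(seed)
    x = rng.random(n)
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the sup-norm distance
    between the two empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side='right') / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side='right') / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))
```

As the ensembles grow, the KS distance between successive members of the sequence shrinks toward the finite-sample noise floor, which is exactly what the confidence levels from the Kolmogorov-Smirnov statistic quantify.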
Uncertainty quantification through the Monte Carlo method in a cloud computing setting
NASA Astrophysics Data System (ADS)
Cunha, Américo; Nasser, Rafael; Sampaio, Rubens; Lopes, Hélio; Breitman, Karin
2014-05-01
The Monte Carlo (MC) method is the most common technique used for uncertainty quantification, due to its simplicity and good statistical results. However, its computational cost is extremely high and, in many cases, prohibitive. Fortunately, the MC algorithm is easily parallelizable, which allows its use in simulations where the computation of a single realization is very costly. This work presents a methodology for the parallelization of the MC method in the context of cloud computing. The strategy is based on the MapReduce paradigm and allows an efficient distribution of tasks in the cloud. The methodology is illustrated on a problem of structural dynamics that is subject to uncertainties. The results show that the technique is capable of producing good results concerning statistical moments of low order. It is shown that even a simple problem may require many realizations for convergence of histograms, which makes the cloud computing strategy very attractive (due to its high scalability and low cost). Additionally, the results regarding processing time and storage space usage allow one to qualify this new methodology as a solution for simulations that require a number of MC realizations beyond the standard.
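The map/reduce split exploited here can be sketched as follows: each map task runs a chunk of independent realizations with its own seed and emits partial sums, and the reduce step merges them into global moments. The Gaussian `model` below is a hypothetical stand-in for a costly structural-dynamics realization, and in a real cloud deployment the `map` would be distributed across workers rather than run locally:

```python
import random
from functools import reduce

def model(rng):
    """Stand-in for one costly Monte Carlo realization (e.g. a run of a
    structural-dynamics solver); here just a Gaussian response."""
    return rng.gauss(1.0, 2.0)

def simulate_chunk(task):
    """Map task: run n_real independent realizations with this chunk's
    seed and return partial statistics (count, sum, sum of squares)."""
    seed, n_real = task
    rng = random.Random(seed)
    s = sq = 0.0
    for _ in range(n_real):
        y = model(rng)
        s += y
        sq += y * y
    return (n_real, s, sq)

def combine(a, b):
    """Reduce task: merge the partial statistics of two chunks."""
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def mc_map_reduce(n_chunks, n_real):
    """Embarrassingly parallel MC: map over independent chunks, then
    reduce the partial sums to the global mean and variance."""
    parts = map(simulate_chunk, [(seed, n_real) for seed in range(n_chunks)])
    n, s, sq = reduce(combine, parts)
    mean = s / n
    return mean, sq / n - mean * mean
```

Because chunks only exchange three numbers each, the reduce step is cheap regardless of how many realizations each worker runs, which is what makes the scheme scale well in the cloud.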
NASA Astrophysics Data System (ADS)
Zhang, Jun; Guo, Fan
2015-08-01
Tooth modification is widely used in the gear industry to improve the meshing performance of gearings. However, few of the present studies on tooth modification consider the influence of inevitable random errors on gear modification effects. In order to investigate the effects of uncertain tooth modification amount variations on the dynamic behavior of a helical planetary gear system, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. Using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process to tooth modification amount variations, a statistical tooth modification model is developed, and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behaviors do not obey the normal distribution rule even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
Systematic hierarchical coarse-graining with the inverse Monte Carlo method
NASA Astrophysics Data System (ADS)
Lyubartsev, Alexander P.; Naômé, Aymeric; Vercauteren, Daniel P.; Laaksonen, Aatto
2015-12-01
We outline our coarse-graining strategy for linking micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained from detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can then be used in larger-scale simulations at a coarser level of description. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730-3737 (1995)] on a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package, MagiC, has been developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (bacterial LiaR regulator bound to a 26 base pair DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair potentials are used directly as look-up tables, but here we have fitted them to five Gaussians and a repulsive wall. Results show stable association between DNA and the model protein as well as a similar position fluctuation profile.
Cu-Au Alloys Using Monte Carlo Simulations and the BFS Method for Alloys
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Good, Brian; Ferrante, John
1996-01-01
Semi-empirical methods have shown considerable promise in aiding the calculation of many properties of materials. Materials used in engineering applications have defects that occur for various reasons, including processing. In this work we present the first application of the BFS method for alloys to describe some aspects of microstructure due to processing for the Cu-Au system (CuAu, CuAu3, and Cu3Au). We use finite temperature Monte Carlo calculations in order to show the influence of 'heat treatment' on the low-temperature phase of the alloy. Although relatively simple, the system has enough features that it can be used as a first test of the reliability of the technique. The main questions to be answered in this work relate to the existence of low temperature ordered structures for specific concentrations, for example, the ability to distinguish between rather similar phases for equiatomic alloys (CuAu I and CuAu II, the latter characterized by an antiphase boundary separating two identical phases).
NASA Astrophysics Data System (ADS)
Goto, Makoto; Kondoh, Yoshiomi
1998-01-01
A self-consistent Monte Carlo modelling technique has been developed to study normal and abnormal glow discharge plasmas. To simulate nonequilibrium particles, a limited weight probability method is introduced and a fine subslab system is used. These two methods are applied to a DC Ar-like gas discharge simulation. The simulations are performed for conditions corresponding to the experimental voltage and current sets of normal and abnormal glow discharges. The characteristic spatial profiles of plasmas for normal and abnormal glow discharges with highly nonequilibrium electron energy distributions are obtained. The increase in the current and the voltage from the normal glow leads to the following: (1) the density peak of the ions rises in the cathode region, (2) the density peak of the electrons rises and catches up with that of the ions, and the peak position simultaneously moves closer to the cathode, along with a small increase of plasma density in the bulk plasma region, (3) the reversal field strength next to the cathode fall increases, and (4) the two groups of the energy distribution separate into three groups at the cathode fall edge.
NASA Astrophysics Data System (ADS)
Zhang, Jun; Guo, Fan
2015-11-01
Tooth modification is widely used in the gear industry to improve the meshing performance of gearing. However, few of the present studies on tooth modification consider the influence of inevitable random errors on modification effects. In order to investigate how variations in tooth modification amounts affect the dynamic behavior of a helical planetary gear system, an analytical dynamic model including tooth modification parameters is proposed for a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. Using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process onto the tooth modification amounts, a statistical tooth modification model is developed, and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behavior does not obey a normal distribution even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
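The propagation step described above, normally distributed modification errors pushed through a fitted response surface, can be sketched in a few lines. The quadratic surface and all coefficients below are hypothetical stand-ins, not the paper's fitted regression model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quadratic response surface for the DTE fluctuation as a
# function of two tooth modification amounts (x1, x2); in the paper this
# is fitted to deterministic dynamic-model runs.
def dte_response(x1, x2):
    return 2.0 - 0.8 * x1 - 0.5 * x2 + 0.3 * x1**2 + 0.2 * x2**2 + 0.1 * x1 * x2

# Shift manufacturing/installation errors onto the modification amounts:
# sample them as normal random variables around their design values.
x1 = rng.normal(1.0, 0.1, 100_000)
x2 = rng.normal(0.8, 0.1, 100_000)
dte = dte_response(x1, x2)

# The quadratic terms make the output distribution non-Gaussian even
# though both inputs are normally distributed.
print(f"mean DTE = {dte.mean():.3f}, std = {dte.std():.4f}")
```

The nonlinearity of the surface is exactly what breaks the normality of the output, matching the paper's qualitative finding.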
NASA Astrophysics Data System (ADS)
Ido, Kota; Ohgoe, Takahiro; Imada, Masatoshi
2015-12-01
We develop a time-dependent variational Monte Carlo (t-VMC) method for quantum dynamics of strongly correlated electrons. The t-VMC method has been recently applied to bosonic systems and quantum spin systems. Here we propose a time-dependent trial wave function with many variational parameters, which is suitable for nonequilibrium strongly correlated electron systems. As the trial state, we adopt the generalized pair-product wave function with correlation factors and quantum-number projections. This trial wave function has been proven to accurately describe ground states of strongly correlated electron systems. To show the accuracy and efficiency of our trial wave function in nonequilibrium states as well, we present our benchmark results for relaxation dynamics during and after interaction quench protocols of fermionic Hubbard models. We find that our trial wave function well reproduces the exact results for the time evolution of physical quantities such as energy, momentum distribution, spin structure factor, and superconducting correlations. These results show that the t-VMC with our trial wave function offers an efficient and accurate way to study challenging problems of nonequilibrium dynamics in strongly correlated electron systems.
Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods
Godfrey, Andrew T; Gehin, Jess C; Bekar, Kursat B; Celik, Cihangir
2014-01-01
The Consortium for Advanced Simulation of Light Water Reactors* is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominantly as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numerical reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.
Asteroid orbital inversion using a virtual-observation Markov-chain Monte Carlo method
NASA Astrophysics Data System (ADS)
Muinonen, Karri; Granvik, Mikael; Oszkiewicz, Dagmara; Pieniluoma, Tuomo; Pentikäinen, Hanna
2012-12-01
A novel virtual-observation Markov-chain Monte Carlo method (MCMC) is presented for the asteroid orbital inverse problem posed by small to moderate numbers of astrometric observations. In the method, the orbital-element proposal probability density is chosen to mimic the convolution of the a posteriori density by itself: first, random errors are simulated for each observation, resulting in a set of virtual observations; second, least-squares orbital elements are derived for the virtual observations using the Nelder-Mead downhill simplex method; third, repeating the procedure gives a difference between two sets of what can be called virtual least-squares elements; and, fourth, the difference obtained constitutes a symmetric proposal in a random-walk Metropolis-Hastings algorithm, avoiding the explicit computation of the proposal density. In practice, the proposals are based on a large number of pre-computed sets of orbital elements. Virtual-observation MCMC is thus based on the characterization of the phase-space volume of solutions before the actual MCMC sampling. Virtual-observation MCMC is compared to MCMC orbital ranging, a random-walk Metropolis-Hastings algorithm based on sampling with the help of Cartesian positions at two observation dates, in the case of the near-Earth asteroid (85640) 1998 OX4. In the present preliminary comparison, the methods yield similar results for a 9.1-day observational time interval extracted from the full current astrometry of the asteroid. In the future, both of the methods are to be applied to the astrometric observations of the Gaia mission.
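A minimal sketch of the proposal mechanism on a toy one-dimensional Gaussian "posterior" (standing in for the orbital elements): differences of pre-computed samples give a symmetric random-walk proposal, so the Metropolis ratio needs only the posterior density, never the proposal density. The virtual solutions here are simply drawn from the toy posterior, not derived from simulated astrometry:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(x):
    # Toy unnormalized log-posterior (standard normal), a stand-in for
    # the orbital-element a posteriori density.
    return -0.5 * x**2

# Pre-computed "virtual least-squares solutions"; their pairwise
# differences mimic the posterior convolved with itself.
virtual = rng.normal(size=5000)

x, chain = 0.0, []
for _ in range(20_000):
    i, j = rng.integers(0, len(virtual), size=2)
    prop = x + (virtual[i] - virtual[j])   # symmetric proposal
    # Symmetry lets us skip evaluating the proposal density explicitly.
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop
    chain.append(x)
chain = np.array(chain)
print(chain.mean(), chain.std())
```

The chain mean and standard deviation should approach those of the target density, here 0 and 1.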
Favorite, J.A.
1999-09-01
In previous work, exponential convergence of Monte Carlo solutions using the reduced source method with Legendre expansion has been achieved only in one-dimensional rod and slab geometries. In this paper, the method is applied to three-dimensional (right parallelepiped) problems, with results suggesting that the method succeeds there as well. As implemented in this paper, the method approximates an angular integral of the flux with a discrete-ordinates numerical quadrature. It is possible that this approximation introduces an inconsistency that must be addressed.
Yamamoto, K.; Hashizume, K.; Wada, T.; Ohta, M.; Suda, T.; Nishimura, T.; Fujimoto, M. Y.; Kato, K.; Aikawa, M.
2006-07-12
We propose a Monte Carlo method to study the reaction paths in nucleosynthesis during stellar evolution. Determination of reaction paths is important for obtaining a physical picture of stellar evolution. The combination of network calculation and our method gives a better understanding of this physical picture. We apply our method to a helium shell flash model in an extremely metal-poor star.
Da, B.; Sun, Y.; Ding, Z. J.; Mao, S. F.; Zhang, Z. M.; Jin, H.; Yoshikawa, H.; Tanuma, S.
2013-06-07
A reverse Monte Carlo (RMC) method is developed to obtain the energy loss function (ELF) and optical constants from a measured reflection electron energy-loss spectroscopy (REELS) spectrum by an iterative Monte Carlo (MC) simulation procedure. The method combines the simulated annealing method, i.e., a Markov chain Monte Carlo (MCMC) sampling of oscillator parameters, surface and bulk excitation weighting factors, and band gap energy, with a conventional MC simulation of electron interaction with solids, which acts as a single step of MCMC sampling in this RMC method. To examine the reliability of this method, we have verified that the output data of the dielectric function are essentially independent of the initial values of the trial parameters, which is a basic property of a MCMC method. The optical constants derived for SiO{sub 2} in the energy loss range of 8-90 eV are in good agreement with other available data, and relevant bulk ELFs are checked by oscillator strength-sum and perfect-screening-sum rules. Our results show that the dielectric function can be obtained by the RMC method even with a wide range of initial trial parameters. The RMC method is thus a general and effective method for determining the optical properties of solids from REELS measurements.
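The simulated-annealing MCMC layer of the RMC scheme can be illustrated with a toy fit of a single Drude-Lorentz oscillator to a synthetic "spectrum". All parameter values are hypothetical, and the expensive REELS electron-transport simulation that forms each real sampling step is replaced here by a cheap model evaluation:

```python
import math, random

random.seed(7)

# Hypothetical target "spectrum": one Drude-Lorentz oscillator whose
# position, width and strength the annealer must recover (a stand-in
# for the ELF oscillators fitted to a measured REELS spectrum).
def lorentzian(E, pos, width, amp):
    return amp * width * E / ((E**2 - pos**2)**2 + (width * E)**2)

E_grid = [0.5 * i for i in range(2, 100)]
target = [lorentzian(E, 20.0, 5.0, 300.0) for E in E_grid]

def chi2(params):
    return sum((lorentzian(E, *params) - t) ** 2 for E, t in zip(E_grid, target))

# Simulated annealing: Markov-chain sampling of the oscillator
# parameters at a slowly lowered temperature.  In the paper each "step"
# is a full electron-transport MC simulation; here it is a cheap fit.
params = (15.0, 8.0, 200.0)
cost = chi2(params)
T = 1.0
for _ in range(10_000):
    trial = tuple(p * (1.0 + random.uniform(-0.05, 0.05)) for p in params)
    c = chi2(trial)
    if c < cost or random.random() < math.exp((cost - c) / T):
        params, cost = trial, c
    T *= 0.999   # geometric cooling schedule
print(params, cost)
```

The recovered parameters should drift toward the target values as the temperature drops, which is the basic check that the output is independent of the initial trial parameters.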
Quantifying uncertainties in pollutant mapping studies using the Monte Carlo method
NASA Astrophysics Data System (ADS)
Tan, Yi; Robinson, Allen L.; Presto, Albert A.
2014-12-01
Routine air monitoring provides accurate measurements of annual average concentrations of air pollutants, but the low density of monitoring sites limits its capability to capture intra-urban variation. Pollutant mapping studies measure air pollutants at a large number of sites during short periods. However, their short duration can cause substantial uncertainty in reproducing annual mean concentrations. In order to quantify this uncertainty for existing sampling strategies and investigate methods to improve future studies, we conducted Monte Carlo experiments with nationwide monitoring data from the EPA Air Quality System. Typical fixed sampling designs have much larger uncertainties than previously assumed, and produce accurate estimates of annual average pollution concentrations only approximately 80% of the time. Mobile sampling has difficulty estimating long-term exposures for individual sites, but performs better for groups of sites. The accuracy and precision of a given design decrease as data variation increases, indicating challenges at sites intermittently impacted by local sources such as traffic. Correcting measurements with reference sites does not completely remove the uncertainty associated with short-duration sampling. Using reference sites with the addition method can better account for temporal variations than the multiplication method. We propose feasible methods for future mapping studies to reduce uncertainties in estimating annual mean concentrations. Future fixed sampling studies should conduct two separate 1-week-long sampling periods in each of the 4 seasons. Mobile sampling studies should estimate annual mean concentrations for exposure groups of five or more sites. Fixed and mobile sampling designs have comparable probabilities of correctly ordering two sites, so they may have similar capabilities in predicting pollutant spatial variations. Simulated sampling designs have large uncertainties in reproducing seasonal and diurnal variations at individual sites, but are capable of predicting these variations for exposure groups.
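The Monte Carlo experiment behind these recommendations can be sketched as follows, with a synthetic year of daily data standing in for the EPA Air Quality System records: place the recommended two 1-week periods per season at random, many times, and score how often the subsample reproduces the annual mean. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic year of daily concentrations at one monitor (arbitrary
# units): a seasonal cycle plus day-to-day noise.
days = np.arange(365)
daily = 10 + 3 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, 365)
annual_mean = daily.mean()

# Recommended fixed design: two separate 1-week periods in each of the
# 4 seasons, placed at random; repeat to estimate sampling uncertainty.
estimates = []
for _ in range(5000):
    sample = []
    for season in range(4):
        lo = season * 91
        for s in rng.integers(lo, lo + 84, size=2):   # 7-day window fits
            sample.extend(daily[s:s + 7])
    estimates.append(np.mean(sample))
estimates = np.array(estimates)

rel_err = np.abs(estimates - annual_mean) / annual_mean
frac = (rel_err < 0.10).mean()
print(f"fraction of runs within 10% of the annual mean: {frac:.2f}")
```

Replacing the synthetic series with real monitor records and varying the design (number, length, and placement of periods) reproduces the kind of accuracy statistics quoted in the abstract.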
Constant-pH Hybrid Nonequilibrium Molecular Dynamics-Monte Carlo Simulation Method.
Chen, Yunjie; Roux, Benoît
2015-08-11
A computational method is developed to carry out explicit solvent simulations of complex molecular systems under conditions of constant pH. In constant-pH simulations, preidentified ionizable sites are allowed to spontaneously protonate and deprotonate as a function of time in response to the environment and the imposed pH. The method, based on a hybrid scheme originally proposed by H. A. Stern (J. Chem. Phys. 2007, 126, 164112), consists of carrying out short nonequilibrium molecular dynamics (neMD) switching trajectories to generate physically plausible configurations with changed protonation states that are subsequently accepted or rejected according to a Metropolis Monte Carlo (MC) criterion. To ensure microscopic detailed balance arising from such nonequilibrium switches, the atomic momenta are altered according to the symmetric two-ends momentum reversal prescription. To achieve higher efficiency, the original neMD-MC scheme is separated into two steps, reducing the need for generating a large number of unproductive and costly nonequilibrium trajectories. In the first step, the protonation state of a site is randomly attributed via a Metropolis MC process on the basis of an intrinsic pKa; an attempted nonequilibrium switch is generated only if this change in protonation state is accepted. This hybrid two-step inherent pKa neMD-MC simulation method is tested with single amino acids in solution (Asp, Glu, and His) and then applied to turkey ovomucoid third domain and hen egg-white lysozyme. Because of the simple linear increase in the computational cost relative to the number of titratable sites, the present method is naturally able to treat extremely large systems. PMID:26300709
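The cheap first step of the two-step scheme, a Metropolis decision on the protonation state driven by an intrinsic pKa, can be sketched as below. The pKa value is a textbook stand-in, and the costly neMD switching step is omitted:

```python
import random

random.seed(3)

def protonation_prob(pKa, pH):
    # Equilibrium probability of the protonated state of a single acid
    # site, from the Henderson-Hasselbalch relation.
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

def attempt_state_change(protonated, pKa, pH):
    # Step 1 of the two-step scheme: Metropolis MC on the protonation
    # state using an intrinsic pKa.  A costly neMD switch would only be
    # attempted when this cheap move is accepted (omitted here).
    if protonated:
        ratio = 10.0 ** (pH - pKa)   # try to deprotonate
    else:
        ratio = 10.0 ** (pKa - pH)   # try to protonate
    return (not protonated) if random.random() < min(1.0, ratio) else protonated

# Sample a His-like site (intrinsic pKa ~ 6.0, a textbook value) at
# pH 7; the sampled protonated fraction should match the closed form.
state, hits, n = True, 0, 200_000
for _ in range(n):
    state = attempt_state_change(state, 6.0, 7.0)
    hits += state
print(hits / n, protonation_prob(6.0, 7.0))
```

Because the expensive switch is only attempted after this cheap acceptance, unproductive nonequilibrium trajectories are avoided, which is the efficiency gain the abstract describes.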
On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems
Walsh, Jon
2015-08-31
The presentation summarizes work performed over summer 2015 related to Monte Carlo simulations. A flexible probability table interpolation scheme has been implemented and tested with results comparing favorably to the continuous phase-space on-the-fly approach.
Dosimetric validation of Acuros XB with Monte Carlo methods for photon dose calculations
Bush, K.; Gagne, I. M.; Zavgorodni, S.; Ansbacher, W.; Beckham, W.
2011-04-15
Purpose: The dosimetric accuracy of the recently released Acuros XB advanced dose calculation algorithm (Varian Medical Systems, Palo Alto, CA) is investigated for single radiation fields incident on homogeneous and heterogeneous geometries, and a comparison is made to the analytical anisotropic algorithm (AAA). Methods: Ion chamber measurements for the 6 and 18 MV beams within a range of field sizes (from 4.0x4.0 to 30.0x30.0 cm{sup 2}) are used to validate Acuros XB dose calculations within a unit density phantom. The dosimetric accuracy of Acuros XB in the presence of lung, low-density lung, air, and bone is determined using BEAMnrc/DOSXYZnrc calculations as a benchmark. Calculations using the AAA are included for reference to a current superposition/convolution standard. Results: Basic open field tests in a homogeneous phantom reveal an Acuros XB agreement with measurement to within {+-}1.9% in the inner field region for all field sizes and energies. Calculations on a heterogeneous interface phantom were found to agree with Monte Carlo calculations to within {+-}2.0%({sigma}{sub MC}=0.8%) in lung ({rho}=0.24 g cm{sup -3}) and within {+-}2.9%({sigma}{sub MC}=0.8%) in low-density lung ({rho}=0.1 g cm{sup -3}). In comparison, differences of up to 10.2% and 17.5% in lung and low-density lung were observed in the equivalent AAA calculations. Acuros XB dose calculations performed on a phantom containing an air cavity ({rho}=0.001 g cm{sup -3}) were found to be within the range of {+-}1.5% to {+-}4.5% of the BEAMnrc/DOSXYZnrc calculated benchmark ({sigma}{sub MC}=0.8%) in the tissue above and below the air cavity. A comparison of Acuros XB dose calculations performed on a lung CT dataset with a BEAMnrc/DOSXYZnrc benchmark shows agreement within {+-}2%/2mm and indicates that the remaining differences are primarily a result of differences in physical material assignments within a CT dataset. 
Conclusions: By considering the fundamental particle interactions in matter based on theoretical interaction cross sections, the Acuros XB algorithm is capable of modeling radiotherapy dose deposition with accuracy only previously achievable with Monte Carlo techniques.
HRMC_1.1: Hybrid Reverse Monte Carlo method with silicon and carbon potentials
NASA Astrophysics Data System (ADS)
Opletal, G.; Petersen, T. C.; O'Malley, B.; Snook, I. K.; McCulloch, D. G.; Yarovsky, I.
2011-02-01
The Hybrid Reverse Monte Carlo (HRMC) code models the atomic structure of materials via a combination of constraints, including experimental diffraction data and an empirical energy potential. The energy constraint takes the form of either the Environment Dependent Interatomic Potential (EDIP) for carbon and silicon, or the original and modified Stillinger-Weber potentials applicable to silicon. In this version, an update is made to correct an error in the EDIP carbon energy calculation routine.
New version program summary
Program title: HRMC version 1.1
Catalogue identifier: AEAO_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAO_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 36 991
No. of bytes in distributed program, including test data, etc.: 907 800
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: Any computer capable of running executables produced by the g77 Fortran compiler.
Operating system: Unix, Windows
RAM: Depends on the type of empirical potential used, the number of atoms, and which constraints are employed.
Classification: 7.7
Catalogue identifier of previous version: AEAO_v1_0
Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 777
Does the new version supersede the previous version?: Yes
Nature of problem: Atomic modelling using empirical potentials and experimental data.
Solution method: Monte Carlo
Reasons for new version: An error in a term associated with the calculation of energies using the EDIP carbon potential, which resulted in incorrect energies.
Summary of revisions: Fix to correct brackets in the two-body part of the EDIP carbon potential routine.
Additional comments: The code is not standard FORTRAN 77 but includes some additional features, and therefore generates errors when compiled with the Nag95 compiler. It does compile successfully with the GNU g77 compiler (http://www.gnu.org/software/fortran/fortran.html). Running time: Depends on the type of empirical potential used, the number of atoms, and which constraints are employed. The test included in the distribution took 37 minutes on a DEC Alpha PC.
NASA Astrophysics Data System (ADS)
Yamamoto, Alexandre Y.; Oliveira, Aurenice M.; Lima, Ivan T.
2014-05-01
The numerical accuracy of the results obtained using the multicanonical Monte Carlo (MMC) algorithm is strongly dependent on the choice of the step size, which is the range of the MMC perturbation from one sample to the next. The proper choice of the MMC step size leads to much faster statistical convergence of the algorithm for the calculation of rare events. One relevant application of this method is the calculation of the probability of the bins in the tail of the discretized probability density function of the differential group delay between the principal states of polarization due to polarization mode dispersion. We observed that the optimum MMC performance is strongly correlated with the inflection point of the actual transition rate from one bin to the next. We also observed that the optimum step size does not correspond to any specific value of the acceptance rate of the transitions in MMC. The results of this study can be applied to the improvement of the performance of MMC applied to the calculation of other rare events of interest in optical communications, such as the bit error ratio and pattern dependence in optical fiber systems with coherent receivers.
Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods
Narita, Y.; Eberl, S.; Nakamura, T.
1996-12-31
Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter and scatter plus primary) were simulated for {sup 99m}Tc and {sup 201}Tl for numerical chest phantoms. Data were reconstructed with ordered-subset ML-EM algorithm including attenuation correction using the transmission data. In the chest phantom simulation, TDCS provided better S/N than TEW, and better accuracy, i.e., 1.0% vs -7.2% in myocardium, and -3.7% vs -30.1% in the ventricular chamber for {sup 99m}Tc with TDCS and TEW, respectively. For {sup 201}Tl, TDCS provided good visual and quantitative agreement with simulated true primary image without noticeably increasing the noise after scatter correction. Overall TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
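For reference, the TEW scatter estimate is commonly computed from two narrow windows flanking the photopeak, a trapezoid approximation to the scatter under the peak. The counts and window widths below are illustrative only, not values from this simulation:

```python
def tew_primary(c_main, c_left, c_right, w_main, w_side):
    # Triple-energy-window estimate: scatter under the photopeak is
    # approximated by the trapezoid spanned by two narrow side windows.
    scatter = (c_left / w_side + c_right / w_side) * w_main / 2.0
    return max(c_main - scatter, 0.0)

# Hypothetical Tc-99m acquisition: a 28 keV photopeak window with 3 keV
# side windows (illustrative counts, not from the simulation).
primary = tew_primary(c_main=10000, c_left=90, c_right=30, w_main=28, w_side=3)
print(primary)  # → 9440.0
```

The pixel-by-pixel division by narrow, low-count side windows is also where TEW's noise penalty relative to TDCS comes from.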
Development of a software package for solid-angle calculations using the Monte Carlo method
NASA Astrophysics Data System (ADS)
Zhang, Jie; Chen, Xiulian; Zhang, Changsheng; Li, Gang; Xu, Jiayun; Sun, Guangai
2014-02-01
Solid-angle calculations play an important role in the absolute calibration of radioactivity measurement systems and in the determination of the activity of radioactive sources, and they are often complicated. In the present paper, a software package is developed to provide a convenient tool for solid-angle calculations in nuclear physics. The software calculates solid angles using the Monte Carlo method, into which a new type of variance reduction technique has been integrated. The package, developed under the Microsoft Foundation Classes (MFC) environment in Microsoft Visual C++, has a graphical user interface, in which the visualization function is integrated in conjunction with OpenGL. One advantage of the proposed software package is that it can calculate the solid angle subtended by a detector with various geometric shapes (e.g., cylinder, square prism, regular triangular prism or regular hexagonal prism) at a point, circular or cylindrical source without any difficulty. The results obtained from the proposed software package were compared with those obtained in previous studies and those calculated with Geant4. The comparison shows that the proposed software package can produce accurate solid-angle values with greater computation speed than Geant4.
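The core of such a Monte Carlo solid-angle calculation is simple, as this sketch for an on-axis point source and a circular detector face shows (no variance reduction here); the analytic on-axis formula provides a check:

```python
import math, random

random.seed(4)

def mc_solid_angle_disk(d, r, n=200_000):
    # Monte Carlo solid angle subtended by a circular detector face of
    # radius r at an on-axis point source a distance d away: sample
    # isotropic directions and count those that cross the disk.
    hits = 0
    for _ in range(n):
        cos_t = random.uniform(-1.0, 1.0)   # isotropic in cos(theta)
        if cos_t <= 0.0:
            continue                        # emitted away from the detector
        # The ray crosses the detector plane at radius d * tan(theta).
        if d * math.sqrt(1.0 - cos_t**2) / cos_t <= r:
            hits += 1
    return 4.0 * math.pi * hits / n

d, r = 10.0, 5.0
omega_mc = mc_solid_angle_disk(d, r)
omega_exact = 2.0 * math.pi * (1.0 - d / math.hypot(d, r))  # on-axis formula
print(omega_mc, omega_exact)
```

For off-axis sources or prism-shaped detectors, only the hit test changes, which is what makes the Monte Carlo approach attractive for arbitrary geometries.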
Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Kramer, Richard
2011-08-01
Health risks attributable to the exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to organ and tissues of interest. The FASH2 (Female Adult meSH) and the MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and the MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.
Numerical simulation of pulsed neutron induced gamma log using Monte Carlo method
NASA Astrophysics Data System (ADS)
Byeongho, Byeongho; Hwang, Seho; Shin, Jehyun; Park, Chang Je; Kim, Jongman; Kim, Ki-Seog
2015-04-01
Recently, the neutron-induced gamma log has come to play a key role in shale plays. This study was performed to understand the energy spectrum characteristics of the neutron-induced gamma log using the Monte Carlo method. A neutron generator emitting 14 MeV neutrons was used. The fluxes of thermal neutrons and capture gammas were calculated from detectors arranged at 10 cm intervals from the neutron generator. Sandstone, limestone, granite, and basalt were selected to estimate and simulate response characteristics using MCNP. A design for reducing the effects of natural gamma radiation (K, Th, U) and backscattering was also applied to the sonde model in MCNP. From the analysis of the energy spectra of capture gammas reaching the detector in the numerical sonde model, we found that elements with large neutron capture cross-sections that are abundant in the formation, such as calcium, iron, silicon, magnesium, aluminium, and hydrogen, were detected. These results can help in designing the optimal array of neutron and capture-gamma detectors.
Simulation of aggregating particles in complex flows by the lattice kinetic Monte Carlo method
NASA Astrophysics Data System (ADS)
Flamm, Matthew H.; Sinno, Talid; Diamond, Scott L.
2011-01-01
We develop and validate an efficient lattice kinetic Monte Carlo (LKMC) method for simulating particle aggregation in laminar flows with spatially varying shear rate, such as parabolic flow or flows with standing vortices. A contact time model was developed to describe the particle-particle collision efficiency as a function of the local shear rate, G, and approach angle, θ. This model effectively accounts for the hydrodynamic interactions between approaching particles, which are not explicitly considered in the LKMC framework. For imperfect collisions, the derived collision efficiency, ε = 1 - ∫_0^{π/2} sin θ exp(-2 cot θ Γ_agg/G) dθ, was found to depend only on Γ_agg/G, where Γ_agg is the specified aggregation rate. For aggregating platelets in tube flow, Γ_agg = 0.683 s^-1 predicts the experimentally measured ε across a physiological range (G = 40-1000 s^-1) and is consistent with αIIbβ3-fibrinogen bond dynamics. Aggregation in parabolic flow resulted in the largest aggregates forming near the wall, where shear rate and residence time were maximal; however, intermediate regions between the wall and the center exhibited the highest aggregation rate due to depletion of reactants nearest the wall. Then, motivated by stenotic or valvular flows, we employed the LKMC simulation developed here for baffled geometries that exhibit regions of squeezing flow and standing recirculation zones. In these calculations, the largest aggregates were formed within the vortices (maximal residence time), while squeezing flow regions corresponded to zones of highest aggregation rate.
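The collision-efficiency integral quoted above is straightforward to evaluate numerically; this sketch uses a midpoint rule and the abstract's Γ_agg = 0.683 s⁻¹ at the two ends of the physiological shear range:

```python
import math

def collision_efficiency(gamma_agg, G, n=100_000):
    # Midpoint-rule evaluation of
    #   eps = 1 - ∫_0^{pi/2} sin(t) exp(-2 cot(t) * gamma_agg / G) dt,
    # which depends only on the ratio gamma_agg / G.
    ratio, h = gamma_agg / G, (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += math.sin(t) * math.exp(-2.0 * ratio / math.tan(t)) * h
    return 1.0 - total

# Efficiency falls as shear G grows relative to the bonding rate:
# fast-moving particle pairs spend too little contact time to bond.
e40 = collision_efficiency(0.683, 40.0)      # low end of physiological G
e1000 = collision_efficiency(0.683, 1000.0)  # high end
print(e40, e1000)
```

The midpoint rule sidesteps the cot θ singularity at θ = 0, where the integrand vanishes anyway.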
Assessment of the Contrast to Noise Ratio in PET Scanners with Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.
2015-09-01
The aim of the present study was to assess the contrast to noise ratio (CNR) of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction. The PET scanner simulated was the GE DiscoveryST. The plane source, consisting of a TLC plate, was simulated as a layer of silica gel on aluminum (Al) foil substrates immersed in an 18F-FDG bath solution. Image quality was assessed in terms of the CNR, which was estimated from coronal reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed using various numbers of subsets (3, 15 and 21) and iterations (2 to 20). CNR values were found to decrease as both iterations and subsets increase. Two (2) iterations were found to be optimal. The simulated PET evaluation method, based on the TLC plane source, can be useful in image quality assessment of PET scanners.
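One common way to compute a CNR from a reconstructed image is from a signal and a background region of interest, as sketched below with hypothetical voxel values; the study's exact ROI definitions may differ:

```python
import numpy as np

rng = np.random.default_rng(5)

def cnr(roi_signal, roi_background):
    # One common contrast-to-noise definition: difference of ROI means
    # divided by the background noise (standard deviation).
    return (roi_signal.mean() - roi_background.mean()) / roi_background.std()

# Hypothetical voxel values from a coronal slice: a hot plane-source
# region over a cooler background (illustrative numbers only).
signal = rng.normal(100.0, 10.0, 500)
background = rng.normal(20.0, 8.0, 500)
c = cnr(signal, background)
print(f"CNR = {c:.1f}")
```

Recomputing such a metric across reconstructions with different subset/iteration settings is how the iteration-dependence reported above would be tabulated.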
Monte Carlo analysis of thermochromatography as a fast separation method for nuclear forensics
Hall, Howard L
2012-01-01
Nuclear forensic science has become increasingly important for global nuclear security, and enhancing the timeliness of forensic analysis has been established as an important objective in the field. New, faster techniques must be developed to meet this objective. Current approaches for the analysis of minor actinides, fission products, and fuel-specific materials require time-consuming chemical separation coupled with measurement through either nuclear counting or mass spectrometry. These very sensitive measurement techniques can be hindered by impurities or incomplete separation in even the most painstaking chemical separations. High-temperature gas-phase separation or thermochromatography has been used in the past for the rapid separations in the study of newly created elements and as a basis for chemical classification of that element. This work examines the potential for rapid separation of gaseous species to be applied in nuclear forensic investigations. Monte Carlo modeling has been used to evaluate the potential utility of the thermochromatographic separation method, albeit this assessment is necessarily limited due to the lack of available experimental data for validation.
IR imaging simulation and analysis for aeroengine exhaust system based on reverse Monte Carlo method
NASA Astrophysics Data System (ADS)
Chen, Shiguo; Chen, Lihai; Mo, Dongla; Shi, Jingcheng
2014-11-01
The IR radiation characteristics of an aeroengine are an important basis for IR stealth design and anti-stealth detection of aircraft. With the development of IR imaging sensor technology, the importance of aircraft IR stealth increases. An effort is presented to explore target IR radiation imaging simulation based on the Reverse Monte Carlo Method (RMCM) combined with commercial CFD software. Flow and IR radiation characteristics of an aeroengine exhaust system are investigated by developing a full-size geometry model based on the actual parameters, using a flow-IR integrated structured mesh, obtaining the engine performance parameters as the inlet boundary conditions of the mixer section, and constructing a numerical simulation model of the IR radiation characteristics of the engine exhaust system based on RMCM. With the above models, the IR radiation characteristics of the aeroengine exhaust system are given, focusing on IR spectral radiance imaging in the typical detection band at azimuth 20°. The results show that: (1) at small azimuth angles, the IR radiation comes mainly from the center cone among all hot parts; near azimuth 15°, the mixer makes the biggest radiation contribution, while the center cone, turbine and flame stabilizer contribute comparably; (2) the main radiation components and their spatial distribution differ between spectral bands: CO2 absorbs and emits strongly at 4.18, 4.33 and 4.45 μm, and H2O at 3.0 and 5.0 μm.
Improving Bayesian analysis for LISA Pathfinder using an efficient Markov Chain Monte Carlo method
NASA Astrophysics Data System (ADS)
Ferraioli, Luigi; Porter, Edward K.; Armano, Michele; Audley, Heather; Congedo, Giuseppe; Diepholz, Ingo; Gibert, Ferran; Hewitson, Martin; Hueller, Mauro; Karnesis, Nikolaos; Korsakova, Natalia; Nofrarias, Miquel; Plagnol, Eric; Vitale, Stefano
2014-02-01
We present a parameter estimation procedure based on a Bayesian framework by applying a Markov Chain Monte Carlo algorithm to the calibration of the dynamical parameters of the LISA Pathfinder satellite. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment in order to ensure an effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm with an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy but with one moving in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher Information matrix. The algorithm proposing jumps in the eigen-space of the Fisher Information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to LISA Pathfinder data. For this experiment, we return parameter values that are all within 1 σ of the injected values. When we analyse the accuracy of our parameter estimation in terms of the effect they have on the force-per-unit of mass noise, we find that the induced errors are three orders of magnitude less than the expected experimental uncertainty in the power spectral density.
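The Fisher eigen-space proposal can be illustrated on a toy correlated two-parameter Gaussian posterior (a hypothetical stand-in for the LISA Pathfinder likelihood): for a Gaussian, the Fisher matrix equals the inverse covariance, and jumps along its eigenvectors, scaled by the inverse square root of the eigenvalues, match the posterior geometry:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy two-parameter correlated Gaussian posterior, a hypothetical
# stand-in for the LISA Pathfinder likelihood.
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
inv_cov = np.linalg.inv(cov)          # = Fisher matrix for a Gaussian

def log_post(x):
    return -0.5 * x @ inv_cov @ x

# Propose jumps along the Fisher eigenvectors, scaled by the inverse
# square root of the eigenvalues (the posterior width along each axis).
evals, evecs = np.linalg.eigh(inv_cov)
x, chain = np.zeros(2), []
for _ in range(50_000):
    k = rng.integers(2)                        # pick one eigen-direction
    prop = x + (rng.normal() / np.sqrt(evals[k])) * evecs[:, k]
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop
    chain.append(x.copy())
C = np.cov(np.array(chain).T)
print(C)
```

A naive coordinate-direction walker would have to take tiny steps along the narrow correlated direction; the eigen-space proposal does not, which is one intuition for its higher acceptance rate.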
Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis
Heo, W.; Kim, W.; Kim, Y.; Yun, S.
2013-07-01
A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross section data are generated using a collision estimator in MCNP5. A high-order Legendre scattering cross section generation module was added to the MCNP5 code. Cross section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and were applied in the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic data were used. In this paper, a hybrid MCNP5/DIF3D calculation was used to analyze the core model. The cross section data were generated using MCNP5. The k{sub eff} and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of k{sub eff}, the 9-group MCNP5/DIF3D result has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)
Spaceborne imaging simulation of ship based on Monte Carlo ray tracing method
NASA Astrophysics Data System (ADS)
Wang, Biao; He, Hong-fei; Lin, Jia-xuan
2014-11-01
To demonstrate image quality and sensor performance for target detection before the satellite is launched, it is necessary to establish an end-to-end model that expresses the detection probability in terms of atmospheric effects, the sensor, and the optical scattering properties of the target. It is difficult to develop an accurate 3D radiation transfer model for a scene including a complex target, especially for a large-scale scene, so it is beneficial to process the target and the large-scale background separately. Radiance from the sea background can be solved exactly with an atmosphere-ocean coupled radiation transfer model; for the ship target, however, a simpler sample model is sufficient. In this model the illumination is separated into direct sunlight and sky light, and the radiance received by the sensor is the radiance scattered from the target and attenuated by the atmosphere. A high spatial/spectral resolution image simulated with the Monte Carlo ray tracing method is used as input for modeling space-borne imagery, which is economical for demonstrating sensor performance under different conditions, and multiple scattering can also be considered. The bidirectional reflectance distribution function (BRDF) is introduced to characterize the light scattering model of the ship sample material.
In vivo simulation environment for fluorescence molecular tomography using Monte Carlo method
NASA Astrophysics Data System (ADS)
Zhang, Yizhai; Xu, Qiong; Li, Jin; Tang, Shaojie; Zhang, Xin
2008-12-01
Optical sensing of specific molecular targets using near-infrared light has been recognized as a crucial technology that could change our future. Fluorescence Molecular Tomography (FMT) is among the most novel technologies in optical sensing. It uses near-infrared light (600-900 nm) as the probing radiation and fluorochromes as probes to perform noncontact three-dimensional imaging of molecular targets and to exhibit molecular processes in vivo. In order to address the forward simulation problem in FMT, this paper introduces a new simulation model. The model utilizes the Monte Carlo method and is implemented in the C++ programming language. Its accuracy has been verified by comparison with analytic solutions and with MOSE from the University of Iowa and the Chinese Academy of Sciences. The main features of the model are that it can simulate both bioluminescence imaging and FMT, perform analytic calculations, and support multiple sources and CCD detectors simultaneously. It can generate sufficient and proper data in preparation for the study of fluorescence molecular tomography.
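A Monte Carlo photon-migration step of the kind such forward models rely on can be sketched as follows. This is a deliberately crude 1-D slab with Henyey-Greenstein scattering and weight-based absorption; the optical coefficients are illustrative, and this is not the paper's C++ code:

```python
import numpy as np

def sample_hg(g, rng):
    """Sample the cosine of the scattering angle from the
    Henyey-Greenstein phase function with anisotropy g."""
    if abs(g) < 1e-6:
        return 2.0 * rng.random() - 1.0
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

def transmit_slab(mu_a, mu_s, g, thickness, n_photons=5000, seed=1):
    """Toy photon transport through a homogeneous slab: exponential free
    paths, absorption by weight attenuation, Henyey-Greenstein scattering
    folded into the depth direction cosine. Returns the transmitted
    weight fraction."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    transmitted = 0.0
    for _ in range(n_photons):
        z, cos_t, w = 0.0, 1.0, 1.0    # depth, direction cosine, weight
        while w > 1e-4:
            z += cos_t * rng.exponential(1.0 / mu_t)
            if z >= thickness:         # escaped through the far side
                transmitted += w
                break
            if z < 0.0:                # escaped backwards, discarded
                break
            w *= albedo                # deposit part of the weight locally
            cos_s = sample_hg(g, rng)
            sin_t = np.sqrt(max(0.0, 1.0 - cos_t * cos_t))
            sin_s = np.sqrt(max(0.0, 1.0 - cos_s * cos_s))
            phi = 2.0 * np.pi * rng.random()
            cos_t = min(1.0, max(-1.0, cos_t * cos_s + sin_t * sin_s * np.cos(phi)))
    return transmitted / n_photons

frac = transmit_slab(mu_a=0.1, mu_s=1.0, g=0.9, thickness=2.0)
```

A full FMT forward solver adds 3-D geometry, boundary refraction, and fluorescence re-emission on top of this random-walk core.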
Velazquez, L; Castro-Palacio, J C
2015-03-01
Velazquez and Curilef [J. Stat. Mech. (2010); J. Stat. Mech. (2010)] have proposed a methodology to extend Monte Carlo algorithms that are based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically the well-known multihistogram method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989)]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L×L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site qL during the temperature-driven phase transition of this model, whose size dependence seems to follow a power law qL(L)∝(1/L)z with exponent z≃0.26±0.02. The compatibility of these results with the continuous character of the temperature-driven phase transition when L→+∞ is discussed. PMID:25871247
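The reweighting idea behind the Ferrenberg-Swendsen method can be illustrated on a toy two-level system, where samples drawn at one inverse temperature are reused to estimate the mean energy at another. This is a single-histogram sketch with assumed toy parameters, not the full multihistogram machinery:

```python
import numpy as np

def reweight_mean_energy(energies, beta0, beta):
    """Single-histogram (Ferrenberg-Swendsen) reweighting: estimate the
    mean energy at inverse temperature `beta` from samples drawn at
    `beta0`. A log-sum-exp shift keeps the weights numerically stable."""
    e = np.asarray(energies, float)
    logw = -(beta - beta0) * e     # relative Boltzmann weight of each sample
    logw -= logw.max()
    w = np.exp(logw)
    return np.sum(w * e) / np.sum(w)

# toy check: a two-level system (E = 0 or 1) sampled exactly at beta0
rng = np.random.default_rng(2)
beta0, beta1 = 1.0, 1.5
p1 = np.exp(-beta0) / (1.0 + np.exp(-beta0))     # P(E=1) at beta0
samples = (rng.random(200000) < p1).astype(float)
est = reweight_mean_energy(samples, beta0, beta1)
exact = np.exp(-beta1) / (1.0 + np.exp(-beta1))  # <E> at beta1, analytic
```

The multihistogram method combines runs at several temperatures with optimal weights; the per-sample reweighting factor is the same ingredient shown here.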
Monte Carlo method-based QSAR modeling of penicillins binding to human serum proteins.
Veselinović, Jovana B; Toropov, Andrey A; Toropova, Alla P; Nikolić, Goran M; Veselinović, Aleksandar M
2015-01-01
The binding of penicillins to human serum proteins was modeled with optimal descriptors based on the Simplified Molecular Input-Line Entry System (SMILES). The concentrations of protein-bound drug for 87 penicillins expressed as percentage of the total plasma concentration were used as experimental data. The Monte Carlo method was used as a computational tool to build up the quantitative structure-activity relationship (QSAR) model for penicillins binding to plasma proteins. One random data split into training, test and validation sets was examined. The calculated QSAR model had the following statistical parameters: r(2) = 0.8760, q(2) = 0.8665, s = 8.94 for the training set and r(2) = 0.9812, q(2) = 0.9753, s = 7.31 for the test set. For the validation set, the statistical parameters were r(2) = 0.727 and s = 12.52, but after removing the three worst outliers, the statistical parameters improved to r(2) = 0.921 and s = 7.18. SMILES-based molecular fragments (structural indicators) responsible for the increase and decrease of penicillins binding to plasma proteins were identified. The possibility of using these results for the computer-aided design of new penicillins with desired binding properties is presented. PMID:25408278
Applications of Monte Carlo methods for the analysis of MHTGR case of the VHTRC benchmark
Difilippo, F.C.
1994-03-01
Monte Carlo methods, as implemented in the MCNP code, have been used to analyze the neutronics characteristics of benchmarks related to Modular High Temperature Gas-Cooled Reactors. The benchmarks are idealized versions of the Japanese (VHTRC) and Swiss (PROTEUS) facilities and an actual configuration of the PROTEUS Configuration 1 experiment. The purpose of the unit cell benchmarks is to compare multiplication constants, critical bucklings, migration lengths, reaction rates and spectral indices. The purpose of the full reactor benchmarks is to compare multiplication constants, reaction rates, spectral indices, neutron balances, reaction rate profiles, temperature coefficients of reactivity and effective delayed neutron fractions. All of these parameters can be calculated by MCNP, which can provide a very detailed model of the geometry of the configurations, from fuel particles to entire fuel assemblies, while using a continuous energy model. These characteristics make MCNP a very useful tool for analyzing these MHTGR benchmarks. The author has used the latest version of MCNP, 4.x, eld = 01/12/93, with an ENDF/B-V cross section library. This library does not yet contain temperature-dependent resonance materials, so all calculations correspond to room temperature, T = 300 K. Two separate reports were made -- one for the VHTRC, the other for the PROTEUS benchmark.
Applications of Monte Carlo methods for the analysis of MHTGR case of the PROTEUS benchmark
Difilippo, F.C.
1994-04-01
Monte Carlo methods, as implemented in the MCNP code, have been used to analyze the neutronics characteristics of benchmarks related to Modular High Temperature Gas-Cooled Reactors. The benchmarks are idealized versions of the Japanese (VHTRC) and Swiss (PROTEUS) facilities and an actual configuration of the PROTEUS Configuration I experiment. The purpose of the unit cell benchmarks is to compare multiplication constants, critical bucklings, migration lengths, reaction rates and spectral indices. The purpose of the full reactor benchmarks is to compare multiplication constants, reaction rates, spectral indices, neutron balances, reaction rate profiles, temperature coefficients of reactivity and effective delayed neutron fractions. All of these parameters can be calculated by MCNP, which can provide a very detailed model of the geometry of the configurations, from fuel particles to entire fuel assemblies, while using a continuous energy model. These characteristics make MCNP a very useful tool for analyzing these MHTGR benchmarks. We have used the latest version of MCNP, 4.x, eld = 01/12/93, with an ENDF/B-V cross section library. This library does not yet contain temperature-dependent resonance materials, so all calculations correspond to room temperature, T = 300 K. Two separate reports were made -- one for the VHTRC, the other for the PROTEUS benchmark.
NASA Astrophysics Data System (ADS)
Agudelo-Giraldo, J. D.; Restrepo-Parra, E.; Restrepo, J.
2015-10-01
The Metropolis algorithm and the classical Heisenberg approximation were implemented by the Monte Carlo method to design a computational approach to the magnetization and resistivity of La2/3Ca1/3MnO3 as functions of Mn-ion vacancies and an increasing external magnetic field. This compound is ferromagnetic, and it exhibits the colossal magnetoresistance (CMR) effect. The monolayer was built with L×L×d dimensions, with L=30 umc (units of magnetic cells) in the x-y plane and a thickness of d=12 umc. The Hamiltonian that was used contains interactions between first neighbors, the magnetocrystalline anisotropy effect and the response to the external applied magnetic field. The system that was considered contains mixed-valence bonds: Mn3+eg'-O-Mn3+eg, Mn3+eg-O-Mn4+d3 and Mn3+eg'-O-Mn4+d3. The vacancies were placed randomly in the sample, replacing any type of Mn ion. The main result shows that without vacancies, the transitions TC (Curie temperature) and TMI (metal-insulator temperature) are similar, whereas with an increasing vacancy percentage, TMI presents lower values than TC. This situation is caused by the competition between the external magnetic field, the vacancy percentage and the magnetocrystalline anisotropy, which favors the magnetoresistive effect at temperatures below TMI. Resistivity loops were also observed, which show a direct correlation with the magnetization hysteresis loops at temperatures below TC.
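A minimal Metropolis update for a classical Heisenberg lattice can be sketched as follows. Only the nearest-neighbour exchange term of the paper's Hamiltonian is retained (no vacancies, anisotropy or field), on an assumed small cubic lattice with toy parameters:

```python
import numpy as np

def random_unit_vectors(n, rng):
    """Uniform points on the unit sphere (normalized Gaussian vectors)."""
    v = rng.standard_normal((n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def metropolis_heisenberg(L, T, sweeps, J=1.0, seed=3):
    """Metropolis sampling of the classical Heisenberg model on an L^3
    cubic lattice with periodic boundaries: H = -J sum_<ij> S_i . S_j.
    Returns final energy per spin and magnetization per spin."""
    rng = np.random.default_rng(seed)
    n = L ** 3
    spins = random_unit_vectors(n, rng).reshape(L, L, L, 3)
    def local_field(x, y, z):
        return (spins[(x+1) % L, y, z] + spins[(x-1) % L, y, z]
              + spins[x, (y+1) % L, z] + spins[x, (y-1) % L, z]
              + spins[x, y, (z+1) % L] + spins[x, y, (z-1) % L])
    for _ in range(sweeps):
        for _ in range(n):
            x, y, z = rng.integers(0, L, 3)
            new = random_unit_vectors(1, rng)[0]
            h = J * local_field(x, y, z)
            dE = -(new - spins[x, y, z]) @ h   # energy change of the move
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[x, y, z] = new
    m = np.linalg.norm(spins.reshape(n, 3).mean(axis=0))
    # each nearest-neighbour bond is counted once per axis roll
    e = -J * sum((spins * np.roll(spins, 1, axis=ax)).sum()
                 for ax in range(3)) / n
    return e, m

e, m = metropolis_heisenberg(L=6, T=0.5, sweeps=60)
```

The paper's model adds the anisotropy and Zeeman terms to `dE` and runs on the L×L×d monolayer geometry with vacancy sites excluded from the sums.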
Markov chain Monte Carlo methods for assigning larvae to natal sites using natural geochemical tags.
White, J Wilson; Standish, Julie D; Thorrold, Simon R; Warner, Robert R
2008-12-01
Geochemical signatures deposited in otoliths are a potentially powerful means of identifying the origin and dispersal history of fish. However, current analytical methods for assigning natal origins of fish in mixed-stock analyses require knowledge of the number of potential sources and their characteristic geochemical signatures. Such baseline data are difficult or impossible to obtain for many species. A new approach to this problem can be found in iterative Markov Chain Monte Carlo (MCMC) algorithms that simultaneously estimate population parameters and assign individuals to groups. MCMC procedures only require an estimate of the number of source populations, and post hoc model selection based on the deviance information criterion can be used to infer the correct number of chemically distinct sources. We describe the basics of the MCMC approach and outline the specific decisions required when implementing the technique with otolith geochemical data. We also illustrate the use of the MCMC approach on simulated data and on empirical geochemical signatures in otoliths from young-of-the-year and adult weakfish, Cynoscion regalis, from the U.S. Atlantic coast. While we describe how investigators can use MCMC to complement existing analytical tools for use with otolith geochemical data, the MCMC approach is suitable for any mixed-stock problem with continuous, multivariate data. PMID:19263887
Calculation of γ-quanta passage through substance with the Monte-Carlo method for x-ray image simulation
NASA Astrophysics Data System (ADS)
Boriskov, G. V.; Bykov, A. I.; Volodko, A. R.; Egorov, N. I.; Pavlov, V. N.; Ronzhin, A. B.
2008-07-01
Software developed in RFNC-VNIIEF for x-ray image simulation using the Monte-Carlo method is described. The software is part of an x-ray method used for the investigation of equations of state (in this case of the hydrogen isotopes protium and deuterium) in the megabar pressure range. The interaction of γ-quanta with a substance is considered. The effect of scattered radiation on x-ray image formation is estimated.
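The removal part of such a γ-quanta transport calculation can be sketched by sampling exponential free paths layer by layer; with scattering treated as pure absorption (no build-up), the estimate must reproduce the Beer-Lambert law. The attenuation coefficients and thicknesses below are hypothetical:

```python
import numpy as np

def mc_transmission(mu_t_pairs, n=200000, seed=4):
    """Monte Carlo estimate of narrow-beam gamma transmission through a
    stack of layers, each given as (attenuation coefficient in 1/cm,
    thickness in cm). Scattering is treated as pure removal, so the
    result should match exp(-sum(mu_i * t_i))."""
    rng = np.random.default_rng(seed)
    alive = np.ones(n, dtype=bool)
    for mu, t in mu_t_pairs:
        # fresh exponential free path in each layer (memoryless property);
        # the photon survives the layer if its path exceeds the thickness
        path = rng.exponential(1.0 / mu, n)
        alive &= path > t
    return alive.mean()

layers = [(0.2, 1.5), (0.5, 0.8)]                 # hypothetical materials
est = mc_transmission(layers)
exact = np.exp(-sum(mu * t for mu, t in layers))  # Beer-Lambert reference
```

A full imaging code replaces the removal assumption with explicit Compton and photoelectric sampling, which is exactly where the scattered-radiation contribution to the image comes from.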
[Study of Determination of Oil Mixture Components Content Based on Quasi-Monte Carlo Method].
Wang, Yu-tian; Xu, Jing; Liu, Xiao-fei; Chen, Meng-han; Wang, Shi-tao
2015-05-01
Gasoline, kerosene and diesel are processed from crude oil with different distillation ranges. The boiling range of gasoline is 35~205 °C, that of kerosene is 140~250 °C, and that of diesel is 180~370 °C. At the same time, the carbon chain lengths of the different mineral oils differ: gasoline lies within the scope of C7 to C11, kerosene within C12 to C15, and diesel within C15 to C18. The recognition and quantitative measurement of the three kinds of mineral oil is based on the different fluorescence spectra formed by their different carbon number distribution characteristics. Mineral oil pollution occurs frequently, so monitoring the mineral oil content in the ocean is very important. A new method for determining the component contents of a mineral oil mixture with overlapping spectra is proposed, based on the integration of characteristic peak power in the three-dimensional fluorescence spectrum using the Quasi-Monte Carlo method, combined with an optimization algorithm that selects the optimal number of characteristic peaks and the integration regions, and solving the resulting nonlinear equations with the BFGS method (a rank-two update quasi-Newton method named after Broyden, Fletcher, Goldfarb and Shanno). The accumulated peak power over the sampled points in the selected area is sensitive to small changes of the fluorescence spectral line, so the measurement of small changes in component content is sensitive. At the same time, compared with single-point measurement, measurement sensitivity is improved because averaging over many points reduces the influence of random error. Three-dimensional fluorescence spectra and fluorescence contour spectra of the single mineral oils and of the mixture were measured, taking kerosene, diesel and gasoline as research objects, with each single mineral oil regarded as a whole rather than resolved into its individual components.
Six characteristic peaks are selected for characteristic peak power integration to determine the component contents of the gasoline-kerosene-diesel mixture by the optimization algorithm. Compared with single-point measurement by the peak method and the mean method, measurement sensitivity is improved by about a factor of 50. The implementation of high-precision measurement of the component contents of gasoline, kerosene and diesel mixtures provides a practical algorithm for the direct determination of component contents in spectrally overlapping mixtures without chemical separation. PMID:26415451
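The quasi-Monte Carlo integration at the heart of the peak-power calculation can be illustrated with a hand-rolled Halton point set on a smooth test integrand. This is an illustrative sketch, not the paper's fluorescence-spectrum integrals:

```python
import numpy as np

def halton(n, base):
    """First n points of the van der Corput sequence in the given base
    (digit-reversal radical inverse)."""
    seq = np.zeros(n)
    for i in range(n):
        f, r, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        seq[i] = r
    return seq

def qmc_integrate_2d(f, n=4096):
    """Quasi-Monte Carlo estimate of the integral of f over the unit
    square using a 2-D Halton set (bases 2 and 3): low-discrepancy points
    cover the domain more evenly than pseudo-random ones, which is what
    makes integrals over peak regions converge quickly."""
    x, y = halton(n, 2), halton(n, 3)
    return f(x, y).mean()

est = qmc_integrate_2d(lambda x, y: np.exp(x + y))  # test integrand
exact = (np.e - 1.0) ** 2                           # analytic value
```

In the paper's setting the integrand is the measured 3-D fluorescence intensity over each selected peak region, rescaled to the unit square.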
NASA Astrophysics Data System (ADS)
Massoudieh, A.; Sharifi, S.; Solomon, K.
2012-12-01
The estimation of groundwater age has received increasing attention due to its applications in assessing the sustainability of water withdrawal from aquifers and evaluating the vulnerability of groundwater resources to near-surface or recharge water contamination. In most past work, whether single or multiple tracers were used for groundwater dating, the uncertainties in the observed concentrations of the tracers and in their decay rate constants have been neglected. Furthermore, tracers have been assumed to move at the same speed as the groundwater. In reality some of the radio-tracers or anthropogenic chemicals used for groundwater dating may undergo adsorption and desorption and move more slowly than the groundwater. There are also uncertainties in the decay rates of synthetic chemicals such as CFCs commonly used for groundwater dating. In this presentation, the development of a Bayesian modeling approach using the Markov Chain Monte Carlo method for the estimation of the age distribution is described. The model considers the uncertainties in the measured tracer concentrations as well as in the parameters affecting the concentration of tracers in the groundwater, and provides the frequency distributions of the parameters defining the groundwater age distribution. The model also incorporates the effect of the contribution of dissolution of aquifer minerals in diluting the 14C signature, and the uncertainties associated with this process, on the inferred age distribution parameters. The results of applying the method to data collected at the La Selva Biological Station, Costa Rica, will also be presented. In this demonstration application, eight different forms of presumed groundwater age distributions have been tested, including four single-peaked forms and four double-peaked forms assuming the groundwater consists of distinct young and old fractions.
The performance of these presumed groundwater age forms has been evaluated in terms of their capability to predict tracer concentrations close to the observed values and the level of certainty they provide in the estimation of the age-distribution parameters.
NASA Astrophysics Data System (ADS)
Zhang, Yue; Sun, Xian; Thiele, Antje; Hinz, Stefan
2015-10-01
Synthetic aperture radar (SAR) systems, such as TanDEM-X, TerraSAR-X and Cosmo-SkyMed, acquire imagery with high spatial resolution (HR), making it possible to observe objects in urban areas in high detail. In this paper, we propose a new top-down framework for three-dimensional (3D) building reconstruction from HR interferometric SAR (InSAR) data. Unlike most previously proposed methods, we adopt a generative model and formulate the reconstruction as maximum a posteriori (MAP) estimation carried out through Monte Carlo methods. This strategy is motivated by the fact that the noisiness of SAR images calls for a thorough prior model to better cope with the inherent amplitude and phase fluctuations. In the reconstruction process, according to the radar configuration and the building geometry, a 3D building hypothesis is mapped to the SAR image plane and decomposed into feature regions such as layover, corner line, and shadow. Then, the statistical properties of the intensity, interferometric phase and coherence of each region are explored respectively and included as region terms. Roofs are not directly considered, as in most cases they are mixed with the walls into the layover area. When estimating the similarity between the building hypothesis and the real data, the prior and the region terms, together with an edge term related to the contours of the layover and corner line, are taken into consideration. In the optimization step, in order to achieve convergent reconstruction outputs and avoid local extrema, special transition kernels are designed. The proposed framework is evaluated on the TanDEM-X dataset and performs well for building reconstruction.
Range Verification Methods in Particle Therapy: Underlying Physics and Monte Carlo Modeling
Kraan, Aafke Christine
2015-01-01
Hadron therapy allows for highly conformal dose distributions and better sparing of organs-at-risk, thanks to the characteristic dose deposition as a function of depth. However, the quality of hadron therapy treatments is closely connected with the ability to predict and achieve a given beam range in the patient. Currently, uncertainties in particle range lead to the employment of safety margins, at the expense of treatment quality. Much research in particle therapy is therefore aimed at developing methods to verify the particle range in patients. Non-invasive in vivo monitoring of the particle range can be performed by detecting secondary radiation, emitted from the patient as a result of nuclear interactions of charged hadrons with tissue, including β+ emitters, prompt photons, and charged fragments. The correctness of the dose delivery can be verified by comparing measured and pre-calculated distributions of the secondary particles. The reliability of Monte Carlo (MC) predictions is a key issue. Correctly modeling the production of secondaries is a non-trivial task, because it involves nuclear physics interactions at energies where no rigorous theories exist to describe them. The goal of this review is to provide a comprehensive overview of various aspects of modeling the physics processes for range verification with secondary particles produced in proton, carbon, and heavier ion irradiation. We discuss electromagnetic and nuclear interactions of charged hadrons in matter, followed by a summary of some widely used MC codes in hadron therapy. Then, we describe selected examples of how these codes have been validated and used in three range verification techniques: PET, prompt gamma, and charged particle detection. We include research studies and clinically applied methods. For each of the techniques, we point out advantages and disadvantages, as well as clinical challenges still to be addressed, focusing on MC simulation aspects. PMID:26217586
NASA Astrophysics Data System (ADS)
Ghita, Gabriel M.
Our study aims to design a useful neutron signature characterization device based on 3He detectors, a standard neutron detection methodology used in homeland security applications. The research work involved simulation of the generation, transport, and detection of the leakage radiation from Special Nuclear Materials (SNM). To accomplish the research goals, we use a new methodology to fully characterize a standard "1-Ci" Plutonium-Beryllium (Pu-Be) neutron source based on 3-D computational radiation transport methods, employing both deterministic SN and Monte Carlo methodologies. The computational model findings were subsequently validated through experimental measurements. The achieved results allowed us to design, build, and laboratory-test a Nickel composite alloy shield that enables the neutron leakage spectrum from a standard Pu-Be source to be transformed, through neutron scattering interactions in the shield, into a very close approximation of the neutron spectrum leaking from a large, subcritical mass of Weapons Grade Plutonium (WGPu) metal. This source will make it possible to test with a nearly exact reproduction of the neutron spectrum from a 6.67 kg WGPu mass equivalent, but without the expense or risk of testing detector components with real materials. Moreover, over thirty moderator materials were studied in order to characterize their neutron energy filtering potential. Specific focus was placed on establishing the limits of He-3 spectroscopy using ideal filter materials. To demonstrate our methodology, we present the optimally detected spectral differences between SNM materials (plutonium and uranium), metal and oxide, using ideal filter materials. Finally, using knowledge gained from the previous studies, the design of a He-3 spectroscopy neutron detector system, simulated entirely via computational methods, is proposed to resolve the spectra from SNM neutron sources of high interest.
This was accomplished by replacing ideal filters with real materials, and comparing reaction rates with similar data from the ideal material suite.
Monte Carlo particle-in-cell methods for the simulation of the Vlasov-Maxwell gyrokinetic equations
NASA Astrophysics Data System (ADS)
Bottino, A.; Sonnendrücker, E.
2015-10-01
The particle-in-cell (PIC) algorithm is the most popular method for the discretisation of the general 6D Vlasov-Maxwell problem, and it is also widely used for the simulation of the 5D gyrokinetic equations. The method consists of coupling a particle-based algorithm for the Vlasov equation with a grid-based method for the computation of the self-consistent electromagnetic fields. In this review we derive a Monte Carlo PIC finite-element model starting from a gyrokinetic discrete Lagrangian. The variations of the Lagrangian are used to obtain the time-continuous equations of motion for the particles and the finite-element approximation of the field equations. The Noether theorem for the semi-discretised system implies a certain number of conservation properties for the final set of equations. Moreover, the PIC method can be interpreted as a probabilistic Monte Carlo-like method, consisting of calculating integrals of the continuous distribution function using a finite set of discrete markers. The nonlinear interactions, along with numerical errors, introduce random effects after some time. Therefore, the same tools for error analysis and error reduction used in Monte Carlo numerical methods can be applied to PIC simulations.
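The Monte Carlo reading of PIC can be made concrete with its simplest ingredient, charge deposition: grid moments of the distribution function are estimated from a finite marker set. A 1-D cloud-in-cell sketch with assumed toy parameters:

```python
import numpy as np

def deposit_cic(positions, weights, n_cells, length):
    """Cloud-in-cell (linear) charge deposition on a periodic 1-D grid:
    each marker spreads its weight linearly over the two nearest grid
    points. In the Monte Carlo view of PIC, this estimates the integral
    of the distribution function against the linear basis functions,
    using the markers as sample points."""
    dx = length / n_cells
    rho = np.zeros(n_cells)
    s = positions / dx
    left = np.floor(s).astype(int) % n_cells
    frac = s - np.floor(s)
    np.add.at(rho, left, weights * (1.0 - frac))
    np.add.at(rho, (left + 1) % n_cells, weights * frac)
    return rho / dx            # density = deposited weight / cell size

# markers drawn from a uniform distribution should give a flat density
rng = np.random.default_rng(5)
n_markers, L = 100000, 1.0
pos = rng.random(n_markers) * L
w = np.full(n_markers, 1.0 / n_markers)   # equal weights, total charge 1
rho = deposit_cic(pos, w, n_cells=16, length=L)
```

The statistical fluctuations of `rho` around the flat profile are exactly the Monte Carlo sampling noise that the error-reduction techniques mentioned above (variance reduction, control variates) act on.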
Forward treatment planning for modulated electron radiotherapy (MERT) employing Monte Carlo methods
Henzen, D.; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Lössl, K.; Aebersold, D. M.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.
2014-03-15
Purpose: This paper describes the development of a forward planning process for modulated electron radiotherapy (MERT). The approach is based on a previously developed electron beam model used to calculate dose distributions of electron beams shaped by a photon multi leaf collimator (pMLC). Methods: As the electron beam model has already been implemented into the Swiss Monte Carlo Plan environment, the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) can be included in the planning process for MERT. In a first step, CT data are imported into Eclipse and a pMLC shaped electron beam is set up. This initial electron beam is then divided into segments, with the electron energy in each segment chosen according to the distal depth of the planning target volume (PTV) in beam direction. In order to improve the homogeneity of the dose distribution in the PTV, a feathering process (Gaussian edge feathering) is launched, which results in a number of feathered segments. For each of these segments a dose calculation is performed employing the in-house developed electron beam model along with the macro Monte Carlo dose calculation algorithm. Finally, an automated weight optimization of all segments is carried out and the total dose distribution is read back into Eclipse for display and evaluation. One academic and two clinical situations are investigated for possible benefits of MERT treatment compared to standard treatments performed in our clinics and treatment with a bolus electron conformal (BolusECT) method. Results: The MERT treatment plan of the academic case was superior to the standard single segment electron treatment plan in terms of organs at risk (OAR) sparing. Further, a comparison between an unfeathered and a feathered MERT plan showed better PTV coverage and homogeneity for the feathered plan, with V{sub 95%} increased from 90% to 96% and V{sub 107%} decreased from 8% to nearly 0%. 
For a clinical breast boost irradiation, the MERT plan led to a similar homogeneity in the PTV compared to the standard treatment plan, while the mean body dose was lower for the MERT plan. Regarding the second clinical case, a whole breast treatment, MERT resulted in a reduction of the lung volume receiving more than 45% of the prescribed dose when compared to the standard plan. On the other hand, the MERT plan led to a larger low-dose lung volume and a degraded dose homogeneity in the PTV. For the clinical cases evaluated in this work, treatment plans using the BolusECT technique resulted in a more homogeneous PTV and CTV coverage but higher doses to the OARs than the MERT plans. Conclusions: MERT treatments were successfully planned for phantom and clinical cases, applying a newly developed intuitive and efficient forward planning strategy that employs a MC based electron beam model for pMLC shaped electron beams. It is shown that MERT can lead to a dose reduction in OARs compared to other methods. The process of feathering MERT segments results in an improvement of the dose homogeneity in the PTV.
Beyond weak constraint 4DVAR: a bridge to Monte Carlo methods?
NASA Astrophysics Data System (ADS)
Cornford, Dan; Shen, Yuan; Vrettas, Michael; Opper, Manfred
2010-05-01
Data assimilation is often motivated from a Bayesian perspective; however, most implementations introduce approximations, based on a very small number of samples (ensemble Kalman filter / smoother), to perform a statistical linearisation of the system model, or seek an approximate mode of the posterior distribution (4DVAR). In statistics the alternative approaches are based on Monte Carlo sampling, optimally using particle filters / smoothers or Langevin path sampling, neither of which is likely to scale well enough to be applied to realistic models in the near future. In this work we explain a new approach to data assimilation based on a variational treatment of the posterior distribution over paths. The method can be understood as similar to weak constraint 4DVAR, except that we seek the best approximating posterior distribution over paths rather than simply the most likely path. The method, which we call Bayesian 4DVAR, is based on the minimisation of the Kullback-Leibler divergence between distributions, and is suited to applications where simple additive model error is present as a random forcing in the system equations. The approximating distribution used is a Gaussian process, described by a time-varying linear dynamical system, whose parameters form the control variables for the problem. We will outline how this approach can be seen as an extension of weak constraint 4DVAR in which the posterior covariance is additionally approximated. We illustrate the method in operation on a range of toy examples, including Lorenz 40D and Kuramoto-Sivashinsky PDE examples. We compare the approach to ensemble and traditional 4DVAR approaches to data assimilation and show its limitations. A principal limitation is that the method systematically underestimates the marginal (with respect to time) state covariance, although we show empirically that this effect is minor given sufficient observations.
We discuss possible extensions based on a mean field approximation that will allow the application of the method to large systems. We also show how a local parametrisation of the time varying state between observations using an orthogonal polynomial basis allows further reduction in the number of parameters that need to be estimated.
Capote, Roberto; Smith, Donald L.
2008-12-15
The Unified Monte Carlo method (UMC) has been suggested to avoid certain limitations and approximations inherent to the well-known Generalized Least Squares (GLS) method of nuclear data evaluation. This contribution reports on an investigation of the performance of the UMC method in comparison with the GLS method. This is accomplished by applying both methods to simple examples with few input values that were selected to explore various features of the evaluation process that impact upon the quality of an evaluation. Among the issues explored are: i) convergence of UMC results with the number of Monte Carlo histories and the ranges of sampled values; ii) a comparison of Monte Carlo sampling using the Metropolis scheme and a brute force approach; iii) the effects of large data discrepancies; iv) the effects of large data uncertainties; v) the effects of strong or weak model or experimental data correlations; and vi) the impact of ratio data and integral data. Comparisons are also made of the evaluated results for these examples when the input values are first transformed to comparable logarithmic values prior to performing the evaluation. Some general conclusions that are applicable to more realistic evaluation exercises are offered.
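The brute-force sampling variant mentioned in point (ii) can be sketched on a one-parameter linear Gaussian toy problem, where the likelihood-weighted sample mean must reproduce the GLS answer; the values are illustrative, and the interesting UMC cases are precisely the non-linear ones where the two methods differ:

```python
import numpy as np

def gls_mean(m0, v0, y, v):
    """Generalized-least-squares (Gaussian) update of a scalar prior
    (m0, v0) by a measurement y with variance v."""
    w0, w = 1.0 / v0, 1.0 / v
    return (w0 * m0 + w * y) / (w0 + w)

def umc_brute_force(m0, v0, y, v, n=400000, seed=6):
    """Brute-force Unified-Monte-Carlo-style estimate: sample the prior,
    weight each sample by its likelihood, and form the weighted mean.
    For a linear Gaussian problem this must agree with GLS."""
    rng = np.random.default_rng(seed)
    x = m0 + np.sqrt(v0) * rng.standard_normal(n)   # prior samples
    logw = -0.5 * (y - x) ** 2 / v                  # log-likelihood of each
    w = np.exp(logw - logw.max())                   # stabilized weights
    return np.sum(w * x) / np.sum(w)

m0, v0, y, v = 1.0, 0.25, 1.8, 0.09   # hypothetical prior and measurement
exact = gls_mean(m0, v0, y, v)
est = umc_brute_force(m0, v0, y, v)
```

A Metropolis variant replaces the independent prior samples with a chain whose stationary distribution is the posterior, which becomes advantageous when the weights of brute-force samples degenerate.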
Geometrically-compatible 3-D Monte Carlo and discrete-ordinates methods
Morel, J.E.; Wareing, T.A.; McGhee, J.M.; Evans, T.M.
1998-12-31
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The purpose of this project was two-fold. The first purpose was to develop a deterministic discrete-ordinates neutral-particle transport scheme for unstructured tetrahedral spatial meshes, and implement it in a computer code. The second purpose was to modify the MCNP Monte Carlo radiation transport code to use adjoint solutions from the tetrahedral-mesh discrete-ordinates code to reduce the statistical variance of Monte Carlo solutions via a weight-window approach. The first task has resulted in a deterministic transport code that is much more efficient for modeling complex 3-D geometries than any previously existing deterministic code. The second task has resulted in a powerful new capability for dramatically reducing the cost of difficult 3-D Monte Carlo calculations.
NASA Astrophysics Data System (ADS)
Yesilyurt, Gokhan
Two of the primary challenges associated with the neutronic analysis of the Very High Temperature Reactor (VHTR) are accounting for resonance self-shielding in the particle fuel (contributing to the double heterogeneity) and accounting for temperature feedback due to Doppler broadening. The double heterogeneity challenge is addressed by defining a "double heterogeneity factor" (DHF) that allows conventional light water reactor (LWR) lattice physics codes to analyze VHTR configurations. The challenge of treating Doppler broadening is addressed by a new "on-the-fly" methodology that is applied during the random walk process with negligible impact on computational efficiency. Although this methodology was motivated by the need to treat temperature feedback in a VHTR, it is applicable to any reactor design. The on-the-fly Doppler methodology is based on a combination of Taylor and asymptotic series expansions. The type of series representation was determined by investigating the temperature dependence of U238 resonance cross sections in three regions: near the resonance peaks, mid-resonance, and the resonance wings. The coefficients for these series expansions were determined by regressions over the energy and temperature range of interest. The comparison of the broadened cross sections using this methodology with the NJOY cross sections was excellent. A Monte Carlo code was implemented to apply the combined regression model and used to estimate the additional computing cost which was found to be less than 1%. The DHF accounts for the effect of the particle heterogeneity on resonance absorption in particle fuel. The first level heterogeneity posed by the VHTR fuel particles is a unique characteristic that cannot be accounted for by conventional LWR lattice physics codes. On the other hand, Monte Carlo codes can take into account the detailed geometry of the VHTR including resolution of individual fuel particles without performing any type of resonance approximation. 
The DHF, essentially a self-shielding factor, was found to be weakly dependent on space and fuel depletion; it depends strongly only on the packing fraction in a fuel compact. Therefore, it is proposed that DHFs be tabulated as a function of packing fraction to analyze the heterogeneous fuel in VHTR configurations with LWR lattice physics codes.
Nanothermodynamics of large iron clusters by means of a flat histogram Monte Carlo method
NASA Astrophysics Data System (ADS)
Basire, M.; Soudan, J.-M.; Angeli, C.
2014-09-01
The thermodynamics of iron clusters of various sizes, from 76 to 2452 atoms, typical of the catalyst particles used for carbon nanotube growth, has been explored by a flat histogram Monte Carlo (MC) algorithm (called the ?-mapping), developed by Soudan et al. [J. Chem. Phys. 135, 144109 (2011), Paper I]. This method provides the classical density of states, gp(Ep) in the configurational space, in terms of the potential energy of the system, with good and well controlled convergence properties, particularly in the melting phase transition zone which is of interest in this work. To describe the system, an iron potential has been implemented, called "corrected EAM" (cEAM), which approximates the MEAM potential of Lee et al. [Phys. Rev. B 64, 184102 (2001)] with an accuracy better than 3 meV/at, and a five times larger computational speed. The main simplification concerns the angular dependence of the potential, with a small impact on accuracy, while the screening coefficients Sij are exactly computed with a fast algorithm. With this potential, ergodic explorations of the clusters can be performed efficiently in a reasonable computing time, at least in the upper half of the solid zone and above. Problems of ergodicity exist in the lower half of the solid zone, but routes to overcome them are discussed. The solid-liquid (melting) phase transition temperature Tm is plotted in terms of the cluster atom number Nat. The standard N_{at}^{-1/3} linear dependence (Pawlow law) is observed for Nat > 300, allowing an extrapolation up to the bulk metal at 1940 ± 50 K. For Nat < 150, a strong divergence is observed compared to the Pawlow law. The melting transition, which begins at the surface, is characterized by a Lindemann-Berry index and an atomic density analysis. Several new features are obtained for the thermodynamics of cEAM clusters, compared to the Rydberg pair potential clusters studied in Paper I.
Use of Monte Carlo methods in environmental risk assessments at the INEL: Applications and issues
Harris, G.; Van Horn, R.
1996-06-01
The EPA is increasingly considering the use of probabilistic risk assessment techniques as an alternative or refinement of the current point estimate of risk. This report provides an overview of the probabilistic technique called Monte Carlo Analysis. Advantages and disadvantages of implementing a Monte Carlo analysis over a point estimate analysis for environmental risk assessment are discussed. The general methodology is provided along with an example of its implementation. A phased approach to risk analysis that allows iterative refinement of the risk estimates is recommended for use at the INEL.
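The contrast between a point estimate and a Monte Carlo analysis can be sketched with a hypothetical exposure model (dose = concentration x intake / body weight) and assumed input distributions; none of the numbers below come from the INEL report:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# assumed input distributions (illustrative only)
conc = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # contaminant conc., mg/L
intake = rng.normal(2.0, 0.3, size=n)               # water intake, L/day
weight = rng.normal(70.0, 10.0, size=n)             # body weight, kg

dose = conc * intake / weight                       # mg/(kg*day)

# a point estimate propagates single central values and yields one number
point_estimate = 1.0 * 2.0 / 70.0

# Monte Carlo yields a full distribution, so upper percentiles can be reported
dose_p95 = np.percentile(dose, 95)
```

The 95th-percentile dose exceeds the central point estimate, which is exactly the kind of information a point estimate cannot convey and the reason a phased, iteratively refined analysis is attractive.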
Spray cooling simulation implementing time scale analysis and the Monte Carlo method
NASA Astrophysics Data System (ADS)
Kreitzer, Paul Joseph
Spray cooling research is advancing the field of heat transfer and heat rejection in high power electronics. Smaller and more capable electronics packages are producing higher amounts of waste heat along with smaller external surface areas, and the use of active cooling is becoming a necessity. Spray cooling has shown extremely high levels of heat rejection, of up to 1000 W/cm^2 using water. Simulations of spray cooling are becoming more realistic, but this comes at a price. A previous researcher used CFD to successfully model a single 3D droplet impact into a liquid film using the level set method. However, the complicated multiphysics occurring during spray impingement and surface interactions increases computation time to more than 30 days. Parallel processing on a 32-processor system has reduced this time tremendously, but still requires more than a day. The present work uses experimental and computational results in addition to numerical correlations representing the physics occurring on a heated impingement surface. The current model represents the spray behavior of a Spraying Systems FullJet 1/8-g spray nozzle. Typical spray characteristics are as follows: flow rate of 1.05x10^-5 m^3/s, normal droplet velocity of 12 m/s, droplet Sauter mean diameter of 48 μm, and heat flux values ranging from approximately 50--100 W/cm^2. This produces non-dimensional numbers of: We 300--1350, Re 750--3500, Oh 0.01--0.025. Numerical and experimental correlations have been identified representing crater formation, splashing, film thickness, droplet size, and spatial flux distributions. A combination of these methods has resulted in a Monte Carlo spray impingement simulation model capable of simulating hundreds of thousands of droplet impingements, or approximately one millisecond. A random sequence of droplet impingement locations and diameters is generated, with the proper radial spatial distribution and diameter distribution.
Hence the impingement, lifetime and interactions of the droplet impact craters are tracked versus time within the limitations of the current model. A comparison of results from this code to experimental results shows similar trends in surface behavior and heat transfer values. Three methods have been used to directly compare the simulation results with published experimental data, including: contact line length estimates, empirical heat transfer equation calculations, and non-dimensional Nusselt numbers. A Nusselt number of 55.5 was calculated for experimental values, while a Nu of 16.0 was calculated from the simulation.
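Generating a random droplet sequence with prescribed spatial and diameter distributions can be sketched as follows. The 48 μm Sauter mean diameter is taken from the abstract, but the Gaussian radial profile and lognormal diameter distribution are assumptions for illustration, not the dissertation's actual correlations:

```python
import numpy as np

rng = np.random.default_rng(1)
n_drops = 10_000

# assumed Gaussian radial spray profile (2 mm spread, illustrative)
r = np.abs(rng.normal(0.0, 2.0e-3, n_drops))          # radial position, m
theta = rng.uniform(0.0, 2.0 * np.pi, n_drops)        # uniform azimuth
x, y = r * np.cos(theta), r * np.sin(theta)           # impact coordinates

# lognormal diameters tuned so the Sauter mean d32 = <d^3>/<d^2> matches the
# 48 um quoted above; for a lognormal, d32 = exp(mu + 2.5*sigma^2)
d32_target, sigma = 48e-6, 0.3
mu = np.log(d32_target) - 2.5 * sigma**2
d = rng.lognormal(mu, sigma, n_drops)                 # droplet diameters, m

d32 = (d**3).mean() / (d**2).mean()                   # sampled Sauter mean
```

Each sampled (x, y, d) triple would then seed a crater whose lifetime and interactions are tracked against the correlations described above.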
NASA Astrophysics Data System (ADS)
Rost, D.; Blümer, N.
2015-09-01
We present an algorithm for the computation of unbiased Green functions and self-energies for quantum lattice models, free from systematic errors and valid in the thermodynamic limit. The method combines direct lattice simulations using the Blankenbecler-Scalapino-Sugar quantum Monte Carlo (BSS-QMC) approach with controlled multigrid extrapolation techniques. We show that the half-filled Hubbard model is insulating at low temperatures even in the weak-coupling regime; the previously claimed Mott transition at intermediate coupling does not exist.
Williams, M. L.; Gehin, J. C.; Clarno, K. T.
2006-07-01
The TSUNAMI computational sequences currently in the SCALE 5 code system provide an automated approach to performing sensitivity and uncertainty analysis for eigenvalue responses, using either one-dimensional discrete ordinates or three-dimensional Monte Carlo methods. This capability has recently been expanded to address eigenvalue-difference responses such as reactivity changes. This paper describes the methodology and presents results obtained for an example advanced CANDU reactor design. (authors)
NASA Astrophysics Data System (ADS)
Timoshenko, Janis; Anspoks, Andris; Kalinko, Aleksandr; Kuzmin, Alexei
2014-04-01
The static disorder and lattice dynamics of crystalline materials can be efficiently studied using reverse Monte Carlo simulations of extended x-ray absorption fine structure (EXAFS) spectra. In this work we demonstrate the potential of this method on the example of copper tungstate CuWO4. The simultaneous analysis of the Cu K and W L3 edge EXAFS spectra allowed us to follow local structure distortion as a function of temperature.
A study of the XY model by the Monte Carlo method
NASA Technical Reports Server (NTRS)
Suranyi, Peter; Harten, Paul
1987-01-01
The massively parallel processor is used to perform Monte Carlo simulations for the two dimensional XY model on lattices of sizes up to 128 x 128. A parallel random number generator was constructed, finite size effects were studied, and run times were compared with those on a CRAY X-MP supercomputer.
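A serial sketch of Metropolis sampling for the 2D XY model (energy E = -J sum cos(theta_i - theta_j) over nearest neighbours, J = 1) is shown below; the lattice size, step width, and sweep count are illustrative and far smaller than the 128 x 128 runs described above:

```python
import numpy as np

def xy_metropolis(L=12, beta=1.0, sweeps=200, seed=0):
    """Metropolis sampling of the 2D XY model with periodic boundaries.

    Returns the magnetisation per spin, the finite-size order parameter."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, (L, L))     # random initial spins
    for _ in range(sweeps):
        for _ in range(L * L):                        # one sweep = L*L attempts
            i, j = rng.integers(L), rng.integers(L)
            new = theta[i, j] + rng.uniform(-1.0, 1.0)
            nbrs = (theta[(i + 1) % L, j], theta[(i - 1) % L, j],
                    theta[i, (j + 1) % L], theta[i, (j - 1) % L])
            # energy change of the proposed single-spin rotation
            dE = sum(np.cos(theta[i, j] - t) - np.cos(new - t) for t in nbrs)
            if np.log(rng.random()) < -beta * dE:     # Metropolis acceptance
                theta[i, j] = new
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
```

On a massively parallel machine the same update rule is applied simultaneously to non-interacting sublattices (a checkerboard decomposition), which is what makes lattices up to 128 x 128 practical.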
A Straightforward Approach to Markov Chain Monte Carlo Methods for Item Response Models.
ERIC Educational Resources Information Center
Patz, Richard J.; Junker, Brian W.
1999-01-01
Demonstrates Markov chain Monte Carlo (MCMC) techniques that are well-suited to complex models with Item Response Theory (IRT) assumptions. Develops an MCMC methodology that can be routinely implemented to fit normal IRT models, and compares the approach to approaches based on Gibbs sampling. Contains 64 references. (SLD)
An Evaluation of a Markov Chain Monte Carlo Method for the Two-Parameter Logistic Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho; Cohen, Allan S.
The accuracy of the Markov Chain Monte Carlo (MCMC) procedure Gibbs sampling was considered for estimation of item parameters of the two-parameter logistic model. Data for the Law School Admission Test (LSAT) Section 6 were analyzed to illustrate the MCMC procedure. In addition, simulated data sets were analyzed using the MCMC, marginal Bayesian
Evans, J S; Chan, S I; Goddard, W A
1995-10-01
Many interesting proteins possess defined sequence stretches containing negatively charged amino acids. At present, experimental methods (X-ray crystallography, NMR) have failed to provide structural data for many of these sequence domains. We have applied the dihedral probability grid-Monte Carlo (DPG-MC) conformational search algorithm to a series of N- and C-capped polyelectrolyte peptides, (Glu)20, (Asp)20, (PSer)20, and (PSer-Asp)10, that represent polyanionic regions in a number of important proteins, such as parathymosin, calsequestrin, the sodium channel protein, and the acidic biomineralization proteins. The atomic charges were estimated from charge equilibration, and the valence and van der Waals parameters are from DREIDING. Solvation of the carboxylate and phosphate groups was treated using sodium counterions for each charged side chain (one Na+ for COO-; two Na+ for CO(PO3)2-) plus a distance-dependent (shielded) dielectric constant, epsilon = epsilon_0 R, to simulate solvent water. The structures of these polyelectrolyte polypeptides were obtained by the DPG-MC conformational search with epsilon_0 = 10, followed by calculation of solvation energies for the lowest energy conformers using the protein dipole-Langevin dipole method of Warshel. These calculations predict a correlation between amino acid sequence and global folded conformational minima: 1. Poly-L-Glu20, our structural benchmark, exhibited a preference for right-handed alpha-helix (47% helicity), which approximates experimental observations of 55-60% helicity in solution. 2.
For Asp- and PSer-containing sequences, all conformers exhibited a low preference for right-handed alpha-helix formation (< or = 10%), but a significant percentage (approximately 20% or greater) of beta-strand and beta-turn dihedrals were found in all three sequence cases: (1) Aspn forms supercoil conformers, with a 2:1:1 ratio of beta-turn:beta-strand:alpha-helix dihedral angles; (2) PSer20 features a nearly 1:1 ratio of beta-turn:beta-sheet dihedral preferences, with very little preference for alpha-helical structure, and possesses short regions of strand and turn combinations that give rise to a collapsed bend or hairpin structure; (3) (PSer-Asp)10 features a 3:2:1 ratio of beta-sheet:beta-turn:alpha-helix and gives rise to a superturn or C-shaped structure. PMID:8535238
Kumar, Sudhir; Srinivasan, P; Sharma, S D; Saxena, Sanjay Kumar; Bakshi, A K; Dash, Ashutosh; Babu, D A R; Sharma, D N
2015-09-01
Isotope Production and Application Division of Bhabha Atomic Research Centre developed (32)P patch sources for treatment of superficial tumors. The surface dose rate of a newly developed (32)P patch source of nominal diameter 25 mm was measured experimentally using a standard extrapolation ionization chamber and Gafchromic EBT film. A Monte Carlo model of the (32)P patch source along with the extrapolation chamber was also developed to estimate the surface dose rates from these sources. The surface dose rates to tissue (cGy/min) measured using the extrapolation chamber and radiochromic films are 82.03 ± 4.18 (k=2) and 79.13 ± 2.53 (k=2), respectively. The two values of the surface dose rate measured using the two independent experimental methods are in good agreement with each other, within a variation of 3.5%. The surface dose rate to tissue (cGy/min) estimated using the MCNP Monte Carlo code works out to be 77.78 ± 1.16 (k=2). The maximum deviation between the surface dose rates to tissue obtained by the Monte Carlo and extrapolation chamber methods is 5.2%, whereas the difference between the surface dose rates obtained by radiochromic film measurement and the Monte Carlo simulation is 1.7%. The three values of the surface dose rate of the (32)P patch source obtained by three independent methods are in good agreement with one another within the uncertainties associated with their measurement and calculation. This work has demonstrated that MCNP-based electron transport simulations are accurate enough for determining the dosimetry parameters of the indigenously developed (32)P patch sources for contact brachytherapy applications. PMID:26086681
Pauw, Brian R.; Pedersen, Jan Skov; Tardif, Samuel; Takata, Masaki; Iversen, Bo B.
2013-01-01
Monte Carlo (MC) methods, based on random updates and the trial-and-error principle, are well suited to retrieve form-free particle size distributions from small-angle scattering patterns of non-interacting low-concentration scatterers such as particles in solution or precipitates in metals. Improvements are presented to existing MC methods, such as a non-ambiguous convergence criterion, nonlinear scaling of contributions to match their observability in a scattering measurement, and a method for estimating the minimum visibility threshold and uncertainties on the resulting size distributions. PMID:23596341
Development of CT scanner models for patient organ dose calculations using Monte Carlo methods
NASA Astrophysics Data System (ADS)
Gu, Jianwei
There is a serious and growing concern about the CT dose delivered by diagnostic CT examinations or image-guided radiation therapy imaging procedures. To better understand and to accurately quantify radiation dose due to CT imaging, Monte Carlo based CT scanner models are needed. This dissertation describes the development, validation, and application of detailed CT scanner models including a GE LightSpeed 16 MDCT scanner and two image-guided radiation therapy (IGRT) cone beam CT (CBCT) scanners, kV CBCT and MV CBCT. The modeling process considered the energy spectrum, beam geometry and movement, and bowtie filter (BTF). The methodology of validating the scanner models using reported CTDI values was also developed and implemented. Finally, the organ doses to different patients undergoing CT scans were obtained by integrating the CT scanner models with anatomically realistic patient phantoms. The tube current modulation (TCM) technique was also investigated for dose reduction. It was found that for RPI-AM, the thyroid, kidneys and thymus received the largest doses of 13.05, 11.41 and 11.56 mGy/100 mAs from the chest scan, abdomen-pelvis scan and CAP scan, respectively, using 120 kVp protocols. For RPI-AF, the thymus, small intestine and kidneys received the largest doses of 10.28, 12.08 and 11.35 mGy/100 mAs from the chest scan, abdomen-pelvis scan and CAP scan, respectively, using 120 kVp protocols. The dose to the fetus of the 3-month pregnant patient phantom was 0.13 mGy/100 mAs and 0.57 mGy/100 mAs from the chest and kidney scan, respectively. For the chest scan of the 6-month and 9-month patient phantoms, the fetal doses were 0.21 mGy/100 mAs and 0.26 mGy/100 mAs, respectively. For MDCT with TCM schemas, the fetal dose can be reduced by 14%-25%. To demonstrate the applicability of the method proposed in this dissertation for modeling CT scanners, an additional MDCT scanner was modeled and validated using the measured CTDI values.
These results demonstrated that the CT scanner models in this dissertation were versatile and accurate tools for estimating dose to different patient phantoms undergoing various CT procedures. The organ doses from kV and MV CBCT were also calculated. This dissertation finally summarizes areas where future research can be performed including MV CBCT further validation and application, dose reporting software and image and dose correlation study.
NASA Astrophysics Data System (ADS)
Rinaldi, G.; Ciarniello, M.; Capaccioni, F.; Fink, U.; Filacchione, G.; Tozzi, G. P.; Błęcka, M.
2014-04-01
In this paper we present simulations of the radiance coming from the coma of 67P/Churyumov-Gerasimenko, which are meant to support the scientific investigation of the VIRTIS (Visible and Infrared Thermal Imaging Spectrometer) instrument onboard the Rosetta spacecraft, working in the 0.25-5 μm spectral range. During the observation planning phase such simulations drive the selection of the integration times and spacecraft pointing, while during the post-processing phase the same model shall be used to retrieve the physical properties of the coma. Cometary coma spectra are strongly affected by the dynamical processes involving dust and ice grains present in the coma. The solar light illuminates the grains, which can scatter, absorb and emit radiation. Radiative transfer in the coma can be modeled by means of Monte Carlo methods. Here we show results from two different routines: the SCATRD 06.10 code (Vasilyev et al., 2006) and the 3D Monte Carlo code developed by Ciarniello et al. (2014).
Anderson, Eric C
2005-06-01
This article presents an efficient importance-sampling method for computing the likelihood of the effective size of a population under the coalescent model of Berthier et al. Previous computational approaches, using Markov chain Monte Carlo, required many minutes to several hours to analyze small data sets. The approach presented here is orders of magnitude faster and can provide an approximation to the likelihood curve, even for large data sets, in a matter of seconds. Additionally, confidence intervals on the estimated likelihood curve provide a useful estimate of the Monte Carlo error. Simulations show the importance sampling to be stable across a wide range of scenarios and show that the N(e) estimator itself performs well. Further simulations show that the 95% confidence intervals around the N(e) estimate are accurate. User-friendly software implementing the algorithm for Mac, Windows, and Unix/Linux is available for download. Applications of this computational framework to other problems are discussed. PMID:15834143
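The general pattern, importance sampling with a Monte Carlo standard error attached to the estimate, can be sketched on a toy integral (a Gaussian tail probability, not the coalescent likelihood of Berthier et al.):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(7)
n = 200_000

# toy problem: estimate P(X > 3) for X ~ N(0, 1); direct sampling would waste
# nearly all draws, so sample instead from the shifted proposal q = N(3, 1)
x = rng.normal(3.0, 1.0, n)
log_w = -0.5 * x**2 + 0.5 * (x - 3.0)**2      # log p(x) - log q(x)
w = np.exp(log_w) * (x > 3.0)                  # weighted indicator

estimate = w.mean()
std_err = w.std(ddof=1) / np.sqrt(n)           # Monte Carlo standard error

exact = 0.5 * (1.0 - erf(3.0 / sqrt(2.0)))     # Phi(-3), for comparison
```

Because the proposal concentrates samples where the integrand matters, a few seconds of sampling yields both the estimate and a usable error bar, the same trade that makes the coalescent computation orders of magnitude faster than MCMC.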
Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods
NASA Astrophysics Data System (ADS)
Sohn, Ilyoup
During extreme-Mach-number reentry into Earth's atmosphere, spacecraft experience hypersonic non-equilibrium flow conditions that dissociate molecules and ionize atoms. Such situations occur behind a shock wave, leading to high temperatures which have an adverse effect on the thermal protection system and radar communications. Since the electronic energy levels of gaseous species are strongly excited under high-Mach-number conditions, the radiative contribution to the total heat load can be significant. In addition, the radiative heat source within the shock layer may affect the internal energy distribution of dissociated and weakly ionized gas species and the number density of ablative species released from the surface of vehicles. Due to the radiation, the total heat load on the heat shield surface of the vehicle may be altered beyond mission tolerances. Therefore, in the design process of spacecraft the effect of radiation must be considered, and radiation analyses coupled with flow solvers have to be implemented to improve reliability during the vehicle design stage. As the first stage of radiation analyses coupled with gas dynamics, efficient databasing schemes for emission and absorption coefficients were developed to model radiation from hypersonic, non-equilibrium flows. For bound-bound transitions, spectral information including the line-center wavelength and assembled parameters for efficient calculation of emission and absorption coefficients is stored for typical air plasma species. Since the flow is non-equilibrium, a rate equation approach including both collisional and radiatively induced transitions was used to calculate the electronic state populations, assuming quasi-steady state (QSS). The Voigt line shape function was assumed for modeling the line broadening effect. The accuracy and efficiency of the databasing scheme were examined by comparing results of the databasing scheme with those of NEQAIR for the Stardust flowfield.
An accuracy of approximately 1% was achieved, with an efficiency about three times faster than the NEQAIR code. To perform accurate and efficient analyses of chemically reacting flowfield-radiation interactions, the direct simulation Monte Carlo (DSMC) and the photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield-radiation coupling from transitional to peak heating freestream conditions. The non-catalytic and fully catalytic surface conditions were modeled, and good agreement of the stagnation-point convective heating between DSMC and continuum fluid dynamics (CFD) calculations under the assumption of a fully catalytic surface was achieved. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC - FV-PMC simulations with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium are presented. It is found that except for the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both heavy particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC - FV-PMC coupled simulations are compared with previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport. The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rate in the excitation/de-excitation processes and the radiative transport equation (RTE) must be coupled simultaneously to account for non-local effects.
The QSS model is presented to predict the electronic state populations of radiating gas species taking into account non-local radiation. The definition of the escape factor which is dependent on the incoming radiative intensity from over all directions is presented. The effect of the escape factor on the distribution of electronic state populations of the atomic N and O radiating species is examined in a highly non-equilibrium flow condition using DSMC and PMC methods and the corresponding change of the radiative heat flux due to the non-local radiation is also investigated.
The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units
Hall, Clifford; Ji, Weixiao; Blaisten-Barojas, Estela (School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030)
2014-02-01
We present a CPU-GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as a container for simulation data stored on the graphics card and as a floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables, for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU-GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed-phase system of oligopyrrole chains. A benchmark shows a size-scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect several CPU-GPU duets in parallel. Highlights: We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU-GPU duet. The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU-GPU implementation. Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles. The testbed involves a polymeric system of oligopyrroles in the condensed phase. The CPU-GPU parallelization includes dipole-dipole and Mie-Jones classic potentials.
Stochastic method for accommodation of equilibrating basins in kinetic Monte Carlo simulations
Van Siclen, Clinton D
2007-02-01
A computationally simple way to accommodate "basins" of trapping states in standard kinetic Monte Carlo simulations is presented. By assuming the system is effectively equilibrated in the basin, the residence time (time spent in the basin before escape) and the probabilities for transition to states outside the basin may be calculated. This is demonstrated for point defect diffusion over a periodic grid of sites containing a complex basin.
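The standard kinetic Monte Carlo step that this basin treatment generalizes can be sketched in a few lines: pick a transition with probability proportional to its rate, then advance the clock by an exponentially distributed residence time. The rates below are illustrative assumptions, not the author's point-defect model:

```python
import random
import math

def kmc_step(rates, rng):
    """One standard kinetic Monte Carlo (residence-time) step: choose a
    transition with probability rates[k]/R_total, and draw the waiting
    time dt = -ln(u)/R_total before the jump."""
    r_total = sum(rates)
    x = rng.random() * r_total
    acc = 0.0
    for k, r in enumerate(rates):
        acc += r
        if x < acc:
            break
    dt = -math.log(rng.random()) / r_total
    return k, dt

rng = random.Random(42)
# three escape paths with assumed rates (arbitrary units)
rates = [1.0, 0.5, 0.1]
t = 0.0
counts = [0, 0, 0]
for _ in range(20000):
    k, dt = kmc_step(rates, rng)
    counts[k] += 1
    t += dt
```

The basin method of the abstract replaces many such steps inside a trapping region with a single effective residence time and exit-state distribution computed from the equilibrated basin.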
Wang, Haifeng; Popov, Pavel P.; Pope, Stephen B.
2010-03-01
We study a class of methods for the numerical solution of the system of stochastic differential equations (SDEs) that arises in the modeling of turbulent combustion, specifically in the Monte Carlo particle method for the solution of the model equations for the composition probability density function (PDF) and the filtered density function (FDF). This system consists of an SDE for particle position and a random differential equation for particle composition. The numerical methods considered advance the solution in time with (weak) second-order accuracy with respect to the time step size. The four primary contributions of the paper are: (i) establishing that the coefficients in the particle equations can be frozen at the mid-time (while preserving second-order accuracy), (ii) examining the performance of three existing schemes for integrating the SDEs, (iii) developing and evaluating different splitting schemes (which treat particle motion, reaction and mixing on different sub-steps), and (iv) developing the method of manufactured solutions (MMS) to assess the convergence of Monte Carlo particle methods. Tests using MMS confirm the second-order accuracy of the schemes. In general, the use of frozen coefficients reduces the numerical errors. Otherwise no significant differences are observed in the performance of the different SDE schemes and splitting schemes.
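Contribution (i), freezing the coefficients at the mid-time while keeping second-order accuracy, can be illustrated in the deterministic limit. The sketch below uses an assumed scalar drift a(t) = t rather than the paper's composition PDF/FDF equations:

```python
import math

def f(t, x):
    # assumed drift a(t) = t, i.e. dx/dt = t * x (exact solution: x0 * exp(t^2/2))
    return t * x

def step_euler(x, t, dt):
    # coefficient evaluated ("frozen") at the start of the step: first order
    return x + f(t, x) * dt

def step_midtime(x, t, dt):
    # predict to the mid-time, then freeze the coefficient there: second order
    x_mid = x + 0.5 * dt * f(t, x)
    return x + dt * f(t + 0.5 * dt, x_mid)

def integrate(stepper, dt):
    x, t = 1.0, 0.0
    for _ in range(round(1.0 / dt)):
        x = stepper(x, t, dt)
        t += dt
    return x

exact = math.exp(0.5)
err_euler = abs(integrate(step_euler, 0.01) - exact)
err_mid = abs(integrate(step_midtime, 0.01) - exact)
```

Halving dt cuts err_euler roughly in half but cuts err_mid by roughly a factor of four, the signature of second-order accuracy that the paper verifies for the full stochastic schemes via manufactured solutions.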
Calculation of images from an anthropomorphic chest phantom using Monte Carlo methods
NASA Astrophysics Data System (ADS)
Ullman, Gustaf; Malusek, Alexandr; Sandborg, Michael; Dance, David R.; Alm Carlsson, Gudrun
2006-03-01
Monte Carlo (MC) computer simulation of chest x-ray imaging systems has hitherto been performed using anthropomorphic phantoms with too large (3 mm) voxel sizes. The aim of this work was to develop and use a Monte Carlo computer program to compute projection x-ray images of a high-resolution anthropomorphic voxel phantom for visual clinical image quality evaluation and dose optimization. An Alderson anthropomorphic chest phantom was imaged in a CT scanner and reconstructed with isotropic voxels of 0.7 mm. The phantom was segmented and included in a Monte Carlo computer program using the collision density estimator to derive the energies imparted to the detector per unit area of each pixel by scattered photons. The image due to primary photons was calculated analytically, including a pre-calculated detector response function. Attenuation and scatter of x-rays in the phantom, grid and image detector were considered. Imaging conditions (tube voltage, anti-scatter device) were varied and the images compared to a real computed radiography (Fuji FCR 9501) image. Four imaging systems were simulated (two tube voltages, 81 kV and 141 kV, using either a grid with ratio 10 or a 30 cm air gap). The effect of scattered radiation on the visibility of thoracic vertebrae against the heart and lungs is demonstrated. The simplicity of changing the imaging conditions will allow us not only to produce images of existing imaging systems, but also of hypothetical, future imaging systems. We conclude that the calculated images of the high-resolution voxel phantom are suitable for human detection experiments of low-contrast lesions.
MC-Fit: using Monte-Carlo methods to get accurate confidence limits on enzyme parameters.
Dardel, F
1994-06-01
A program is described for estimating enzymatic parameters from experimental data using Apple Macintosh computers. MC-Fit uses iterative least-square fitting and Monte-Carlo sampling to get accurate estimates of the confidence limits. This approach is more robust than the conventional covariance matrix estimation, especially in cases where experimental data is partially lacking or when the standard error on individual measurements is large. This happens quite often when analysing the properties of variant enzymes obtained by mutagenesis, as these can have severely impaired activities and reduced affinities for their substrates. PMID:7922682
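The idea behind MC-Fit's confidence limits can be sketched compactly: fit the data, re-simulate noisy data around the fit, refit each synthetic set, and read percentiles of the refitted parameters. The Michaelis-Menten data, noise level, and Lineweaver-Burk linearization below are illustrative assumptions, not the program's actual iterative least-squares engine:

```python
import random

def linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

def fit_mm(S, v):
    """Vmax, Km via the Lineweaver-Burk linearization
    1/v = (Km/Vmax)*(1/S) + 1/Vmax."""
    slope, intercept = linfit([1.0 / s for s in S], [1.0 / vi for vi in v])
    vmax = 1.0 / intercept
    return vmax, slope * vmax

rng = random.Random(0)
S = [0.5, 1.0, 2.0, 4.0, 8.0]                 # substrate concentrations (assumed)
VMAX_TRUE, KM_TRUE = 10.0, 2.0
mm = lambda s, vmax, km: vmax * s / (km + s)
v_obs = [mm(s, VMAX_TRUE, KM_TRUE) * (1 + rng.gauss(0, 0.03)) for s in S]

vmax_hat, km_hat = fit_mm(S, v_obs)
# Monte Carlo: re-simulate noisy data around the fit, refit, take percentiles
boots = sorted(
    fit_mm(S, [mm(s, vmax_hat, km_hat) * (1 + rng.gauss(0, 0.03)) for s in S])[0]
    for _ in range(2000)
)
lo, hi = boots[50], boots[1949]               # ~95% confidence limits on Vmax
```

Because the limits come from the empirical distribution of refitted parameters, they remain meaningful even when the covariance-matrix approximation breaks down, which is the abstract's point about impaired variant enzymes.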
Betzler, Benjamin R.; Kiedrowski, Brian C.; Brown, Forrest B.; Martin, William R.
2015-08-28
The time-dependent behavior of the energy spectrum in neutron transport was investigated with a formulation, based on continuous-time Markov processes, for computing α eigenvalues and eigenvectors in an infinite medium. In this study, a research Monte Carlo code called TORTE (To Obtain Real Time Eigenvalues) was created and used to estimate elements of a transition rate matrix. TORTE is capable of using both multigroup and continuous-energy nuclear data, and verification was performed. Eigenvalue spectra for infinite homogeneous mixtures were obtained, and an eigenfunction expansion was used to investigate transient behavior of the neutron energy spectrum.
Two active-electron classical trajectory Monte Carlo methods for ion-He collisions
Guzman, F.; Errea, L. F.; Pons, B.
2009-10-15
We introduce two active-electron classical trajectory Monte Carlo models for ion-He collisions, in which the electron-electron force is smoothed using a Gaussian kernel approximation for the pointwise classical particles. A first model uses independent pairs of Gaussian electrons, while a second one employs time-dependent mean-field theory to define an averaged electron-electron repulsion force. These models are implemented for prototypical p+He collisions and the results are compared to available experimental and theoretical data.
Smith, William R.; Lísal, Martin
2002-07-01
A Monte Carlo computer simulation method is presented for directly performing property predictions for fluid systems at fixed total internal energy, U, or enthalpy, H, using a molecular-level system model. The method is applicable to both nonreacting and reacting systems. Potential applications are to (1) adiabatic flash (Joule-Thomson expansion) calculations for nonreacting pure fluids and mixtures at fixed (H,P), where P is the pressure; and (2) adiabatic (flame-temperature) calculations at fixed (U,V) or (H,P), where V is the system volume. The details of the method are presented. The method is compared with existing related simulation methodologies for nonreacting systems, one of which addresses the problem involving fixing portions of U or of H, and one of which solves the problem at fixed H considered here by means of an indirect approach. We illustrate the method by an adiabatic calculation involving the ammonia synthesis reaction. PMID:12241338
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
Response of thermoluminescent dosimeters to photons simulated with the Monte Carlo method
NASA Astrophysics Data System (ADS)
Moralles, M.; Guimarães, C. C.; Okuno, E.
2005-06-01
Personal monitors composed of thermoluminescent dosimeters (TLDs) made of natural fluorite (CaF2:NaCl) and lithium fluoride (Harshaw TLD-100) were exposed to gamma and X rays of different qualities. The GEANT4 radiation transport Monte Carlo toolkit was employed to calculate the energy depth deposition profile in the TLDs. X-ray spectra of the ISO/4037-1 narrow-spectrum series, with peak voltage (kVp) values in the range 20-300 kV, were obtained by simulating a Philips MG-450 X-ray tube associated with the recommended filters. A realistic photon distribution of a 60Co radiotherapy source was taken from results of Monte Carlo simulations found in the literature. Comparison between simulated and experimental results revealed that the attenuation of emitted light in the readout process of the fluorite dosimeter must be taken into account, while this effect is negligible for lithium fluoride. Differences between results obtained by heating the dosimeter from the irradiated side and from the opposite side allowed the determination of the light attenuation coefficient for CaF2:NaCl (mass proportion 60:40) as 2.2 mm^-1.
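The readout-side dependence reported for the fluorite dosimeter follows from light attenuation over the path between the emission depth and the read surface. A minimal numeric sketch, using the abstract's 2.2 mm^-1 coefficient but an assumed dosimeter thickness and depth-dose profile:

```python
import math

MU = 2.2    # light attenuation coefficient of CaF2:NaCl, mm^-1 (from the abstract)
L = 0.9     # assumed dosimeter thickness, mm
N = 1000    # integration steps

def readout(depth_dose, mu, length, from_front=True):
    """Midpoint Riemann sum of the TL light reaching the read surface:
    light emitted at depth z is attenuated over path z (front readout)
    or length - z (back readout)."""
    dz = length / N
    total = 0.0
    for i in range(N):
        z = (i + 0.5) * dz
        path = z if from_front else (length - z)
        total += depth_dose(z) * math.exp(-mu * path) * dz
    return total

# assumed depth-dose: low-energy X rays deposit most energy near the surface
dose = lambda z: math.exp(-1.5 * z)   # 1.5 mm^-1 is illustrative, not from the paper

front = readout(dose, MU, L, from_front=True)
back = readout(dose, MU, L, from_front=False)
```

With dose concentrated near the irradiated face, the front-side readout exceeds the back-side one; the ratio of the two signals is what pins down mu experimentally.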
Structural properties of sodium microclusters (n=4-34) using a Monte Carlo growth method
NASA Astrophysics Data System (ADS)
Poteau, Romuald; Spiegelmann, Fernand
1993-04-01
The structural and electronic properties of small sodium clusters are investigated using a distance-dependent extension of the tight-binding (Hückel) model and a Monte Carlo growth algorithm for the search of the lowest energy isomers. The efficiency and advantages of the Monte Carlo growth algorithm are discussed and the building scheme of sodium microclusters around constituting seeds is explained in detail. The pentagonal-based seeds (pentagonal bipyramids and icosahedral structures) are shown to play an increasing role beyond n=12. Optimized geometries of Nan clusters are obtained in the range n=4-21 and for n=34. In particular, Na20 is found to have C3 symmetry, hardly prolate, with all axial ratios almost equivalent, whereas Na34 has D5h symmetry and consists of a doubly icosahedral seed of 19 atoms surrounded by a ring of 15 atoms. Stabilities, fragmentation channels, and one-electron orbital levels are derived for the lowest isomers and shown to be characterized by a regular odd-even alternation. The present results are generally in good correspondence with previous nuclei-based calculations when available. The global shapes of the clusters, as well as the shape-induced fine-structure splitting of the spherical electronic jellium shell, are found, with a few exceptions, to be also consistent with the ellipsoidal or spheroidal versions of the jellium model.
The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units
NASA Astrophysics Data System (ADS)
Hall, Clifford; Ji, Weixiao; Blaisten-Barojas, Estela
2014-02-01
We present a CPU-GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU-GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU-GPU duets.
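The serial Metropolis MC kernel that such a CPU-GPU engine parallelizes is compact. A minimal single-degree-of-freedom sketch for a harmonic potential (illustrative only; the paper's engine handles inter-molecular interactions and orientational variables):

```python
import random
import math

def metropolis(n_steps, beta, k_spring, step, rng):
    """Textbook Metropolis Monte Carlo for U(x) = 0.5*k*x^2: propose a
    uniform random displacement and accept with min(1, exp(-beta*dU)),
    so the chain samples the Boltzmann distribution exp(-beta*U)."""
    x, samples = 0.0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        dU = 0.5 * k_spring * (x_new ** 2 - x ** 2)
        if dU <= 0.0 or rng.random() < math.exp(-beta * dU):
            x = x_new
        samples.append(x)
    return samples

rng = random.Random(1)
s = metropolis(200000, beta=1.0, k_spring=1.0, step=1.5, rng=rng)
var = sum(v * v for v in s) / len(s)   # should approach kT/k = 1/(beta*k_spring)
```

The GPU-resident version of the paper keeps the configuration and the energy evaluation on the card, so only this accept/reject logic's inputs, never the molecular data, cross the bus.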
Gudjonson, Herman; Kats, Mikhail A.; Liu, Kun; Nie, Zhihong; Kumacheva, Eugenia; Capasso, Federico
2014-01-01
Many experimental systems consist of large ensembles of uncoupled or weakly interacting elements operating as a single whole; this is particularly the case for applications in nano-optics and plasmonics, including colloidal solutions, plasmonic or dielectric nanoparticles on a substrate, antenna arrays, and others. In such experiments, measurements of the optical spectra of ensembles will differ from measurements of the independent elements as a result of small variations from element to element (also known as polydispersity) even if these elements are designed to be identical. In particular, sharp spectral features arising from narrow-band resonances will tend to appear broader and can even be washed out completely. Here, we explore this effect of inhomogeneous broadening as it occurs in colloidal nanopolymers comprising self-assembled nanorod chains in solution. Using a technique combining finite-difference time-domain simulations and Monte Carlo sampling, we predict the inhomogeneously broadened optical spectra of these colloidal nanopolymers and observe significant qualitative differences compared with the unbroadened spectra. The approach combining an electromagnetic simulation technique with Monte Carlo sampling is widely applicable for quantifying the effects of inhomogeneous broadening in a variety of physical systems, including those with many degrees of freedom that are otherwise computationally intractable. PMID:24469797
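The Monte Carlo half of the technique, averaging single-element spectra over a sampled polydispersity distribution, can be sketched with Lorentzian lineshapes standing in for the FDTD-computed spectra (linewidth and spread values are assumed):

```python
import random
import math

def lorentzian(w, w0, gamma):
    """Normalized-peak Lorentzian lineshape centered at w0 with FWHM gamma."""
    return (gamma / 2) ** 2 / ((w - w0) ** 2 + (gamma / 2) ** 2)

rng = random.Random(7)
gamma = 0.05    # homogeneous linewidth (arbitrary units, assumed)
sigma = 0.10    # element-to-element spread of the resonance (assumed polydispersity)
freqs = [i * 0.005 - 1.0 for i in range(401)]

# spectrum of one ideal element vs. the Monte Carlo ensemble average
single = [lorentzian(w, 0.0, gamma) for w in freqs]
samples = [rng.gauss(0.0, sigma) for _ in range(2000)]
ensemble = [sum(lorentzian(w, w0, gamma) for w0 in samples) / len(samples)
            for w in freqs]
```

The ensemble-averaged line is lower and wider than the single-element one, which is exactly the washing-out of sharp features described in the abstract.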
NASA Astrophysics Data System (ADS)
Khisamutdinov, A. I.; Velker, N. N.
2014-05-01
The talk examines a system of pairwise interacting particles, which models a rarefied gas in accordance with the nonlinear Boltzmann equation, the master equations of the Markov evolution of this system, and the corresponding numerical Monte Carlo methods. The selection of an optimal method for simulating rarefied gas dynamics depends on the spatial size of the gas flow domain. For problems with a Knudsen number Kn of order unity, "imitation" (or "continuous time") Monte Carlo methods ([2]) are quite adequate and competitive. However, if Kn <= 0.1 (large spatial sizes), their excessive punctiliousness, namely the need to examine all pairs of particles, leads to a significant increase in computational cost (complexity). We are interested in constructing optimal methods for Boltzmann equation problems with sufficiently large spatial flow sizes. By optimal we mean algorithms for parallel computation to be implemented on high-performance multi-processor computers. A characteristic property of large systems is the weak dependence of sub-parts on each other over sufficiently small time intervals. This property is exploited in approximate methods that use various splittings of the operator of the corresponding master equations. In this paper, we develop an approximate method based on splitting the operator of the master equation system "over groups of particles" ([7]). The essence of the method is that the system of particles is divided into spatial sub-parts that are modeled independently over small intervals of time using the precise "imitation" method. This type of splitting differs from the well-known splitting "over collisions and displacements", which is an attribute of the Direct Simulation Monte Carlo methods. A second attribute of the latter is the grid of "interaction cells", which is completely absent in the imitation methods.
The main topic of the talk is the parallelization of the imitation algorithms with splitting, using the MPI library. The newly constructed algorithms are applied to two problems: the propagation of a temperature discontinuity and plane Poiseuille flow in a field of external forces. In particular, on the basis of the numerical solutions, comparative estimates of the computational cost are given for all algorithms under consideration.
Pazirandeh, Ali; Azizi, Maryam; Farhad Masoudi, S
2006-01-01
Among many conventional techniques, nuclear techniques have been shown to be faster, more reliable, and more effective in detecting explosives. In the present work, neutrons from a 5 Ci Am-Be neutron source placed in a water tank are captured by elements of the soil and the landmine (TNT), namely (14)N, H, C, and O. The prompt capture gamma-ray spectrum taken by a NaI(Tl) scintillation detector indicates the characteristic photo peaks of the elements in soil and landmine. In the high-energy region of the gamma-ray spectrum, besides the 10.829 MeV line of (15)N, the single escape (SE) and double escape (DE) peaks are unmistakable photo peaks, which make the detection of a concealed explosive possible. The soil has the property of moderating neutrons as well as diffusing the thermal neutron flux. Among the many elements in soil, silicon is particularly abundant, and (29)Si emits a 10.607 MeV prompt capture gamma-ray, which makes 10.829 MeV detection difficult. Monte Carlo simulation was used to adjust source-target-detector distances and soil moisture content to yield the best result. We therefore applied MCNP4C to a configuration very close to the reality of a hidden landmine in soil. PMID:16081298
NASA Astrophysics Data System (ADS)
Wang, Dong; Tse, Peter W.
2015-05-01
Slurry pumps are commonly used in oil-sand mining for pumping mixtures of abrasive liquids and solids. These operations cause constant wear of slurry pump impellers, which results in the breakdown of the slurry pumps. This paper develops a prognostic method for estimating remaining useful life of slurry pump impellers. First, a moving-average wear degradation index is proposed to assess the performance degradation of the slurry pump impeller. Secondly, the state space model of the proposed health index is constructed. A general sequential Monte Carlo method is employed to derive the parameters of the state space model. The remaining useful life of the slurry pump impeller is estimated by extrapolating the established state space model to a specified alert threshold. Data collected from an industrial oil sand pump were used to validate the developed method. The results show that the accuracy of the developed method improves as more data become available.
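The sequential Monte Carlo step at the heart of such a prognostic can be sketched as a bootstrap particle filter on a toy degradation index; the drift, noise levels, and alert threshold below are assumed values, not the pump data:

```python
import random
import math

def pf_step(particles, obs, drift, q, r, rng):
    """One bootstrap particle filter update for a degradation index that
    follows a random walk with drift: propagate each particle, weight it
    by the Gaussian likelihood of the new observation, then resample."""
    particles = [x + drift + rng.gauss(0, q) for x in particles]
    weights = [math.exp(-0.5 * ((obs - x) / r) ** 2) for x in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    return rng.choices(particles, weights=weights, k=len(particles))

rng = random.Random(3)
DRIFT, Q, R = 0.1, 0.02, 0.05       # assumed wear rate and noise levels
particles = [0.0] * 500
truth = 0.0
for _ in range(30):
    truth += DRIFT + rng.gauss(0, 0.01)      # hidden wear state
    obs = truth + rng.gauss(0, R)            # noisy wear-index measurement
    particles = pf_step(particles, obs, DRIFT, Q, R, rng)

est = sum(particles) / len(particles)
threshold = 6.0                              # assumed alert threshold
rul_steps = (threshold - est) / DRIFT        # extrapolate drift to threshold
```

Extrapolating each particle (rather than only the mean) to the threshold would give a full remaining-useful-life distribution instead of a point estimate.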
Ma, L X; Wang, F Q; Wang, C A; Wang, C C; Tan, J Y
2015-11-20
Spectral properties of sea foam greatly affect ocean color remote sensing and aerosol optical thickness retrieval from satellite observation. This paper presents a combined Mie theory and Monte Carlo method to investigate visible and near-infrared spectral reflectance and bidirectional reflectance distribution function (BRDF) of sea foam layers. A three-layer model of the sea foam is developed in which each layer is composed of large air bubbles coated with pure water. A pseudo-continuous model and Mie theory for coated spheres is used to determine the effective radiative properties of sea foam. The one-dimensional Cox-Munk surface roughness model is used to calculate the slope density functions of the wind-blown ocean surface. A Monte Carlo method is used to solve the radiative transfer equation. Effects of foam layer thickness, bubble size, wind speed, solar zenith angle, and wavelength on the spectral reflectance and BRDF are investigated. Comparisons between previous theoretical results and experimental data demonstrate the feasibility of our proposed method. Sea foam can significantly increase the spectral reflectance and BRDF of the sea surface. The absorption coefficient of seawater near the surface is not the only parameter that influences the spectral reflectance. Meanwhile, the effects of bubble size, foam layer thickness, and solar zenith angle also cannot be obviously neglected. PMID:26836550
NASA Technical Reports Server (NTRS)
Shinn, Judy L.; Wilson, John W.; Nealy, John E.; Cucinotta, Francis A.
1990-01-01
Continuing efforts toward validating the buildup factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are the candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for cases with various thicknesses of shields and various types of proton spectra. The results are found to be in reasonable agreement but with some overestimation by the buildup factor method when the effect of neutron production in the shield is significant. Future improvement to include neutron coupling in the buildup factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual components of dose, such as those from the secondaries and heavy particle recoils, is obtained between BRYNTRN and Monte Carlo results.
NASA Astrophysics Data System (ADS)
Guo, Shi; Zhu, Minyi; Hu, Shuming; Mitas, Lubos
2013-03-01
Very recently, a quantum Monte Carlo (QMC) method was proposed for Rashba spin-orbit operators which expands the applicability of QMC to systems with variable spins. It is based on incorporating the spin-orbit into the Green's function and thus samples (i.e., rotates) the spinors in the antisymmetric part of the trial function [1]. Here we propose a new alternative for both variational and diffusion Monte Carlo algorithms for calculations of systems with variable spins. Specifically, we introduce a new spin representation which allows us to sample the spin configurations efficiently and without introducing additional fluctuations. We develop the corresponding Green's function which treats the electron spin as a dynamical variable and we use the fixed-phase approximation to eliminate the negative probabilities. The trial wave function is a Slater determinant of spinors and spin-independent Jastrow correlations. The method also has the zero variance property. We benchmark the method on the 2D electron gas with the Rashba interaction and we find very good overall agreement with previously obtained results. Research supported by NSF and ARO.
Lin, Uei-Tyng; Chu, Chien-Hau
2006-05-01
Monte Carlo method was used to simulate the correction factors for electron loss and scattered photons for two improved cylindrical free-air ionization chambers (FACs) constructed at the Institute of Nuclear Energy Research (INER, Taiwan). The method is based on weighting correction factors for mono-energetic photons with X-ray spectra. The newly obtained correction factors for the medium-energy free-air chamber were compared with the current values, which were based on a least-squares fit to experimental data published in the NBS Handbook 64 [Wyckoff, H.O., Attix, F.H., 1969. Design of free-air ionization chambers. National Bureau Standards Handbook, No. 64. US Government Printing Office, Washington, DC, pp. 1-16; Chen, W.L., Su, S.H., Su, L.L., Hwang, W.S., 1999. Improved free-air ionization chamber for the measurement of X-rays. Metrologia 36, 19-24]. The comparison results showed the agreement between the Monte Carlo method and experimental data is within 0.22%. In addition, mono-energetic correction factors for the low-energy free-air chamber were calculated. Average correction factors were then derived for measured and theoretical X-ray spectra at 30-50 kVp. Although the measured and calculated spectra differ slightly, the resulting differences in the derived correction factors are less than 0.02%. PMID:16427292
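The spectrum-weighting step described above amounts to an average of mono-energetic correction factors over the X-ray spectrum. A sketch with hypothetical bins and factors (not INER's values):

```python
def spectrum_weighted_factor(energies, fluence, k_mono):
    """Weight mono-energetic correction factors by the photon spectrum.
    Weights here are fluence-based; a kerma-weighted average would also
    fold in the mass energy-transfer coefficient per bin."""
    total = sum(fluence)
    return sum(f * k_mono[e] for e, f in zip(energies, fluence)) / total

# hypothetical 3-bin spectrum and mono-energetic factors (illustrative only)
energies = [20, 30, 40]                    # keV
fluence = [1.0, 3.0, 2.0]                  # relative photons per bin
k_mono = {20: 1.010, 30: 1.006, 40: 1.004}
k_eff = spectrum_weighted_factor(energies, fluence, k_mono)
```

Because the weighted factor is a convex combination of the per-energy factors, modest differences between measured and calculated spectra shift it only slightly, consistent with the sub-0.02% differences quoted in the abstract.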
A new method to calculate the response of the WENDI-II rem counter using the FLUKA Monte Carlo Code
NASA Astrophysics Data System (ADS)
Jägerhofer, Lukas; Feldbaumer, Eduard; Theis, Christian; Roesler, Stefan; Vincke, Helmut
2012-11-01
The FHT-762 WENDI-II is a commercially available wide range neutron rem counter which uses a 3He counter tube inside a polyethylene moderator. To increase the response above 10 MeV of kinetic neutron energy, a layer of tungsten powder is implemented into the moderator shell. For the purpose of the characterization of the response, a detailed model of the detector was developed and implemented for FLUKA Monte Carlo simulations. In common practice Monte Carlo simulations are used to calculate the neutron fluence inside the active volume of the detector. The resulting fluence is then folded offline with the reaction rate of the 3He(n,p)3H reaction to yield the proton-triton production rate. Consequently this approach does not consider geometrical effects like wall effects, where one or both reaction products leave the active volume of the detector without triggering a count. This work introduces a two-step simulation method which can be used to determine the detector's response, including geometrical effects, directly, using Monte Carlo simulations. A "first step" simulation identifies the 3He(n,p)3H reaction inside the active volume of the 3He counter tube and records its position. In the "second step" simulation the tritons and protons are started in accordance with the kinematics of the 3He(n,p)3H reaction from the previously recorded positions and a correction factor for geometrical effects is determined. The three dimensional Monte Carlo model of the detector as well as the two-step simulation method were evaluated and tested in the well-defined fields of an 241Am-Be(α,n) source as well as in the field of a 252Cf source. Results were compared with measurements performed by Gutermuth et al. [1] at GSI with an 241Am-Be(α,n) source as well as with measurements performed by the manufacturer in the field of a 252Cf source. Both simulation results show very good agreement with the respective measurements.
After validating the method, the response values in terms of counts per unit fluence were calculated for 95 different incident neutron energies between 1 meV and 5 GeV.
Modeling of radiation-induced bystander effect using Monte Carlo methods
NASA Astrophysics Data System (ADS)
Xia, Junchao; Liu, Liteng; Xue, Jianming; Wang, Yugang; Wu, Lijun
2009-03-01
Experiments showed that the radiation-induced bystander effect exists in cells, tissues, or even whole organisms when irradiated with energetic ions or X-rays. In this paper, a Monte Carlo model is developed to study the mechanisms of the bystander effect under sparsely populated cell conditions. This model, based on our previous experiment in which cells were sparsely seeded in a round dish, focuses mainly on spatial characteristics. The simulation results agree well with the experimental data. Moreover, another bystander-effect experiment was also computed with this model, and the model succeeds in predicting its results. The comparison of simulations with the experimental results indicates the feasibility of the model and the validity of some of the vital mechanisms assumed.
A Markov-Chain Monte-Carlo Based Method for Flaw Detection in Beams
Glaser, R E; Lee, C L; Nitao, J J; Hickling, T L; Hanley, W G
2006-09-28
A Bayesian inference methodology using a Markov Chain Monte Carlo (MCMC) sampling procedure is presented for estimating the parameters of computational structural models. This methodology combines prior information, measured data, and forward models to produce a posterior distribution for the system parameters of structural models that is most consistent with all available data. The MCMC procedure is based upon a Metropolis-Hastings algorithm that is shown to function effectively with noisy data, incomplete data sets, and mismatched computational nodes/measurement points. A series of numerical test cases based upon a cantilever beam is presented. The results demonstrate that the algorithm is able to estimate model parameters utilizing experimental data for the nodal displacements resulting from specified forces.
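A stripped-down version of the Metropolis-Hastings loop for estimating a structural parameter from noisy response data might look like this; the cantilever tip-deflection model, flat prior, and numbers are illustrative assumptions, not the report's finite-element setup:

```python
import random
import math

def log_post(ei, obs, force, length, sd):
    """Log-posterior for the bending stiffness EI of a tip-loaded
    cantilever: flat prior on EI > 0, Gaussian measurement noise.
    A toy stand-in for the report's computational structural model."""
    if ei <= 0:
        return -math.inf
    pred = force * length ** 3 / (3.0 * ei)   # tip deflection delta = F L^3 / (3 EI)
    return -sum((d - pred) ** 2 for d in obs) / (2.0 * sd ** 2)

rng = random.Random(11)
EI_TRUE, F, L, SD = 200.0, 1.0, 2.0, 0.0005   # assumed "true" values
delta_true = F * L ** 3 / (3.0 * EI_TRUE)
obs = [delta_true + rng.gauss(0, SD) for _ in range(20)]  # synthetic measurements

ei = 150.0                                    # deliberately poor starting guess
lp = log_post(ei, obs, F, L, SD)
chain = []
for _ in range(20000):
    prop = ei + rng.gauss(0, 5.0)             # random-walk proposal
    lp_new = log_post(prop, obs, F, L, SD)
    if rng.random() < math.exp(min(0.0, lp_new - lp)):   # MH acceptance
        ei, lp = prop, lp_new
    chain.append(ei)
post_mean = sum(chain[5000:]) / 15000         # discard burn-in
```

The spread of the retained chain, not just its mean, is the output that matters: it quantifies how well the noisy displacements constrain the stiffness, which is the role the MCMC posterior plays in the flaw-detection setting.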
Torsional path integral Monte Carlo method for the quantum simulation of large molecules
NASA Astrophysics Data System (ADS)
Miller, Thomas F.; Clary, David C.
2002-05-01
A molecular application is introduced for calculating quantum statistical mechanical expectation values of large molecules at nonzero temperatures. The Torsional Path Integral Monte Carlo (TPIMC) technique applies an uncoupled winding number formalism to the torsional degrees of freedom in molecular systems. The internal energy of the molecules ethane, n-butane, n-octane, and enkephalin are calculated at standard temperature using the TPIMC technique and compared to the expectation values obtained using the harmonic oscillator approximation and a variational technique. All studied molecules exhibited significant quantum mechanical contributions to their internal energy expectation values according to the TPIMC technique. The harmonic oscillator approximation approach to calculating the internal energy performs well for the molecules presented in this study but is limited by its neglect of both anharmonicity effects and the potential coupling of intramolecular torsions.
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M
2010-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10{sup 2-4}), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
Rijken, J D; Harris-Phillips, W; Lawson, J M
2015-03-01
Lithium fluoride thermoluminescent dosimeters (TLDs) exhibit a dependence on the energy of the radiation beam of interest and so need to be carefully calibrated for different energy spectra if used for clinical radiation oncology beam dosimetry and quality assurance. TLD energy response was investigated for a specific set of TLD700:LiF(Mg,Ti) chips for a high dose rate (192)Ir brachytherapy source. A novel method of energy response calculation for (192)Ir was developed where dose was determined through Monte Carlo modelling in Geant4. The TLD response was then measured experimentally. Results showed that TLD700 has a depth dependent response in water ranging from 1.170 ± 0.125 at 20 mm to 0.976 ± 0.043 at 50 mm (normalised to a nominal 6 MV beam response). The method of calibration and Monte Carlo data developed through this study could be easily applied by other Medical Physics departments seeking to use TLDs for (192)Ir patient dosimetry or treatment planning system experimental verification. PMID:25663432
Stoller, Roger E; Golubov, Stanislav I; Becquart, C. S.; Domain, C.
2007-08-01
The multiscale modeling scheme encompasses models from the atomistic to the continuum scale. Phenomena at the mesoscale are typically simulated using reaction rate theory, Monte Carlo, or phase field models. These mesoscale models are appropriate for application to problems that involve intermediate length scales, and timescales from those characteristic of diffusion to long-term microstructural evolution (~μs to years). Although the rate theory and Monte Carlo models can be used to simulate the same phenomena, some of the details are handled quite differently in the two approaches. Models employing the rate theory have been extensively used to describe radiation-induced phenomena such as void swelling and irradiation creep. The primary approximations in such models are time- and spatial averaging of the radiation damage source term, and spatial averaging of the microstructure into an effective medium. Kinetic Monte Carlo models can account for these spatial and temporal correlations; their primary limitation is the computational burden, which is related to the size of the simulation cell. A direct comparison of rate theory (RT) and object kinetic Monte Carlo (OKMC) simulations has been made in the domain of point defect cluster dynamics modeling, which is relevant to the evolution (both nucleation and growth) of radiation-induced defect structures. The primary limitations of the OKMC model are related to computational issues. Even with modern computers, the maximum simulation cell size and the maximum dose (typically much less than 1 dpa) that can be simulated are limited. In contrast, even very detailed RT models can simulate microstructural evolution for doses up to 100 dpa or greater in clock times that are relatively short. Within the context of the effective medium, essentially any defect density can be simulated.
Overall, the agreement between the two methods is best for irradiation conditions which produce a high density of defects (lower temperature and higher displacement rate), and for materials that have a relatively high density of fixed sinks such as dislocations.
Williams, Michael S; Ebel, Eric D
2014-11-18
The fitting of statistical distributions to chemical and microbial contamination data is a common application in risk assessment. These distributions are used to make inferences regarding even the most pedestrian of statistics, such as the population mean. The reason for the heavy reliance on a fitted distribution is the presence of left-, right-, and interval-censored observations in the data sets, with censored observations being the result of nondetects in an assay, the use of screening tests, and other practical limitations. Considerable effort has been expended to develop statistical distributions and fitting techniques for a wide variety of applications. Of the various fitting methods, Markov Chain Monte Carlo methods are common. An underlying assumption for many of the proposed Markov Chain Monte Carlo methods is that the data represent independent and identically distributed (iid) observations from an assumed distribution. This condition is satisfied when samples are collected using a simple random sampling design. Unfortunately, samples of food commodities are generally not collected in accordance with a strict probability design. Nevertheless, pseudosystematic sampling efforts (e.g., collection of a sample hourly or weekly) from a single location in the farm-to-table continuum are reasonable approximations of a simple random sample. The assumption that the data represent an iid sample from a single distribution is more difficult to defend if samples are collected at multiple locations in the farm-to-table continuum or risk-based sampling methods are employed to preferentially select samples that are more likely to be contaminated. This paper develops a weighted bootstrap estimation framework that is appropriate for fitting a distribution to microbiological samples that are collected with unequal probabilities of selection. 
An example based on microbial data, derived by the Most Probable Number technique, demonstrates the method and highlights the magnitude of biases in an estimator that ignores the effects of an unequal probability sample design. PMID:25333423
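A minimal version of such a design-weighted bootstrap can be sketched as follows. The toy data and selection probabilities below are invented for illustration (they are not the paper's MPN data), and the Horvitz-Thompson-style weights 1/p_i are one standard way to correct for unequal probabilities of selection; the contrast with the naive sample mean shows the bias the authors describe.

```python
import random

random.seed(7)

def weighted_bootstrap_mean(values, sel_probs, n_boot=2000, rng=random):
    """Bootstrap distribution of the mean when observation i was drawn with
    unequal selection probability sel_probs[i]: resample with weights 1/p_i."""
    weights = [1.0 / p for p in sel_probs]
    n = len(values)
    return [sum(rng.choices(values, weights=weights, k=n)) / n
            for _ in range(n_boot)]

# Toy sample: contaminated units (value 10) were 5x more likely to be drawn,
# so the raw sample over-represents them relative to the population.
values = [1.0] * 80 + [10.0] * 20
sel_probs = [0.1] * 80 + [0.5] * 20
boot = weighted_bootstrap_mean(values, sel_probs)
weighted_est = sum(boot) / len(boot)    # design-corrected mean, ~1.43 here
naive_est = sum(values) / len(values)   # 2.8: ignores the design, biased upward
```

The weighted estimate corresponds to the implied population (80/0.1 = 800 low units, 20/0.5 = 40 high units), whose mean is 1200/840 ≈ 1.43.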
NASA Astrophysics Data System (ADS)
Wang, Wenlong; Machta, Jonathan; Katzgraber, Helmut G.
2015-07-01
Population annealing is a Monte Carlo algorithm that marries features from simulated-annealing and parallel-tempering Monte Carlo. As such, it is ideal to overcome large energy barriers in the free-energy landscape while minimizing a Hamiltonian. Thus, population-annealing Monte Carlo can be used as a heuristic to solve combinatorial optimization problems. We illustrate the capabilities of population-annealing Monte Carlo by computing ground states of the three-dimensional Ising spin glass with Gaussian disorder, while comparing to simulated-annealing and parallel-tempering Monte Carlo. Our results suggest that population annealing Monte Carlo is significantly more efficient than simulated annealing but comparable to parallel-tempering Monte Carlo for finding spin-glass ground states.
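The algorithm can be sketched end to end on a problem small enough to brute-force. The sketch below anneals a population on a 10-spin ring with Gaussian couplings (a toy stand-in for the 3D spin glass in the paper); the population size, inverse-temperature schedule, and sweep count are arbitrary choices, not the authors' parameters. The two ingredients named in the abstract are visible: Metropolis sweeps at each temperature (simulated-annealing-like) and resampling of the population by the weights exp(-Δβ·E) (the population step).

```python
import itertools, math, random

random.seed(42)

N, R = 10, 300                                  # spins on a ring, population size
J = [random.gauss(0.0, 1.0) for _ in range(N)]  # Gaussian coupling J[i]: spin i <-> i+1

def energy(s):
    return -sum(J[i] * s[i] * s[(i + 1) % N] for i in range(N))

def sweep(s, beta, rng=random):
    """One Metropolis sweep over all spins."""
    for i in range(N):
        dE = 2.0 * s[i] * (J[i - 1] * s[i - 1] + J[i] * s[(i + 1) % N])
        if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
            s[i] = -s[i]
    return s

# Population annealing: lower the temperature while resampling the population.
pop = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(R)]
betas = [0.25 * k for k in range(13)]           # beta: 0.0 -> 3.0
for b_prev, b in zip(betas, betas[1:]):
    w = [math.exp(-(b - b_prev) * energy(s)) for s in pop]
    pop = [list(s) for s in random.choices(pop, weights=w, k=R)]
    for s in pop:
        for _ in range(5):
            sweep(s, b)
best_e = min(energy(s) for s in pop)

# Exact ground state by brute force (2^10 states) for comparison.
gs_e = min(energy(s) for s in itertools.product((-1, 1), repeat=N))
```

For this tiny instance the population reliably contains the exact ground state by the end of the schedule; for the 3D instances in the paper, brute force is impossible and the heuristic comparison the authors perform is the relevant test.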
Dupuis, Paul
2014-03-14
This proposal is concerned with applications of Monte Carlo to problems in physics and chemistry where rare events degrade the performance of standard Monte Carlo. One class of problems is concerned with computation of various aspects of the equilibrium behavior of some Markov process via time averages. The problem to be overcome is that rare events interfere with the efficient sampling of all relevant parts of phase space. A second class concerns sampling transitions between two or more stable attractors. Here, rare events do not interfere with the sampling of all relevant parts of phase space, but make Monte Carlo inefficient because of the very large number of samples required to obtain variance comparable to the quantity estimated. The project uses large deviation methods for the mathematical analyses of various Monte Carlo techniques, and in particular for algorithmic analysis and design. This is done in the context of relevant application areas, mainly from chemistry and biology.
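For the second class of problems, a standard large-deviation-guided remedy is importance sampling with an exponentially tilted proposal. The sketch below is a generic illustration (not the proposal's specific algorithms): estimating the Gaussian tail probability P(X > 4), where crude Monte Carlo with 10^5 samples would see only about three successes, by sampling from N(4, 1) and reweighting with the likelihood ratio.

```python
import math, random

random.seed(0)

def rare_event_is(n, a=4.0, rng=random):
    """Estimate P(X > a) for X ~ N(0,1) by importance sampling:
    draw from the tilted density N(a,1) and weight by phi(x)/phi_a(x)."""
    total = 0.0
    for _ in range(n):
        x = rng.gauss(a, 1.0)                    # proposal centred on the rare region
        if x > a:
            total += math.exp(a * a / 2 - a * x)  # likelihood ratio exp(a^2/2 - a*x)
    return total / n

est = rare_event_is(100_000)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))    # ~3.17e-5
```

The tilted estimator's relative error here is orders of magnitude smaller than crude Monte Carlo at the same sample size, which is precisely the variance pathology described above.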
Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin
2015-01-01
The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419
NASA Astrophysics Data System (ADS)
Densmore, J. D.; Park, H.; Wollaber, A. B.; Rauenzahn, R. M.; Knoll, D. A.
2015-03-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption-emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck-Cummings algorithm.
NASA Astrophysics Data System (ADS)
Srinivasan, P.; Priya, S.; Patel, Tarun; Gopalakrishnan, R. K.; Sharma, D. N.
2015-01-01
DD/DT fusion neutron generators are used as sources of 2.5 MeV/14.1 MeV neutrons in experimental laboratories for various applications. Detailed knowledge of the radiation dose rates around the neutron generators is essential for ensuring radiological protection of the personnel involved with the operation. This work describes the experimental and Monte Carlo studies carried out in the Purnima Neutron Generator facility of the Bhabha Atomic Research Centre (BARC), Mumbai. Verification and validation of the shielding adequacy was carried out by measuring the neutron and gamma dose rates at various locations inside and outside the neutron generator hall during different operational conditions, both for 2.5-MeV and 14.1-MeV neutrons, and comparing with theoretical simulations. The calculated and experimental dose rates were found to agree with a maximum deviation of 20% at certain locations. This study has served in benchmarking the Monte Carlo simulation methods adopted for shield design of such facilities. It has also helped in augmenting the existing shield thickness to reduce the neutron and associated gamma dose rates for radiological protection of personnel during operation of the generators at higher source neutron yields, up to 1 × 10^10 n/s.
NASA Astrophysics Data System (ADS)
Ma, C. Y.; Zhao, J. M.; Liu, L. H.; Zhang, L.; Li, X. C.; Jiang, B. C.
2016-03-01
Inverse identification of the radiative properties of participating media is usually time consuming. In this paper, a GPU-accelerated inverse identification model is presented to obtain the radiative properties of particle suspensions. The sample medium is placed in a cuvette and a narrow light beam is irradiated normally from the side. The forward three-dimensional radiative transfer problem is solved using a massively parallel Monte Carlo method implemented on a graphics processing unit (GPU), and a particle swarm optimization algorithm is applied to inversely identify the radiative properties of particle suspensions based on the measured bidirectional scattering distribution function (BSDF). The GPU-accelerated Monte Carlo simulation significantly reduces the solution time of the radiative transfer simulation and hence greatly accelerates the inverse identification process. Speedups of several hundred times are achieved compared to the CPU implementation. It is demonstrated, using both simulated BSDFs and the experimentally measured BSDF of microalgae suspensions, that the radiative properties of particle suspensions can be effectively identified with the GPU-accelerated algorithm and three-dimensional radiative transfer modelling.
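The inverse step pairs the forward solver with particle swarm optimization. The sketch below is a generic global-best PSO minimizing a stand-in quadratic misfit (all constants, the two "radiative parameters", and the misfit itself are illustrative; in the paper the misfit would compare simulated and measured BSDFs, with the forward Monte Carlo solve inside the objective).

```python
import random

random.seed(4)

def pso_minimise(f, dim, n_part=30, n_iter=200, lo=-5.0, hi=5.0, rng=random):
    """Plain global-best PSO (inertia w, cognitive c1, social c2)."""
    w, c1, c2 = 0.7, 1.5, 1.5
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_part)]
    vs = [[0.0] * dim for _ in range(n_part)]
    pbest = [list(x) for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n_part), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(n_iter):
        for i in range(n_part):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fi = f(xs[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), fi
                if fi < gbest_f:
                    gbest, gbest_f = list(xs[i]), fi
    return gbest, gbest_f

# Toy "inverse problem": recover two parameters from a quadratic misfit.
target = (1.2, 3.4)
misfit = lambda p: (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2
best, best_f = pso_minimise(misfit, dim=2)
```

The point of the GPU acceleration in the paper is that each misfit evaluation hides a full 3D radiative transfer Monte Carlo run, so the swarm's many evaluations dominate the cost.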
NASA Astrophysics Data System (ADS)
Cranmer-Sargison, G.; Beckham, W. A.; Popescu, I. A.
2004-04-01
The goal of this study was to quantify, in a heterogeneous phantom, the difference between experimentally measured beam profiles and those calculated using both a commercial convolution algorithm and the Monte Carlo (MC) method. This was done by arranging a phantom geometry that incorporated a vertical solid water-lung material interface parallel to the beam axis. At nominal x-ray energies of 6 and 18 MV, dose distributions were modelled for field sizes of 10 × 10 cm² and 4 × 4 cm² using the CadPlan 6.0 commercial treatment planning system (TPS) and the BEAMnrc-DOSXYZnrc Monte Carlo package. Beam profiles were measured experimentally at various depths using film dosimetry. The results showed that within the lung region the TPS had a substantial problem modelling the dose distribution. The (film-TPS) profile difference was found to increase, in the lung region, as the field size decreased and the beam energy increased; in the worst case the difference was more than 15%. In contrast, (film-MC) profile differences were not found to be affected by the material density difference. BEAMnrc-DOSXYZnrc successfully modelled the material interface and dose profiles to within 2%.
Ridikas, D; Feray, S; Cometto, M; Damoy, F
2005-01-01
During the decommissioning of the SATURNE accelerator at CEA Saclay (France), a number of concrete containers with radioactive materials of low or very low activity had to be characterised before their final storage. In this paper, a non-destructive approach combining gamma ray spectroscopy and Monte Carlo simulations is used in order to characterise massive concrete blocks containing some radioactive waste. The limits and uncertainties of the proposed method are quantified for the source term activity estimates using 137Cs as a tracer element. A series of activity measurements with a few representative waste containers were performed before and after destruction. It has been found that neither was the distribution of radioactive materials homogeneous nor was its density unique, and this became the major source of systematic errors in this study. Nevertheless, we conclude that by combining gamma ray spectroscopy and full scale Monte Carlo simulations one can estimate the source term activity for some tracer elements such as 134Cs, 137Cs, 60Co, etc. The uncertainty of this estimation should not be bigger than a factor of 2-3. PMID:16381694
Dieudonne, C.; Dumonteil, E.; Malvagi, F.; Diop, C. M.
2013-07-01
For several years, Monte Carlo burnup/depletion codes have been available that couple a Monte Carlo code, which simulates the neutron transport, to a deterministic method that computes the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way makes it possible to track fine three-dimensional effects and to avoid the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the time-expensive Monte Carlo solver called at each time step. Therefore, great improvements in terms of calculation time could be expected if one could get rid of the Monte Carlo transport sequences. For example, it may seem interesting to run an initial Monte Carlo simulation only once, for the first time/burnup step, and then to use the concentration perturbation capability of the Monte Carlo code to replace the other time/burnup steps (the different burnup steps are seen as perturbations of the concentrations of the initial burnup step). This paper presents some advantages and limitations of this technique and preliminary results in terms of speed-up and figure of merit. Finally, we detail different possible calculation schemes based on this method. (authors)
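The Bateman side of such a coupling can be illustrated in isolation. For a two-nuclide chain 1 → 2 → (stable) the Bateman equations have the closed form N₂(t) = N₁(0)·λ₁/(λ₂ − λ₁)·(e^{−λ₁t} − e^{−λ₂t}), against which a stepwise depletion integration (the role the deterministic solver plays between transport steps) can be checked. The decay constants and time step below are illustrative, not from the paper.

```python
import math

def bateman_daughter(n1_0, lam1, lam2, t):
    """Analytic Bateman solution for the daughter in a chain 1 -> 2 -> (stable),
    starting from n1_0 atoms of the parent and none of the daughter."""
    return n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))

# Cross-check with a simple explicit-Euler depletion loop (constant "flux" step).
lam1, lam2, n1_0 = 0.3, 0.05, 1.0e6
dt, n1, n2 = 1.0e-3, n1_0, 0.0
for _ in range(int(10.0 / dt)):          # integrate to t = 10
    dn1 = -lam1 * n1 * dt
    n2 += (lam1 * n1 - lam2 * n2) * dt
    n1 += dn1
analytic = bateman_daughter(n1_0, lam1, lam2, 10.0)
```

In a real burnup code the "decay constants" become a full transmutation matrix whose reaction-rate entries are the quantities supplied by the Monte Carlo transport step.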
Webb, S; Fox, R A
1980-03-01
A Monte Carlo computer program has been used to calculate axial and off-axis depth dose distributions arising from the interaction of an external beam of 60Co radiation with a medium containing inhomogeneities. An approximation for applying the Monte Carlo data to the configuration where the lateral extent of the inhomogeneity is less than the beam area, is also presented. These new Monte Carlo techniques rely on integration over the dose distributions from constituent sub-beams of small area and the accuracy of the method is thus independent of beam size. The power law correction equation (Batho equation) describing the dose distribution in the presence of tissue inhomogeneities is derived in its most general form. By comparison with Monte Carlo reference data, the equation is validated for routine patient dosimetry. It is explained why the Monte Carlo data may be regarded as a fundamental reference point in performing these tests of the extension to the Batho equation. Other analytic correction techniques, e.g. the equivalent radiological path method, are shown to be less accurate. The application of the generalised power law equation in conjunction with CT scanner data is discussed. For ease of presentation, the details of the Monte Carlo techniques and the analytic formula have been separated into appendices. PMID:7384209
Nease, Brian R.; Ueki, Taro
2009-12-10
A time series approach has been applied to the nuclear fission source distribution generated by Monte Carlo (MC) particle transport in order to calculate the non-fundamental mode eigenvalues of the system. The novel aspect is the combination of the general technical principle of projection pursuit for multivariate data with the neutron multiplication eigenvalue problem in the nuclear engineering discipline. Proof is thoroughly provided that the stationary MC process is linear to first-order approximation and that it transforms into one-dimensional autoregressive processes of order one (AR(1)) via the automated choice of projection vectors. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue. All modern MC codes for nuclear criticality calculate the fundamental mode eigenvalue, so the desired mode eigenvalue can be easily determined. This time series approach was tested for a variety of problems, including multi-dimensional ones. Numerical results show that the time series approach has strong potential for three-dimensional whole-reactor-core calculations. The eigenvalue ratio can be updated in an on-the-fly manner without storing the nuclear fission source distributions at all previous iteration cycles for the mean subtraction. Lastly, the effects of degenerate eigenvalues are investigated and solutions are provided.
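The AR(1) reduction can be illustrated numerically. In the sketch below, a synthetic AR(1) series is generated with a known coefficient (0.8, an arbitrary stand-in for the eigenvalue ratio k₁/k₀), and the lag-1 sample autocorrelation recovers it; in the method above, the series would instead be the projected fission source and the recovered coefficient would give the higher-mode eigenvalue once multiplied by the fundamental eigenvalue.

```python
import random

random.seed(3)

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation; for an AR(1) process this estimates
    the autoregressive coefficient."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[t] - mean) * (xs[t + 1] - mean) for t in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

phi = 0.8                                   # stand-in for the ratio k1/k0
x, series = 0.0, []
for _ in range(50_000):
    x = phi * x + random.gauss(0.0, 1.0)    # x_{t+1} = phi * x_t + noise
    series.append(x)
phi_hat = lag1_autocorr(series)
```

The estimator needs only a running mean and two running sums, which is what makes the on-the-fly update mentioned in the abstract possible.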
Modelling of white paints optical degradation using Mie's theory and Monte Carlo method
NASA Astrophysics Data System (ADS)
Duvignacq, Carole; Hespel, Laurent; Roze, Claude; Girasole, Thierry
2003-09-01
During long-term missions, white paints, used as thermal control coatings on satellites, are severely damaged by the effects of the space environment. Reflectance spectra, showing broad absorption bands, are characteristic of the coatings' optical degradation. In this paper, a numerical model simulating the optical degradation of white paints is presented. This model uses Mie's theory, coupled with a random-walk Monte Carlo procedure. With materials like white paints, we are faced with several major difficulties: high pigment charging rate, binder absorption, etc. The problem is even worse in the case of irradiated paints. In parallel with the description of the basis of the model, we give an overview of the problems encountered. Simulation results are presented and discussed for zinc oxide/PDMS type white paints irradiated by 45 keV protons, in accordance with geostationary orbit environment conditions. The effects of the optical properties of the pigment, the pigment volume concentration, and the absorption by the binder on hemispherical reflectance are examined. Comparisons are made with experimental results, and the interest of such a numerical code for the study of the degradation of highly charged materials is discussed.
NASA Astrophysics Data System (ADS)
Bui, Khoa; Papavassiliou, Dimitrios
2012-02-01
The effective thermal conductivity (Keff) of carbon nanotube (CNT) composites is affected by the thermal boundary resistance (TBR) and by the dispersion pattern and geometry of the CNTs. We have previously modeled CNTs as straight cylinders and found that the TBR between CNTs (TBRCNT-CNT) can suppress Keff at high volume fractions of CNTs [1]. Effective medium theory results assume that the CNTs are in a perfect dispersion state and exclude the TBRCNT-CNT [2]. In this work, we report on the development of an algorithm for generating CNTs with worm-like geometry in 3D, and with different persistence lengths. These worm-like CNTs are then randomly placed in a periodic box representing a realistic state, since the persistence length of a CNT can be obtained from microscopic images. The use of these CNT geometries in conjunction with off-lattice Monte Carlo simulations [1] in order to study the effective thermal properties of nanocomposites will be discussed, as well as the effects of the persistence length on Keff and comparisons to straight cylinder models. References [1] K. Bui, B.P. Grady, D.V. Papavassiliou, Chem. Phys. Let., 508(4-6), 248-251, 2011 [2] C.W. Nan, G. Liu, Y. Lin, M. Li, App. Phys. Let., 85(16), 3549-3551, 2006
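A worm-like chain generator of the kind described can be sketched as follows. The construction below is one simple choice, not necessarily the authors' algorithm: each new unit tangent is the previous one plus an isotropic Gaussian kick, renormalised, so a smaller kick scale gives a straighter chain; the persistence length is then recovered from the mean cosine of successive tangents via ⟨cos θ⟩ = exp(−b/l_p). All parameter values are illustrative.

```python
import math, random

random.seed(5)

def wormlike_chain(n_seg, bond=1.0, sigma=0.3, rng=random):
    """Generate a 3D worm-like chain of n_seg segments of length `bond`.
    Each tangent is the previous one plus an isotropic Gaussian kick of
    scale `sigma`, renormalised. Returns (points, estimated persistence length)."""
    pts = [(0.0, 0.0, 0.0)]
    d = (0.0, 0.0, 1.0)
    cos_sum = 0.0
    for _ in range(n_seg):
        kick = tuple(c + sigma * rng.gauss(0.0, 1.0) for c in d)
        norm = math.sqrt(sum(c * c for c in kick))
        d_new = tuple(c / norm for c in kick)
        cos_sum += sum(a * b for a, b in zip(d, d_new))
        d = d_new
        pts.append(tuple(p + bond * c for p, c in zip(pts[-1], d)))
    mean_cos = cos_sum / n_seg
    l_p = -bond / math.log(mean_cos)   # from <cos theta> = exp(-bond / l_p)
    return pts, l_p

pts, l_p = wormlike_chain(20_000)
```

Chains generated this way can then be placed at random (with overlap checks) in a periodic box before running the off-lattice Monte Carlo walkers that probe the effective conductivity.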
Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange.
Hula, Andreas; Montague, P Read; Dayan, Peter
2015-06-01
Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent's preference for equity with their partner, beliefs about the partner's appetite for equity, beliefs about the partner's model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference. PMID:26053429
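The IPOMDP solver itself is far more involved, but the Monte-Carlo tree search family it builds on can be sketched on a toy planning problem. Everything below is illustrative, not the authors' trust task: a three-step counting game where only the action sequence 2, 2, 2 reaches the rewarded target, solved with a minimal UCT that keeps statistics per (depth, state, action) edge.

```python
import math, random

random.seed(11)

HORIZON, TARGET = 3, 6
ACTIONS = (1, 2)

def uct_search(n_iter=2000, c=1.4):
    """Minimal UCT: the tree policy is followed to the horizon (the state
    space here is tiny), then the terminal reward is backed up along the path."""
    visits, value = {}, {}
    for _ in range(n_iter):
        total, depth, path = 0, 0, []
        while depth < HORIZON:
            node_n = sum(visits.get((depth, total, a), 0) for a in ACTIONS)
            def ucb(a):
                n = visits.get((depth, total, a), 0)
                if n == 0:
                    return float("inf")   # try unvisited edges first
                return (value[(depth, total, a)] / n
                        + c * math.sqrt(math.log(node_n) / n))
            a = max(ACTIONS, key=ucb)
            path.append((depth, total, a))
            total += a
            depth += 1
        r = 1.0 if total == TARGET else 0.0   # reward only for hitting TARGET
        for edge in path:
            visits[edge] = visits.get(edge, 0) + 1
            value[edge] = value.get(edge, 0.0) + r
    # Recommend the most-visited root action.
    return max(ACTIONS, key=lambda a: visits.get((0, 0, a), 0))

best_first_action = uct_search()
```

In the trust task the "state" additionally carries beliefs about the partner, and the planning horizon itself becomes a parameter to be inferred from observed choices.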
Analysis of Radiation Effects in Silicon using Kinetic Monte Carlo Methods
Hehr, Brian Douglas
2014-11-25
The transient degradation of semiconductor device performance under irradiation has long been an issue of concern. Neutron irradiation can instigate the formation of quasi-stable defect structures, thereby introducing new energy levels into the bandgap that alter carrier lifetimes and give rise to such phenomena as gain degradation in bipolar junction transistors. Normally, the initial defect formation phase is followed by a recovery phase in which defect-defect or defect-dopant interactions modify the characteristics of the damaged structure. A kinetic Monte Carlo (KMC) code has been developed to model both thermal and carrier injection annealing of initial defect structures in semiconductor materials. The code is employed to investigate annealing in electron-irradiated, p-type silicon as well as the recovery of base current in silicon transistors bombarded with neutrons at the Los Alamos Neutron Science Center (LANSCE) “Blue Room” facility. Our results reveal that KMC calculations agree well with these experiments once adjustments are made, within the appropriate uncertainty bounds, to some of the sensitive defect parameters.
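The core of any such KMC code is the residence-time (BKL/Gillespie) loop: pick an exponential waiting time from the total rate, then execute one event. The sketch below applies it to a deliberately simple case, a defect that anneals out via two competing first-order channels with made-up rates (the real code tracks many defect and dopant species with temperature- and carrier-dependent rates); the survival fraction can be checked against the analytic first-order result.

```python
import math, random

random.seed(2)

def kmc_decay(n0, rates, t_end, rng=random):
    """Residence-time kinetic Monte Carlo for a population of identical
    defects, each removed via competing first-order channels `rates`.
    Returns the number of defects surviving at t_end."""
    n, t = n0, 0.0
    k_per_defect = sum(rates)
    while n > 0:
        k_tot = n * k_per_defect
        t += -math.log(rng.random()) / k_tot   # exponential waiting time
        if t > t_end:
            break
        n -= 1                                  # one defect anneals out
    return n

# Single defect, total rate 1.0, observed to t = 1: survival prob = e^-1.
trials = 20_000
survivors = sum(kmc_decay(1, (0.4, 0.6), 1.0) for _ in range(trials))
frac = survivors / trials
expected = math.exp(-1.0)
```

With more event types, the same loop simply draws which event fires in proportion to its partial rate before advancing the clock.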
Specific absorbed fractions of electrons and photons for Rad-HUMAN phantom using Monte Carlo method
NASA Astrophysics Data System (ADS)
Wang, Wen; Cheng, Meng-Yun; Long, Peng-Cheng; Hu, Li-Qin
2015-07-01
The specific absorbed fractions (SAF) for self- and cross-irradiation are effective tools for the internal dose estimation of inhalation and ingestion intakes of radionuclides. A set of SAFs of photons and electrons were calculated using the Rad-HUMAN phantom, a computational voxel phantom of a Chinese adult female that was created using the color photographic images of the Chinese Visible Human (CVH) data set by the FDS Team. The model can represent most Chinese adult female anatomical characteristics and can be taken as an individual phantom to investigate the difference in internal dose with respect to Caucasians. In this study, the emission of mono-energetic photons and electrons with energies from 10 keV to 4 MeV was simulated using the Monte Carlo particle transport code MCNP. Results were compared with the values from the ICRP reference and ORNL models. The results showed that the SAFs from the Rad-HUMAN phantom have similar trends but are larger than those from the other two models. The differences are due to the racial and anatomical differences in organ mass and inter-organ distance. The SAFs based on the Rad-HUMAN phantom provide accurate and reliable data for internal radiation dose calculations for Chinese females. Supported by Strategic Priority Research Program of Chinese Academy of Sciences (XDA03040000), National Natural Science Foundation of China (910266004, 11305205, 11305203) and National Special Program for ITER (2014GB112001)
Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V
2006-12-31
Based on digital image analysis and the inverse Monte Carlo method, a proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)
Monte Carlo Method in optical diagnostics of skin and skin tissues
NASA Astrophysics Data System (ADS)
Meglinski, Igor V.
2003-12-01
A novel Monte Carlo (MC) technique for photon migration through 3D media with spatially varying optical properties is presented. The employed MC technique combines the statistical weighting variance reduction and real photon path tracing schemes. An overview of the results of applications of the developed MC technique in optical/near-infrared reflectance spectroscopy, confocal microscopy, fluorescence spectroscopy, OCT, Doppler flowmetry and Diffusing Wave Spectroscopy (DWS) is presented. In the framework of the model, skin is represented as a complex inhomogeneous multi-layered medium, where the spatial distributions of blood and chromophores are variable within the depth. Taking into account the variability of cell structure, we represent the interfaces of skin layers as quasi-random periodic wavy surfaces. The rough boundaries between the layers of different refractive indices play a significant role in the distribution of photons within the medium. The absorption properties of skin tissues in the visible and NIR spectral region are estimated by taking into account the anatomical structure of skin as determined from histology, including the spatial distribution of blood vessels, water and melanin content. The model takes into account the spatial distribution of fluorophores following the collagen fibre packing, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. Reasonable estimations for skin blood oxygen saturation and haematocrit are also included. The model is validated against the analytic solution of the photon diffusion equation for a semi-infinite homogeneous highly scattering medium. The results demonstrate that matching the refractive index of the medium significantly improves the contrast and spatial resolution of the spatial photon sensitivity profile.
It is also demonstrated that, when the model is supplied with reasonable physical and structural parameters of biological tissues, the results of skin reflectance spectra simulation agree reasonably well with the results of in vivo skin spectra measurements.
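The statistical-weighting scheme mentioned above can be sketched for a single homogeneous slab. The sketch makes simplifying assumptions relative to the skin model (one layer, isotropic scattering instead of a tissue phase function, made-up optical coefficients): instead of terminating a photon at an absorption event, a fraction μ_a/μ_t of the packet weight is deposited at every interaction and the packet continues with reduced weight, which is exactly why weight is conserved event by event.

```python
import math, random

random.seed(9)

def propagate_photon(mu_a, mu_s, thickness, rng=random):
    """Trace one photon packet through a slab with the statistical-weighting
    scheme: deposit w*mu_a/mu_t at each interaction, keep going with reduced
    weight, isotropic rescattering. Returns (absorbed, reflected, transmitted)."""
    mu_t = mu_a + mu_s
    z, cos_z, w = 0.0, 1.0, 1.0
    absorbed = 0.0
    while w > 1e-4:
        step = -math.log(rng.random()) / mu_t   # free path ~ Exp(mu_t)
        z += cos_z * step
        if z < 0.0:
            return absorbed, w, 0.0             # escaped backwards: reflected
        if z > thickness:
            return absorbed, 0.0, w             # escaped forwards: transmitted
        absorbed += w * mu_a / mu_t             # deposit part of the weight
        w *= mu_s / mu_t
        cos_z = 2.0 * rng.random() - 1.0        # isotropic new direction
    return absorbed + w, 0.0, 0.0               # cutoff: dump the remainder

tot = [0.0, 0.0, 0.0]
n_phot = 20_000
for _ in range(n_phot):
    a, r, t = propagate_photon(mu_a=0.1, mu_s=1.0, thickness=2.0)
    tot = [x + y for x, y in zip(tot, (a, r, t))]
balance = sum(tot) / n_phot                     # absorbed + reflected + transmitted
```

(A production code would terminate low-weight packets by Russian roulette rather than dumping the residual weight, to stay strictly unbiased.)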
Assessment of a fully 3D Monte Carlo reconstruction method for preclinical PET with iodine-124
NASA Astrophysics Data System (ADS)
Moreau, M.; Buvat, I.; Ammour, L.; Chouin, N.; Kraeber-Bodéré, F.; Chérel, M.; Carlier, T.
2015-03-01
Iodine-124 is a radionuclide well suited to the labeling of intact monoclonal antibodies. Yet, accurate quantification in preclinical imaging with I-124 is challenging due to the large positron range and a complex decay scheme including high-energy gammas. The aim of this work was to assess the quantitative performance of a fully 3D Monte Carlo (MC) reconstruction for preclinical I-124 PET. The high-resolution small animal PET Inveon (Siemens) was simulated using GATE 6.1. Three system matrices (SM) of different complexity were calculated in addition to a Siddon-based ray tracing approach for comparison purposes. Each system matrix accounted for a more or less complete description of the physics processes both in the scanned object and in the PET scanner. One homogeneous water phantom and three heterogeneous phantoms including water, lungs and bones were simulated, where hot and cold regions were used to assess activity recovery as well as the trade-off between contrast recovery and noise in different regions. The benefit of accounting for scatter, attenuation, positron range and spurious coincidences occurring in the object when calculating the system matrix used to reconstruct I-124 PET images was highlighted. We found that the use of an MC SM including a thorough modelling of the detector response and physical effects in a uniform water-equivalent phantom was sufficient to obtain reasonable quantitative accuracy in homogeneous and heterogeneous phantoms. Modelling the phantom heterogeneities in the SM did not necessarily yield the most accurate estimate of the activity distribution, due to the high variance affecting many SM elements in the most sophisticated SM.
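Where the system matrix enters is the reconstruction update itself. The sketch below shows a generic ML-EM iteration, x ← x / (Aᵀ1) · Aᵀ(y / Ax), on a hypothetical 2-voxel, 2-bin system matrix with cross-talk (the paper's matrices are Monte Carlo estimates over millions of elements, and the data are noisy; here the data are noise-free so the iteration converges to the exact activity).

```python
def mlem(A, y, n_iter=500):
    """Maximum-likelihood EM reconstruction.
    A[i][j] = probability that a decay in voxel j is detected in bin i."""
    n_bins, n_vox = len(A), len(A[0])
    x = [1.0] * n_vox
    sens = [sum(A[i][j] for i in range(n_bins)) for j in range(n_vox)]
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(n_vox)) for i in range(n_bins)]
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(n_bins)]
        x = [x[j] * sum(A[i][j] * ratio[i] for i in range(n_bins)) / sens[j]
             for j in range(n_vox)]
    return x

# Hypothetical two-voxel, two-bin system matrix with cross-talk.
A = [[0.8, 0.3],
     [0.2, 0.7]]
x_true = [100.0, 50.0]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(2)]  # noise-free data
x_rec = mlem(A, y)
```

The abstract's observation about the most sophisticated SM maps directly onto this update: statistical noise in the A[i][j] estimates propagates into every iteration.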
NASA Astrophysics Data System (ADS)
Quan, Guotao; Wang, Kan; Yang, Xiaoquan; Deng, Yong; Luo, Qingming; Gong, Hui
2012-08-01
The study of dual-modality technology combining micro-computed tomography (micro-CT) and fluorescence molecular tomography (FMT) has become one of the main focuses in FMT. However, because of the diversity of optical properties and the irregular geometry of small animals, a reconstruction method that can effectively utilize the high-resolution structural information of micro-CT for tissue with arbitrary optical properties remains one of the most challenging problems in FMT. We develop a micro-CT-guided non-equal voxel Monte Carlo method for FMT reconstruction. With the guidance of micro-CT, precise voxel binning can be conducted on the irregular boundary or region of interest. A modified Laplacian regularization method is also proposed to accurately reconstruct the distribution of the fluorescent yield for non-equal-space voxels. Simulations and phantom experiments show that this method not only effectively reduces the loss of high-resolution structural information of micro-CT at irregular boundaries and increases the accuracy of the FMT algorithm in both the forward and inverse problems, but also yields a small Jacobian matrix and a short reconstruction time. Finally, we performed small-animal imaging to validate our method.
NASA Astrophysics Data System (ADS)
Luyten, J.; Creemers, C.
2008-07-01
Recently, new parameters for the modified embedded atom method (MEAM) were derived for the ternary Pt-Pd-Rh system. In this work, this validated potential is used in conjunction with Monte Carlo (MC) simulations to study segregation to the (1 1 1) surface for the entire phase diagram of this ternary system. At 1400 K, these simulations reveal two distinct regions. In the major part of the phase diagram, Pd is the segregating component. However, close to the binary Pt-Rh axis, a region is observed in which Pt and Pd co-segregate to the surface. This co-segregation occurs only at higher temperatures as it is the result of two competing exothermic segregation reactions.
NASA Astrophysics Data System (ADS)
Luyten, Jan; Schurmans, Maarten; Creemers, Claude; Bunnik, Bouke S.; Kramer, Gert Jan
2007-04-01
In this work, surface segregation in Pt25Rh75 alloys is studied by Monte Carlo (MC) simulations combined with the modified embedded atom method (MEAM). First, for a more accurate description of the interatomic interactions, new MEAM parameters are derived, based on ab initio density functional theory (DFT) data. Subsequently, the temperature-dependent surface segregation to the low-index single-crystal surfaces of a Pt25Rh75 alloy is calculated with this new parameter set. The simulation results are then confronted with available experimental and theoretical work. A peculiarity of the Pt-Rh system is the possible presence of a bulk demixing region at lower temperatures. This demixing behaviour remains contested. Our results contradict such phase-separation behaviour.
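The Metropolis segregation simulations in the two abstracts above can be caricatured with a two-site-type lattice gas. The segregation energy, temperature, site counts and composition below are all hypothetical, and the single-parameter energetics is a crude stand-in for MEAM:

```python
import math
import random

random.seed(1)

E_SEG = 0.3   # eV, hypothetical energy gain for species A at the surface
KT = 0.05     # eV, roughly 580 K

n_surface, n_bulk = 100, 900
# True = A atom (the segregating species), starting at ~25% everywhere.
surface = [random.random() < 0.25 for _ in range(n_surface)]
bulk = [random.random() < 0.25 for _ in range(n_bulk)]

def metropolis_swap():
    """Attempt to swap a random surface site with a random bulk site."""
    i = random.randrange(n_surface)
    j = random.randrange(n_bulk)
    if surface[i] == bulk[j]:
        return
    # Moving A from bulk to surface lowers the energy by E_SEG.
    dE = E_SEG if surface[i] else -E_SEG
    if dE <= 0 or random.random() < math.exp(-dE / KT):
        surface[i], bulk[j] = bulk[j], surface[i]

for _ in range(200_000):
    metropolis_swap()

surface_frac = sum(surface) / n_surface
print(f"A fraction at surface: {surface_frac:.2f}")
```

At this temperature the surface becomes strongly enriched in the segregating species, the qualitative effect the MEAM/MC studies quantify with realistic energetics.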
NASA Astrophysics Data System (ADS)
Mishchenko, Yuriy
2006-02-01
We suggest an exact approach to help remedy the fermion sign problem in diffusion quantum Monte Carlo simulations. The approach is based on an explicit suppression of symmetric modes in the Schrödinger equation by means of a modified stochastic diffusion process (antisymmetric diffusion process). We introduce this algorithm and illustrate it on potential models in one dimension (1D), showing that there it solves the fermion sign problem exactly and converges to the lowest antisymmetric state of the system. Then, we discuss extensions of this approach to many-dimensional systems on the examples of a quantum oscillator in 2D-20D and a toy model of three and four fermions on harmonic strings in 2D and 3D. We show that in all these cases our method shows a performance comparable to that of a fixed-node approximation with an exact node.
Ohgoe, Takahiro; Kawashima, Naoki
2011-02-15
We study the supercounterfluid (SCF) states in the two-component hard-core Bose-Hubbard model on a square lattice, using the quantum Monte Carlo method based on the worm (directed-loop) algorithm. Since the SCF state is a state of pair condensation characterized by ⟨ab†⟩ ≠ 0, ⟨a⟩ = 0, and ⟨b⟩ = 0, where a and b are the order parameters of the two components, it is important to study the behavior of the pair-correlation function ⟨ab†⟩. For this purpose, we propose a choice of the worm head for calculating the pair-correlation function. From this pair correlation, we confirm the Kosterlitz-Thouless character of the SCF phase. The simulation efficiency is also improved in the SCF phase.
Simulation of Mach-Effect Illusion Using Three-Layered Retinal Cell Model and Monte Carlo Method
NASA Astrophysics Data System (ADS)
Ueno, Akinori; Arai, Ken; Miyashita, Osamu
We proposed a novel retinal model capable of simulating the Mach effect, an optical illusion that emphasizes the edges of an object. The model was constructed from a rod cell layer, a bipolar cell layer, and a ganglion cell layer, with lateral-inhibition and perceptive-field networks introduced between successive layers. Photoelectric conversion for a single-photon incidence at each rod cell was defined by an equation, and the input to the model was simulated as a distribution of photons transmitted through the input image for consecutive incidences by the Monte Carlo method. Since this model successfully simulated not only the Mach-effect illusion but also a DOG-like (Difference-of-Gaussians-like) profile for a spot-light incidence, the model is considered to functionally form the perceptive field of the retinal ganglion cell.
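The lateral-inhibition mechanism behind Mach bands can be sketched in one dimension with a Difference-of-Gaussians filter, the DOG-like profile the abstract mentions. The ramp stimulus, kernel widths and sizes are arbitrary illustrative choices, not the paper's three-layer cell model:

```python
import numpy as np

# 1D luminance profile: dark plateau, linear ramp, bright plateau.
x = np.concatenate([np.zeros(40), np.linspace(0.0, 1.0, 20), np.ones(40)])

# DoG kernel: narrow excitatory center minus broad inhibitory surround,
# a standard caricature of lateral inhibition.
t = np.arange(-15, 16)
center = np.exp(-t**2 / (2 * 2.0**2))
surround = np.exp(-t**2 / (2 * 6.0**2))
dog = center / center.sum() - surround / surround.sum()

response = np.convolve(x, dog, mode="same")
interior = response[16:-16]   # drop zero-padding boundary artifacts

print(interior.min(), interior.max())
```

Although the stimulus is monotonic, the filtered response dips below baseline on the dark side of the ramp and overshoots on the bright side, which is the Mach-band illusion.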
Ganesh, P; Kim, Jeongnim; Park, Changwon; Yoon, Mina; Reboredo, Fernando A; Kent, Paul R C
2014-12-01
Highly accurate diffusion quantum Monte Carlo (QMC) studies of the adsorption and diffusion of atomic lithium in AA-stacked graphite are compared with van der Waals-including density functional theory (DFT) calculations. Predicted QMC lattice constants for pure AA graphite agree with experiment. Pure AA-stacked graphite is shown to challenge many van der Waals methods even when they are accurate for conventional AB graphite. Highest overall DFT accuracy, considering pure AA-stacked graphite as well as lithium binding and diffusion, is obtained by the self-consistent van der Waals functional vdW-DF2, although errors in binding energies remain. Empirical approaches based on point charges such as DFT-D are inaccurate unless the local charge transfer is assessed. The results demonstrate that the lithium-carbon system requires a simultaneous highly accurate description of both charge transfer and van der Waals interactions, favoring self-consistent approaches. PMID:26583215
Wirawan, Rahadi; Waris, Abdul; Djamal, Mitra; Handayani, Gunawan
2015-04-16
The spectrum of gamma energy absorption in a NaI crystal (scintillation detector) is the result of the interaction of gamma photons with the NaI crystal, and it is associated with the energy of the photons incoming to the detector. Through a simulation approach, we can perform an early observation of the gamma energy absorption spectrum in a scintillator crystal detector (NaI) before the experiment is conducted. In this paper, we present the results of a simulation model of the gamma energy absorption spectrum for energies of 100-700 keV (i.e., 297 keV, 400 keV and 662 keV). The simulation was developed based on the concept of a photon-beam point-source distribution and photon cross-section interactions, using the Monte Carlo method. Our computational code successfully predicted the multiple energy-peak absorption spectra derived from multiple photon energy sources.
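A grossly simplified Monte Carlo of this kind of spectrum can be sketched by sampling a per-photon interaction outcome and smearing the deposits with a detector resolution. The interaction probabilities, the uniform Compton-electron sampling and the 7% resolution are invented stand-ins, not real NaI cross sections or Klein-Nishina sampling:

```python
import numpy as np

rng = np.random.default_rng(42)

E0 = 662.0              # keV, e.g. Cs-137
n_photons = 50_000
p_photo, p_compton = 0.35, 0.40   # hypothetical outcome probabilities

deposits = []
for _ in range(n_photons):
    u = rng.random()
    if u < p_photo:
        deposits.append(E0)                    # full-energy (photopeak) event
    elif u < p_photo + p_compton:
        # Single Compton scatter, scattered photon escapes: electron
        # energy drawn uniformly up to the Compton edge 2aE0/(1+2a).
        a = E0 / 511.0
        e_max = E0 * 2 * a / (1 + 2 * a)
        deposits.append(rng.random() * e_max)
    # else: photon misses or passes through, no deposit

deposits = np.array(deposits)
sigma = 0.07 * deposits / 2.355                # 7% FWHM resolution
measured = deposits + rng.normal(0.0, 1.0, deposits.size) * sigma
spectrum, edges = np.histogram(measured, bins=200, range=(0.0, 800.0))

peak_channel = int(np.argmax(spectrum))
peak_energy = 0.5 * (edges[peak_channel] + edges[peak_channel + 1])
print(f"photopeak near {peak_energy:.0f} keV")
```

Even this toy model reproduces the qualitative structure of the measured spectrum: a Compton continuum below the edge topped by a Gaussian photopeak at the source energy.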
NASA Astrophysics Data System (ADS)
Nasser, Hassan; Marre, Olivier; Cessac, Bruno
2013-03-01
Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review on recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited for the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential to fit MaxEnt spatio-temporal models to large neural ensembles.
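The Monte Carlo fitting of MaxEnt models rests on being able to sample from a pairwise (Ising-like) distribution over spike patterns. The sketch below runs plain single-site Metropolis on a tiny synchronous model with made-up parameters and checks the sampled firing rates against brute-force enumeration; the paper's spatio-temporal models and fitting loop are far beyond this:

```python
from itertools import product

import numpy as np

rng = np.random.default_rng(0)

# Pairwise MaxEnt model over binary patterns s in {0,1}^N:
# P(s) ∝ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j), toy parameters.
N = 5
h = rng.normal(-1.0, 0.5, N)     # biases (low firing rates)
J = np.triu(rng.normal(0.0, 0.3, (N, N)), 1)   # i<j couplings only

def energy(s):
    return -(h @ s + s @ J @ s)

# Metropolis sampling: flip one neuron at a time.
s = rng.integers(0, 2, N)
samples = []
for step in range(60_000):
    s2 = s.copy()
    s2[rng.integers(N)] ^= 1
    dE = energy(s2) - energy(s)
    if dE <= 0 or rng.random() < np.exp(-dE):
        s = s2
    if step >= 10_000 and step % 10 == 0:      # burn-in, then thin
        samples.append(s.copy())

rates = np.mean(samples, axis=0)               # estimated firing probabilities

# Exact marginals by enumeration (feasible only because N is tiny).
states = np.array(list(product([0, 1], repeat=N)))
w = np.exp([-energy(st) for st in states])
exact = (states * (w / w.sum())[:, None]).sum(axis=0)
print(np.round(rates, 2), np.round(exact, 2))
```

In a real fit these sampled statistics would be compared with the data's statistics and the parameters h and J updated accordingly.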
Yamamoto, Takehisa; Tsutsui, Toshiyuki; Nishiguchi, Akiko; Kobayashi, Sota; Tsukamoto, Kenji; Saito, Takehiko; Mase, Masaji; Okamatsu, Masatoshi
2007-06-01
In June 2005, an outbreak of avian influenza (AI) caused by a low pathogenic H5N2 virus was identified in Japan. A serological surveillance was conducted because the infected chickens did not show any clinical signs. The Markov chain Monte Carlo method was used to evaluate the performance of the serological HI and AGP tests because there was not enough time to conduct a test evaluation when the surveillance was initiated. The sensitivity of the AGP test (0.67) was lower than that of the HI test (0.99), while the specificities were high for both tests (0.96 for AGP and 0.90 for HI). Based on the low sensitivity of the AGP test, the HI test was used for primary screening in later stages of the epidemic. PMID:17611370
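The MCMC machinery behind such a test evaluation can be illustrated on a deliberately simplified case: sampling the posterior of a single sensitivity from hypothetical validation counts with a flat prior. The actual surveillance analysis had no gold standard and required a more elaborate latent-class MCMC; the numbers below are invented:

```python
import math
import random

random.seed(7)

# Hypothetical validation data: k test-positives among n truly infected.
n, k = 150, 100

def log_post(se):
    """Binomial log-likelihood with a flat prior on (0, 1)."""
    if not 0.0 < se < 1.0:
        return -math.inf
    return k * math.log(se) + (n - k) * math.log(1.0 - se)

se = 0.5
draws = []
for step in range(40_000):
    prop = se + random.gauss(0.0, 0.05)        # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(se):
        se = prop
    if step >= 5_000:                           # discard burn-in
        draws.append(se)

mean_se = sum(draws) / len(draws)
print(f"posterior mean sensitivity ~ {mean_se:.3f}")
```

The chain's mean converges to the analytic Beta-posterior mean, which is the kind of check one would run before trusting the sampler on the harder no-gold-standard model.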
A method for photon beam Monte Carlo multileaf collimator particle transport
NASA Astrophysics Data System (ADS)
Siebers, Jeffrey V.; Keall, Paul J.; Kim, Jong Oh; Mohan, Radhe
2002-09-01
Monte Carlo (MC) algorithms are recognized as the most accurate methodology for patient dose assessment. For intensity-modulated radiation therapy (IMRT) delivered with dynamic multileaf collimators (DMLCs), accurate dose calculation, even with MC, is challenging. Accurate IMRT MC dose calculations require inclusion of the moving MLC in the MC simulation. Due to its complex geometry, full transport through the MLC can be time consuming. The aim of this work was to develop an MLC model for photon beam MC IMRT dose computations. The basis of the MC MLC model is that the complex MLC geometry can be separated into simple geometric regions, each of which readily lends itself to simplified radiation transport. For photons, only attenuation and first Compton scatter interactions are considered. The amount of attenuation material an individual particle encounters while traversing the entire MLC is determined by adding the individual amounts from each of the simplified geometric regions. Compton scatter is sampled based upon the total thickness traversed. Pair production and electron interactions (scattering and bremsstrahlung) within the MLC are ignored. The MLC model was tested for 6 MV and 18 MV photon beams by comparing it with measurements and MC simulations that incorporate the full physics and geometry for fields blocked by the MLC and with measurements for fields with the maximum possible tongue-and-groove and tongue-or-groove effects, for static test cases and for sliding windows of various widths. The MLC model predicts the field size dependence of the MLC leakage radiation within 0.1% of the open-field dose. The entrance dose and beam hardening behind a closed MLC are predicted within +/-1% or 1 mm. Dose undulations due to differences in inter- and intra-leaf leakage are also correctly predicted. The MC MLC model predicts leaf-edge tongue-and-groove dose effect within +/-1% or 1 mm for 95% of the points compared at 6 MV and 88% of the points compared at 18 MV. 
The dose through a static leaf tip is also predicted generally within +/-1% or 1 mm. Tests with sliding windows of various widths confirm the accuracy of the MLC model for dynamic delivery and indicate that accounting for a slight leaf position error (0.008 cm for our MLC) will improve the accuracy of the model. The MLC model developed is applicable to both dynamic MLC and segmental MLC IMRT beam delivery and will be useful for patient IMRT dose calculations, pre-treatment verification of IMRT delivery and IMRT portal dose transmission dosimetry.
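The core bookkeeping of the MLC model, summing the material thickness a ray sees in each simplified geometric region and attenuating accordingly, can be sketched in a few lines. The attenuation coefficient and thicknesses are hypothetical values, and first-Compton-scatter sampling is omitted:

```python
import math

MU_TUNGSTEN = 0.5   # 1/cm, hypothetical effective coefficient at ~2 MeV

def transmission(thicknesses_cm):
    """Photon survival probability through a list of per-region leaf
    thicknesses; regions with no leaf material contribute 0 cm."""
    total = sum(thicknesses_cm)          # the paper's key simplification:
    return math.exp(-MU_TUNGSTEN * total)  # only the summed path matters

# A ray crossing a full leaf body (6 cm) versus one threading a
# tongue-and-groove region with only partial overlap (0.6 cm twice):
full_leaf = transmission([6.0])
tg_gap = transmission([0.6, 0.6])
print(full_leaf, tg_gap)
```

The higher transmission through the partially overlapped region is exactly the tongue-and-groove leakage effect whose dose signature the paper validates against measurement.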
NASA Technical Reports Server (NTRS)
Palmer, Grant; Prabhu, Dinesh; Cruden, Brett A.
2013-01-01
The 2013-2022 Decadal Survey for planetary exploration has identified probe missions to Uranus and Saturn as high priorities. This work endeavors to examine the uncertainty in determining aeroheating in such entry environments. Representative entry trajectories are constructed using the TRAJ software. Flowfields at selected points on the trajectories are then computed using the Data Parallel Line Relaxation (DPLR) computational fluid dynamics code. A Monte Carlo study is performed on the DPLR input parameters to determine the uncertainty in the predicted aeroheating, and correlation coefficients are examined to identify which input parameters have the most influence on the uncertainty. A review of the present best practices for input parameters (e.g., transport coefficients and vibrational relaxation time) is also conducted. It is found that the 2σ uncertainty for heating on Uranus entry is no more than 2.1%, assuming an equilibrium catalytic wall, with the uncertainty determined primarily by diffusion and the H₂ recombination rate within the boundary layer. However, if the wall is assumed to be partially or non-catalytic, this uncertainty may increase to as much as 18%. The catalytic wall model can contribute an over 3× change in heat flux and a 20% variation in film coefficient. Therefore, coupled material-response/fluid-dynamics models are recommended for this problem. It was also found that much of this variability is artificially suppressed when a constant Schmidt number approach is implemented. Because the boundary layer is reacting, it is necessary to employ self-consistent effective binary diffusion to obtain a correct thermal transport solution. For Saturn entries, the 2σ uncertainty for convective heating was less than 3.7%. The major uncertainty driver depended on shock temperature/velocity, changing from boundary-layer thermal conductivity to diffusivity and then to shock-layer ionization rate as velocity increases.
While radiative heating for Uranus entry was negligible, the nominal solution for Saturn computed up to 20% radiative heating at the highest velocity examined. The radiative heating followed a non-normal distribution, with up to a 3× variation in magnitude. This uncertainty is driven by the H₂ dissociation rate, as H₂ that persists in the hot non-equilibrium zone contributes significantly to radiation.
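The Monte Carlo input-uncertainty workflow described above, sample the uncertain inputs, run the code, then rank drivers by correlation with the output, can be sketched with a made-up linear surrogate in place of DPLR. The parameter names, spreads and sensitivities below are all invented:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 2000
# Relative perturbations of three hypothetical input parameters.
diffusion = rng.normal(1.0, 0.10, n)   # diffusion coefficient factor
recomb = rng.normal(1.0, 0.20, n)      # H2 recombination rate factor
relax = rng.normal(1.0, 0.15, n)       # vibrational relaxation factor

# Invented surrogate response standing in for the CFD heat flux.
heat_flux = 100.0 * (1 + 0.30 * (diffusion - 1)
                       + 0.25 * (recomb - 1)
                       + 0.02 * (relax - 1))

two_sigma_pct = 2 * heat_flux.std() / heat_flux.mean() * 100
corr = {name: np.corrcoef(p, heat_flux)[0, 1]
        for name, p in [("diffusion", diffusion),
                        ("recomb", recomb),
                        ("relax", relax)]}
ranking = sorted(corr, key=lambda k: -abs(corr[k]))
print(f"2-sigma uncertainty: {two_sigma_pct:.1f}%", ranking)
```

The correlation ranking singles out the inputs whose spread actually propagates into the output, which is how the study identifies diffusion and recombination as the dominant drivers.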
NASA Technical Reports Server (NTRS)
Tsang, L.; Lou, S. H.; Chan, C. H.
1991-01-01
The extended boundary condition method is applied to Monte Carlo simulations of two-dimensional random rough surface scattering. The numerical results are compared with one-dimensional random rough surfaces obtained from the finite-element method. It is found that the mean scattered intensity from two-dimensional rough surfaces differs from that of one dimension for rough surfaces with large slopes.
NASA Technical Reports Server (NTRS)
Jensen, K. A.; Ripoll, J.-F.; Wray, A. A.; Joseph, D.; ElHafi, M.
2004-01-01
Five computational methods for solution of the radiative transfer equation in an absorbing-emitting, non-scattering gray medium were compared on a 2 m JP-8 pool fire. The temperature and absorption coefficient fields were taken from a synthetic fire due to the lack of a complete set of experimental data for fires of this size. These quantities were generated by a code that has been shown to agree well with the limited quantity of relevant data in the literature. Reference solutions to the governing equation were determined using the Monte Carlo method and a ray-tracing scheme with high angular resolution. Solutions using the discrete transfer method, the discrete ordinate method (DOM) with both S₄ and LC₁₁ quadratures, and a moment model using the M₁ closure were compared to the reference solutions in both isotropic and anisotropic regions of the computational domain. DOM LC₁₁ is shown to be more accurate than the commonly used S₄ quadrature technique, especially in anisotropic regions of the fire domain. This represents the first study in which the M₁ method was applied to a combustion problem occurring in a complex three-dimensional geometry. The M₁ results agree well with other solution techniques, which is encouraging for future applications to similar problems, since it is computationally the least expensive solution technique. Moreover, M₁ results are comparable to DOM S₄.
NASA Astrophysics Data System (ADS)
Vozinaki, Anthi Eirini K.; Karatzas, George P.; Sibetheros, Ioannis A.; Varouchakis, Emmanouil A.
2014-05-01
Damage curves are the most significant component of the flood loss estimation models. Their development is quite complex. Two types of damage curves exist, historical and synthetic curves. Historical curves are developed from historical loss data from actual flood events. However, due to the scarcity of historical data, synthetic damage curves can be alternatively developed. Synthetic curves rely on the analysis of expected damage under certain hypothetical flooding conditions. A synthetic approach was developed and presented in this work for the development of damage curves, which are subsequently used as the basic input to a flood loss estimation model. A questionnaire-based survey took place among practicing and research agronomists, in order to generate rural loss data based on the responders' loss estimates, for several flood condition scenarios. In addition, a similar questionnaire-based survey took place among building experts, i.e. civil engineers and architects, in order to generate loss data for the urban sector. By answering the questionnaire, the experts were in essence expressing their opinion on how damage to various crop types or building types is related to a range of values of flood inundation parameters, such as floodwater depth and velocity. However, the loss data compiled from the completed questionnaires were not sufficient for the construction of workable damage curves; to overcome this problem, a Weighted Monte Carlo method was implemented, in order to generate extra synthetic datasets with statistical properties identical to those of the questionnaire-based data. The data generated by the Weighted Monte Carlo method were processed via Logistic Regression techniques in order to develop accurate logistic damage curves for the rural and the urban sectors. A Python-based code was developed, which combines the Weighted Monte Carlo method and the Logistic Regression analysis into a single code (WMCLR Python code). 
Each WMCLR code execution provided a flow velocity-depth damage curve for a specific land use. More specifically, each WMCLR code execution for the agricultural sector generated a damage curve for a specific crop and for every month of the year, thus relating the damage to any crop with floodwater depth, flow velocity and the growth phase of the crop at the time of flooding. Respectively, each WMCLR code execution for the urban sector developed a damage curve for a specific building type, relating structural damage with floodwater depth and velocity. Furthermore, two techno-economic models were developed in Python programming language, in order to estimate monetary values of flood damages to the rural and the urban sector, respectively. A new Monte Carlo simulation was performed, consisting of multiple executions of the techno-economic code, which generated multiple damage cost estimates. Each execution used the proper WMCLR simulated damage curve. The uncertainty analysis of the damage estimates established the accuracy and reliability of the proposed methodology for the synthetic damage curves' development.
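The WMCLR pipeline, weighted Monte Carlo expansion of sparse expert data followed by a logistic fit, can be sketched for a single depth-damage curve. The questionnaire points, weights and jitter below are invented, the velocity dimension is dropped, and gradient ascent stands in for whatever fitting routine the Python code actually used:

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical expert responses for one crop: flood depth (m), the
# damage fraction the experts expect, and a confidence weight.
depth = np.array([0.2, 0.5, 1.0, 1.5, 2.0])
p_damage = np.array([0.05, 0.2, 0.55, 0.8, 0.95])
weight = np.array([1.0, 2.0, 3.0, 2.0, 1.0])

# Weighted Monte Carlo: resample scenarios by weight, jitter the depth,
# and draw a binary damaged/undamaged outcome per synthetic sample.
idx = rng.choice(len(depth), size=4000, p=weight / weight.sum())
x = depth[idx] + rng.normal(0.0, 0.05, idx.size)
y = (rng.random(idx.size) < p_damage[idx]).astype(float)

# Logistic regression P(damage) = sigmoid(b0 + b1*depth), fitted by
# gradient ascent on the mean log-likelihood.
b0, b1 = 0.0, 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
    b0 += 1.0 * np.mean(y - p)
    b1 += 1.0 * np.mean((y - p) * x)

def curve(d):
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * d)))

print(round(curve(1.0), 2), round(curve(2.0), 2))
```

The fitted sigmoid is the damage curve: it turns any floodwater depth into an expected damage fraction, ready for the downstream techno-economic cost model.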
An MLE method for finding LKB NTCP model parameters using Monte Carlo uncertainty estimates
NASA Astrophysics Data System (ADS)
Carolan, Martin; Oborn, Brad; Foo, Kerwyn; Haworth, Annette; Gulliford, Sarah; Ebert, Martin
2014-03-01
The aims of this work were to establish a program to fit NTCP models to clinical data with multiple toxicity endpoints, to test the method using a realistic test dataset, to compare three methods for estimating confidence intervals for the fitted parameters and to characterise the speed and performance of the program.
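The LKB NTCP model being fitted takes the probit form NTCP(D) = Φ((D − TD50)/(m·TD50)). A minimal maximum-likelihood sketch, generating toy patient outcomes from known parameters and recovering them by grid search, is shown below; the grid search is a stand-in for the paper's fitting program, and no volume-effect (n) parameter or confidence-interval machinery is included:

```python
import math
import random

random.seed(5)

def phi(t):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

TD50_TRUE, M_TRUE = 60.0, 0.15          # hypothetical true parameters
doses = [random.uniform(30.0, 90.0) for _ in range(400)]
tox = [random.random() < phi((d - TD50_TRUE) / (M_TRUE * TD50_TRUE))
       for d in doses]

def neg_log_lik(td50, m):
    nll = 0.0
    for d, y in zip(doses, tox):
        p = min(max(phi((d - td50) / (m * td50)), 1e-9), 1 - 1e-9)
        nll -= math.log(p) if y else math.log(1.0 - p)
    return nll

best = min(((neg_log_lik(td50, m), td50, m)
            for td50 in [50 + 0.5 * i for i in range(41)]
            for m in [0.05 + 0.01 * j for j in range(31)]),
           key=lambda t: t[0])
print(f"fitted TD50 = {best[1]:.1f} Gy, m = {best[2]:.2f}")
```

With a few hundred simulated patients the MLE lands close to the generating parameters; the spread of such fits over resampled datasets is what the paper's Monte Carlo confidence-interval estimates quantify.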
ERIC Educational Resources Information Center
Carsey, Thomas M.; Harden, Jeffrey J.
2015-01-01
Graduate students in political science come to the discipline interested in exploring important political questions, such as "What causes war?" or "What policies promote economic growth?" However, they typically do not arrive prepared to address those questions using quantitative methods. Graduate methods instructors must…
Post-DFT methods for Earth materials: Quantum Monte Carlo simulations of (Mg,Fe)O (Invited)
NASA Astrophysics Data System (ADS)
Driver, K. P.; Militzer, B.; Cohen, R. E.
2013-12-01
(Mg,Fe)O is a major mineral phase in Earth's lower mantle that plays a key role in determining the structural and dynamical properties of the deep Earth. A pressure-induced spin-pairing transition of Fe has been the subject of numerous theoretical and experimental studies due to its consequential effects on lower-mantle physics. The standard density functional theory (DFT) method does not treat strongly correlated electrons properly, and results can depend on the choice of exchange-correlation functional. DFT+U offers significant improvement over standard DFT for treating strongly correlated electrons. Indeed, DFT+U calculations and experiments have narrowed the ambient spin-transition pressure to 40-60 GPa in (Mg,Fe)O. However, DFT+U is not an ideal method due to its dependence on the Hubbard U parameter, among other approximations. In order to further clarify details of the spin transition, it is necessary to use methods that explicitly treat the effects of electron exchange and correlation, such as quantum Monte Carlo (QMC). Here, we will discuss methods of going beyond standard DFT and present QMC results on (Mg,Fe)O elastic properties and the spin-transition pressure in order to benchmark DFT+U results.
Investigation of Collimator Influential Parameter on SPECT Image Quality: a Monte Carlo Study
Banari Bahnamiri, Sh.
2015-01-01
Background Obtaining high-quality images with a Single Photon Emission Computed Tomography (SPECT) device is the most important goal in nuclear medicine, since low image quality raises the risk of error in diagnosing and treating the patient. Studying the factors that affect the spatial resolution of imaging systems is thus vital. One of the most important factors in SPECT imaging is the use of a collimator appropriate to the features of a given radiopharmaceutical, since the collimator strongly affects the Full Width at Half Maximum (FWHM), the main parameter of spatial resolution. Method In this research, the detector and collimator of the SPECT imaging device Model HD3 (Philips Co.) were simulated, and the influential collimator parameters were investigated, using the MCNP-4C code. Results The experimental measurements and the simulation calculations agreed to within a relative difference of less than 5%, confirming the accuracy of the MCNP simulation. Conclusion This is the first essential step in the design and modelling of new collimators for creating high-quality images in nuclear medicine. PMID:25973410
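The FWHM figure of merit the study optimizes can be read off a simulated spread function. In the sketch below a plain Gaussian blur stands in for the full collimator/detector simulation, and the width and event count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(9)

sigma_mm = 4.0                       # hypothetical blur of the system
positions = rng.normal(0.0, sigma_mm, 200_000)   # detected event positions

# Histogram the line-spread profile and measure its width at half maximum.
profile, edges = np.histogram(positions, bins=200, range=(-25.0, 25.0))
centers = 0.5 * (edges[:-1] + edges[1:])

half_max = profile.max() / 2.0
above = centers[profile >= half_max]
fwhm = above.max() - above.min()
print(f"FWHM = {fwhm:.2f} mm (Gaussian theory: {2.355 * sigma_mm:.2f} mm)")
```

For a Gaussian response the measured FWHM should track 2.355σ; in the actual study the profile comes from the MCNP-simulated collimator geometry rather than an assumed Gaussian.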
NASA Astrophysics Data System (ADS)
Benhdech, Yassine; Beaumont, Stéphane; Guédon, Jean-Pierre; Torfeh, Tarraf
2010-04-01
In this paper, we extend the R&D program named DTO-DC (Digital Test Object and Dosimetric Console), whose goal is to develop an efficient, accurate and complete method to achieve dosimetric quality control (QC) of radiotherapy treatment planning systems (TPS). This method is mainly based on Digital Test Objects (DTOs) and on Monte Carlo (MC) simulation using the PENELOPE code [1]. These benchmark simulations can advantageously replace the experimental measurements typically used as the reference for comparison with the TPS-calculated dose. Indeed, MC simulations, rather than dosimetric measurements, allow QC to be contemplated without tying up treatment devices, and in many situations (e.g., heterogeneous media, lack of scattering volume) they offer better accuracy than dose measurements with the classical dosimetry equipment of a radiation therapy department. Furthermore, using MC simulations and DTOs, i.e. fully numerical QC tools, also simplifies QC implementation and enables process automation, allowing radiotherapy centers to carry out a more complete and thorough QC. The DTO-DC program was established primarily on an ELEKTA accelerator (photon mode) using non-anatomical DTOs [2]. Today our aim is to complete and apply this program on a VARIAN accelerator (photon and electron modes) using anatomical DTOs. First, we developed, modeled and created three anatomical DTOs in DICOM format: 'Head and Neck', Thorax and Pelvis. We parallelized the PENELOPE code using MPI libraries to accelerate the calculations and modeled the treatment head of the Varian Clinac 2100CD (photon mode) in PENELOPE geometry. Then, to implement the method, we calculated the dose distributions in the Pelvis DTO using PENELOPE and the ECLIPSE TPS. Finally, we compared the simulated and calculated dose distributions employing the relative difference proposed by Venselaar [3]. The results of this work demonstrate the feasibility of this method, which provides a more accurate and more easily achievable QC.
Nonetheless, this method, implemented on ECLIPSE TPS version 8.6.15, revealed large discrepancies (11%) between the Monte Carlo simulations and the AAA algorithm calculations, especially in air-equivalent and bone-equivalent areas. Our work will be completed by dose measurements (with film) in heterogeneous media to validate the MC simulations.
Shi, C. Y.; Xu, X. George; Stabin, Michael G.
2008-07-15
Estimates of radiation absorbed doses from radionuclides internally deposited in a pregnant woman and her fetus are very important due to elevated fetal radiosensitivity. This paper reports a set of specific absorbed fractions (SAFs) for use with the dosimetry schema developed by the Society of Nuclear Medicine's Medical Internal Radiation Dose (MIRD) Committee. The calculations were based on three newly constructed pregnant female anatomic models, called RPI-P3, RPI-P6, and RPI-P9, that represent adult females at 3-, 6-, and 9-month gestational periods, respectively. Advanced Boundary REPresentation (BREP) surface-geometry modeling methods were used to create anatomically realistic geometries and organ volumes that were carefully adjusted to agree with the latest ICRP reference values. A Monte Carlo user code, EGS4-VLSI, was used to simulate internal photon emitters ranging from 10 keV to 4 MeV. SAF values were calculated and compared with previous data derived from stylized models of simplified geometries and with a model of a 7.5-month pregnant female developed previously from partial-body CT images. The results show considerable differences between these models for low energy photons, but generally good agreement at higher energies. These differences are caused mainly by different organ shapes and positions. Other factors, such as the organ mass, the source-to-target-organ centroid distance, and the Monte Carlo code used in each study, played lesser roles in the observed differences. Since the SAF values reported in this study are based on models that are anatomically more realistic than previous models, these data are recommended for future applications as standard reference values in internal dosimetry involving pregnant females.
NASA Astrophysics Data System (ADS)
Miller, G. L.; Lu, D.; Ye, M.; Curtis, G. P.; Mendes, B. S.; Draper, D.
2010-12-01
Parametric uncertainty in groundwater modeling is commonly assessed using the first-order second-moment method, which yields linear confidence/prediction intervals. More advanced techniques are able to produce nonlinear confidence/prediction intervals that are more accurate than the linear intervals for nonlinear models. However, both methods are restricted by certain assumptions, such as normality of the model parameters. We developed a Markov chain Monte Carlo (MCMC) method to directly investigate the parametric distributions and confidence/prediction intervals. The MCMC results are used to evaluate the accuracy of the linear and nonlinear confidence/prediction intervals. The MCMC method is applied to nonlinear surface complexation models developed by Kohler et al. (1996) to simulate reactive transport of uranium (VI). The breakthrough data of Kohler et al. (1996), obtained from a series of column experiments, are used as the basis of the investigation. The calibrated parameters of the models are the equilibrium constants of the surface complexation reactions and the fractions of functional groups. The Morris method sensitivity analysis shows that all of the parameters exhibit highly nonlinear effects on the simulation. The MCMC method is combined with a traditional optimization method to improve computational efficiency. The parameters of the surface complexation models are first calibrated using a global optimization technique, multi-start quasi-Newton BFGS, which employs an approximation to the Hessian. The parameter correlation is measured by the covariance matrix computed via the Fisher information matrix. Parameter ranges are necessary to improve convergence of the MCMC simulation, even when the adaptive Metropolis method is used. The MCMC results indicate that the parameters do not necessarily follow a normal distribution and that the nonlinear intervals are more accurate than the linear intervals for the nonlinear surface complexation models.
Moreover, the MCMC prediction intervals are more robust than the linear and nonlinear intervals for simulating breakthrough curves that were not used for parameter calibration and estimation of the parameter distributions.
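The optimize-then-sample workflow can be sketched on a one-parameter nonlinear model: a Metropolis chain explores the posterior around the calibrated value and yields an interval directly from the sampled distribution. The exponential-decay model, noise level and proposal width below are invented stand-ins for the surface-complexation transport model:

```python
import numpy as np

rng = np.random.default_rng(13)

# Synthetic "breakthrough" data from a nonlinear one-parameter model.
t = np.linspace(0.0, 4.0, 25)
K_TRUE, NOISE = 1.3, 0.02
y_obs = np.exp(-K_TRUE * t) + rng.normal(0.0, NOISE, t.size)

def log_post(k):
    """Gaussian likelihood, flat prior on k > 0."""
    if k <= 0:
        return -np.inf
    resid = y_obs - np.exp(-k * t)
    return -0.5 * np.sum(resid**2) / NOISE**2

k, chain = 1.0, []
for step in range(30_000):
    prop = k + rng.normal(0.0, 0.05)           # random-walk Metropolis
    if np.log(rng.random()) < log_post(prop) - log_post(k):
        k = prop
    if step >= 5_000:
        chain.append(k)

chain = np.array(chain)
lo, hi = np.percentile(chain, [2.5, 97.5])
print(f"k = {chain.mean():.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```

Because the interval is read off the sampled posterior, it needs no normality assumption, which is the advantage of MCMC over the linear and nonlinear interval methods discussed in the abstract.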
Monte Carlo Library Least Square (MCLLS) Method for Multiple Radioactive Particle Tracking in PBR
NASA Astrophysics Data System (ADS)
Wang, Zhijian; Lee, Kyoung; Gardner, Robin
2010-03-01
In this work, a new method for radioactive particle tracking is proposed. Accurate detector response functions (DRFs) for NaI detectors were generated as a library using MCNP5, with a significant speed-up factor of 200. This makes feasible the MCLLS method, which locates and tracks a radioactive particle in a modular Pebble Bed Reactor (PBR) by searching for minimum chi-square values. The method was tested and performed well under our laboratory conditions with an array of only six 2" X 2" NaI detectors. The method is introduced in both its forward and inverse forms. A single-radioactive-particle tracking system with three collimated 2" X 2" NaI detectors is used for benchmarking purposes.
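The inverse step, locating the particle by minimizing chi-square between measured counts and a library of precomputed responses, can be sketched in one dimension. The inverse-square "library" below is fabricated for illustration, not an MCNP5-derived detector response function:

```python
import numpy as np

rng = np.random.default_rng(21)

detectors = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])   # positions, cm
grid = np.linspace(0.0, 50.0, 501)                           # candidate positions

def expected_counts(pos):
    """Fabricated library: inverse-square response with a stand-off term."""
    return 1e4 / ((detectors - pos)**2 + 25.0)

true_pos = 23.7
measured = rng.poisson(expected_counts(true_pos)).astype(float)

# Chi-square against every library entry; the minimum locates the particle.
chi2 = [np.sum((measured - expected_counts(p))**2
               / np.maximum(expected_counts(p), 1.0)) for p in grid]
found = grid[int(np.argmin(chi2))]
print(f"particle located at {found:.1f} cm")
```

In the actual system the library holds full MCNP5-computed spectra per candidate position and the search runs continuously as the particle moves through the pebble bed.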
NASA Astrophysics Data System (ADS)
Querol, A.; Gallardo, S.; Ródenas, J.; Verdú, G.
2015-11-01
In environmental radioactivity measurements, High Purity Germanium (HPGe) detectors are commonly used due to their excellent resolution. Efficiency calibration of detectors is essential to determine the activity of radionuclides. The Monte Carlo method has proved to be a powerful tool to complement efficiency calculations. In aged detectors, efficiency is partially deteriorated because the dead layer thickens and, consequently, the active volume shrinks. The characterization of radiation transport in the dead layer is essential for a realistic HPGe simulation. In this work, the MCNP5 code is used to calculate the detector efficiency. The F4MESH tally is used to determine the photon and electron fluence in the dead layer and the active volume. The energy deposited in the Ge has been analyzed using the *F8 tally. The F8 tally is used to obtain spectra and to calculate the detector efficiency. When the photon fluence and the energy deposition in the crystal are known, unfolding methods can be used to estimate the activity of a given source. In this way, the efficiency is obtained and serves to verify the value obtained by other methods.
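The efficiency-to-activity chain described here is a simple ratio; a toy sketch (all tally numbers and bin edges invented) of how a full-energy-peak efficiency and a subsequent activity estimate follow from a simulated pulse-height spectrum:

```python
# Toy illustration of extracting a full-energy-peak efficiency from a
# simulated pulse-height (F8-style) spectrum; all values are invented.
n_source_photons = 1_000_000
# bin label (keV) -> simulated counts in that energy-deposition bin
spectrum = {660: 1200, 661: 45000, 662: 52000, 663: 1100}

def peak_efficiency(spectrum, peak_bins, n_source):
    """Full-energy-peak efficiency = counts in the peak bins / emitted photons."""
    return sum(spectrum[b] for b in peak_bins) / n_source

eff = peak_efficiency(spectrum, [661, 662], n_source_photons)

def activity(net_counts, live_time_s, emission_yield):
    # Standard gamma-spectrometry relation: A = N / (eff * t * yield)
    return net_counts / (eff * live_time_s * emission_yield)
```

The abstract's point is that an aged detector's thicker dead layer lowers `eff`, so the Monte Carlo model must be re-characterized before this relation gives correct activities.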
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo H.; Good, Brian; Noebe, Ronald D.; Honecy, Frank; Abel, Phillip
1999-01-01
Large-scale simulations of dynamic processes at the atomic level have developed into one of the main areas of work in computational materials science. Until recently, severe computational restrictions, as well as the lack of accurate methods for calculating the energetics, resulted in slower growth in the area than that required by current alloy design programs. The Computational Materials Group at the NASA Lewis Research Center is devoted to the development of powerful, accurate, economical tools to aid in alloy design. These include the BFS (Bozzolo, Ferrante, and Smith) method for alloys (ref. 1) and the development of dedicated software for large-scale simulations based on Monte Carlo-Metropolis numerical techniques, as well as state-of-the-art visualization methods. Our previous effort linking theoretical and computational modeling resulted in the successful prediction of the microstructure of a five-element intermetallic alloy, in excellent agreement with experimental results (refs. 2 and 3). This effort also produced a complete description of the role of alloying additions in intermetallic binary, ternary, and higher order alloys (ref. 4).
Novel phase-space Monte-Carlo method for quench dynamics in 1D and 2D spin models
NASA Astrophysics Data System (ADS)
Pikovski, Alexander; Schachenmayer, Johannes; Rey, Ana Maria
2015-05-01
An important outstanding problem is the efficient numerical computation of quench dynamics in large spin systems. We propose a semiclassical method to study many-body spin dynamics in generic spin lattice models. The method, named DTWA, is based on a novel type of discrete Monte Carlo sampling in phase space. We demonstrate the power of the technique by comparisons with analytical and numerically exact calculations. It is shown that DTWA captures the dynamics of one- and two-point correlations in 1D systems. We also use DTWA to study the dynamics of correlations in 2D systems with many spins and different types of long-range couplings, in regimes where other numerical methods are generally unreliable. Computing spatial and time-dependent correlations, we find a sharp change in the speed of propagation of correlations at a critical range of interactions determined by the system dimension. The investigations are relevant for a broad range of systems including solids, atom-photon systems and ultracold gases of polar molecules, trapped ions, Rydberg, and magnetic atoms. This work has been financially supported by JILA-NSF-PFC-1125844, NSF-PIF-1211914, ARO, AFOSR, AFOSR-MURI.
Gallardo, S; Rdenas, J; Verd, G; Querol, A
2009-01-01
Quality Control (QC) parameters for an X-ray tube, such as the Half Value Layer (HVL), the homogeneity factor, and the mean photon energy, can be obtained from the primary beam spectrum. A direct Monte Carlo (MC) simulation has been used to obtain this spectrum. Indirect spectrometry procedures such as Compton scattering have also been used experimentally, since direct spectrometry causes a pile-up effect in detectors. The Compton spectrometry has likewise been simulated with the MC method. In both cases, unfolding techniques must be applied to obtain the primary spectrum. Two unfolding methods (TSVD and Spectro-X) have been analyzed. Results are compared with each other and with reference values taken from the IPEM Report 78 catalogue. Direct MC simulation is a good approximation for obtaining the primary spectrum and hence the QC parameters. TSVD is a better unfolding method for the scattered spectrum than the Spectro-X code. An improvement of the methodology to obtain QC parameters is important in Biomedical Engineering (BME) applications due to the wide use of X-ray tubes. PMID:19964756
Evaluation of multiple-scattering influence on lidar measurement by an iterative Monte Carlo method
NASA Astrophysics Data System (ADS)
Wang, Xuan; Boselli, Antonella; Bruscaglioni, Piero; D'Avino, Loredana; Gambacorta, Antonia; Ismaelli, Andrea; Velotta, Raffaele; Zaccanti, Giovanni
2004-01-01
Multiple-scattering effects sometimes bias ground-based lidar measurements, in particular for dense aerosols and cirrus clouds. Both analytical and Monte Carlo methods are useful tools for studying this influence. However, the analytical solution requires some hypotheses, and Monte Carlo simulation is only a forward method. In this paper, an iterative method based on Monte Carlo simulation is introduced. Both the extinction and backscattering coefficients obtained by Raman lidar are corrected for the multiple-scattering influence. For a typical cirrus cloud, the multiple-scattering error on the extinction coefficient can be as large as 100%, whereas the influence on the backscattering coefficient is negligible. The lidar ratio is therefore also sensitive to the multiple-scattering effect.
A New Monte Carlo Filtering Method for the Diagnosis of Mission-Critical Failures
NASA Technical Reports Server (NTRS)
Gay, Gregory; Menzies, Tim; Davies, Misty; Gundy-Burlet, Karen
2009-01-01
Testing large-scale systems is expensive in terms of both time and money. Running simulations early in the process is a proven method of finding the design faults likely to lead to critical system failures, but determining the exact cause of those errors is still time-consuming and requires access to a limited number of domain experts. It is desirable to find an automated method that explores the large number of combinations and is able to isolate likely fault points. Treatment learning is a subset of minimal contrast-set learning that, rather than classifying data into distinct categories, focuses on finding the unique factors that lead to a particular classification. That is, it finds the smallest change to the data that causes the largest change in the class distribution. These treatments, when imposed, are able to identify the settings most likely to cause a mission-critical failure. This research benchmarks two treatment learning methods against standard optimization techniques across three complex systems, including two projects from the Robust Software Engineering (RSE) group within the National Aeronautics and Space Administration (NASA) Ames Research Center. It is shown that these treatment learners are both faster than traditional methods and show demonstrably better results.
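A minimal sketch of the treatment-learning idea: find the single attribute=value restriction that most shifts the class distribution toward failures. The data set, attribute names, and outcomes below are made up for illustration:

```python
# Toy dataset: each row is (settings dict, outcome class); values invented.
rows = [
    ({"thruster": "A", "mode": "auto"},   "ok"),
    ({"thruster": "A", "mode": "manual"}, "ok"),
    ({"thruster": "B", "mode": "auto"},   "fail"),
    ({"thruster": "B", "mode": "manual"}, "fail"),
    ({"thruster": "A", "mode": "auto"},   "ok"),
    ({"thruster": "B", "mode": "auto"},   "fail"),
]

def best_treatment(rows, target="fail"):
    """Return the single attribute=value restriction that most increases the
    frequency of `target` relative to the baseline class distribution."""
    baseline = sum(1 for _, c in rows if c == target) / len(rows)
    candidates = {(k, v) for settings, _ in rows for k, v in settings.items()}
    def lift(kv):
        k, v = kv
        subset = [c for settings, c in rows if settings.get(k) == v]
        return sum(1 for c in subset if c == target) / len(subset) - baseline
    return max(candidates, key=lift)

print(best_treatment(rows))  # → ('thruster', 'B')
```

Real treatment learners such as the ones benchmarked in the paper search conjunctions of restrictions and weight classes, but the "smallest change, largest class shift" objective is the same.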
NASA Astrophysics Data System (ADS)
Cepeda, Jose; Luna, Byron Quan; Nadim, Farrokh
2013-04-01
An essential component of a quantitative landslide hazard assessment is establishing the extent of the endangered area. This task requires accurate prediction of the run-out behaviour of a landslide, which includes the estimation of the run-out distance, run-out width, velocities, pressures, and depth of the moving mass, and the final configuration of the deposits. One approach to run-out modelling is to reproduce accurately the dynamics of the propagation processes. A number of dynamic numerical models are able to compute the movement of the flow over irregular topographic terrains (3-D) controlled by a complex interaction between mechanical properties that may vary in space and time. Given the number of unknown parameters and the fact that most of the rheological parameters cannot be measured in the laboratory or field, the parametrization of run-out models is very difficult in practice. For this reason, run-out models are mostly applied to back-analysis of past events, and very few studies have attempted forward predictions. Consequently, all models are based on simplified descriptions that attempt to reproduce the general features of the failed mass motion through the use of parameters (mostly controlling shear stresses at the base of the moving mass) which account for aspects not explicitly described or oversimplified. The uncertainties involved in the run-out process have to be approached in a stochastic manner. It is of significant importance to develop methods for quantifying and properly handling the uncertainties in dynamic run-out models, in order to allow a more comprehensive approach to quantitative risk assessment. A method was developed to compute the variation in run-out intensities by using a dynamic run-out model (MassMov2D) and a probabilistic framework based on a Monte Carlo simulation in order to analyze the effect of the uncertainty of the input parameters.
The probability density functions of the rheological parameters were generated and sampled, leading to a large number of run-out scenarios. In the application of the Monte Carlo method, random samples were generated from the input probability distributions, which were fitted with a Gaussian copula. Each set of samples was used as input to a model simulation, and the resulting outcome was a spatially displayed intensity map. These maps were created from the probability density functions at each point of the flow track and the deposition zone, yielding a confidence probability map for the various intensity measures. The goal of this methodology is that the results (in terms of intensity characteristics) can be linked directly to vulnerability curves associated with the elements at risk.
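The probabilistic framework can be sketched as a Monte Carlo loop over a sampled rheological parameter, accumulating a per-cell exceedance probability; the run-out function, parameter distribution, and threshold below are hypothetical stand-ins for MassMov2D and its inputs:

```python
import random

def runout_intensity(friction, x):
    """Hypothetical stand-in for a dynamic run-out model: flow depth (m) at
    distance x (m) along the track, decaying faster for higher basal friction."""
    return max(0.0, 2.0 * (1.0 - friction * x / 100.0))

def exceedance_map(n_runs=5000, threshold=1.0, seed=0):
    """Fraction of Monte Carlo runs in which each cell's intensity exceeds
    the threshold -- a 1-D analogue of the confidence probability map."""
    random.seed(seed)
    cells = range(0, 100, 10)
    hits = {x: 0 for x in cells}
    for _ in range(n_runs):
        friction = random.gauss(0.8, 0.1)   # sampled rheological parameter
        for x in cells:
            if runout_intensity(friction, x) > threshold:
                hits[x] += 1
    return {x: hits[x] / n_runs for x in cells}

prob = exceedance_map()
```

Per-cell exceedance probabilities like these are exactly what can be intersected with vulnerability curves for the elements at risk.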
Turner, Adam C.; Zhang Di; Kim, Hyun J.; DeMarco, John J.; Cagnon, Chris H.; Angel, Erin; Cody, Dianna D.; Stevens, Donna M.; Primak, Andrew N.; McCollough, Cynthia H.; McNitt-Gray, Michael F.
2009-06-15
The purpose of this study was to present a method for generating x-ray source models for performing Monte Carlo (MC) radiation dosimetry simulations of multidetector row CT (MDCT) scanners. These so-called ''equivalent'' source models consist of an energy spectrum and filtration description that are generated based wholly on the measured values and can be used in place of proprietary manufacturer's data for scanner-specific MDCT MC simulations. Required measurements include the half value layers (HVL{sub 1} and HVL{sub 2}) and the bowtie profile (exposure values across the fan beam) for the MDCT scanner of interest. Using these measured values, a method was described (a) to numerically construct a spectrum with the calculated HVLs approximately equal to those measured (equivalent spectrum) and then (b) to determine a filtration scheme (equivalent filter) that attenuates the equivalent spectrum in a similar fashion as the actual filtration attenuates the actual x-ray beam, as measured by the bowtie profile measurements. Using this method, two types of equivalent source models were generated: One using a spectrum based on both HVL{sub 1} and HVL{sub 2} measurements and its corresponding filtration scheme and the second consisting of a spectrum based only on the measured HVL{sub 1} and its corresponding filtration scheme. Finally, a third type of source model was built based on the spectrum and filtration data provided by the scanner's manufacturer. MC simulations using each of these three source model types were evaluated by comparing the accuracy of multiple CT dose index (CTDI) simulations to measured CTDI values for 64-slice scanners from the four major MDCT manufacturers. Comprehensive evaluations were carried out for each scanner using each kVp and bowtie filter combination available. CTDI experiments were performed for both head (16 cm in diameter) and body (32 cm in diameter) CTDI phantoms using both central and peripheral measurement positions. 
Both equivalent source model types result in simulations with an average root mean square (RMS) error between the measured and simulated values of approximately 5% across all scanner and bowtie filter combinations, all kVps, both phantom sizes, and both measurement positions, while data provided from the manufacturers gave an average RMS error of approximately 12% pooled across all conditions. While there was no statistically significant difference between the two types of equivalent source models, both of these model types were shown to be statistically significantly different from the source model based on manufacturer's data. These results demonstrate that an equivalent source model based only on measured values can be used in place of manufacturer's data for Monte Carlo simulations for MDCT dosimetry.
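The "equivalent spectrum" step hinges on computing the HVL of a candidate spectrum and comparing it with the measured one. A sketch with an invented two-line spectrum and rough aluminium attenuation coefficients (both are assumptions for illustration, not scanner data):

```python
import math

# Toy two-line spectrum: photon energy (keV) -> relative fluence (invented)
spectrum = {60.0: 0.4, 100.0: 0.6}
# Illustrative linear attenuation coefficients of aluminium (1/mm); rough values
mu_al = {60.0: 0.075, 100.0: 0.046}

def transmission(spectrum, t_mm):
    """Fluence-weighted transmission through t_mm of aluminium (simplified:
    the energy-dependent exposure conversion is ignored here)."""
    total = sum(spectrum.values())
    return sum(w * math.exp(-mu_al[e] * t_mm) for e, w in spectrum.items()) / total

def hvl(spectrum, lo=0.0, hi=50.0):
    """Bisect for the thickness that halves the transmitted beam."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if transmission(spectrum, mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t = hvl(spectrum)
```

An equivalent-spectrum search then adjusts the candidate spectrum until `hvl(spectrum)` matches the measured HVL{sub 1} (and, for the two-HVL model, HVL{sub 2} of the hardened beam as well).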
Çatli, Serap
2015-01-01
The high atomic number and density of dental implants cause major problems in the radiotherapy of head and neck tumors: implant artifacts hamper accurate dose calculation and the contouring of tumors and organs. The limits and deficiencies of the algorithms used in treatment planning systems can lead to large errors in dose calculation, and this may adversely affect the patient's treatment. In the present study, four commercial dental implants were used: pure titanium, titanium alloy (Ti-6Al-4V), amalgam, and crown. The effects of dental implants on dose distribution were determined with two methods: the pencil beam convolution (PBC) algorithm and a Monte Carlo code, for a 6 MV photon beam. The central axis depth doses were calculated on the phantom for a source-skin distance (SSD) of 100 cm and a 10 × 10 cm2 field using both algorithms. The results of the Monte Carlo method and the Eclipse TPS were compared to each other and to those previously reported. In the present study, dose increases in tissue at a distance of 2 mm in front of the dental implants were seen at 6 MV with the Monte Carlo method, due to the backscatter of electrons. The Eclipse treatment planning system (TPS) could not precisely account for the backscatter radiation caused by the dental prostheses: the TPS underestimated the backscatter dose and overestimated the dose beyond the dental implants. The large errors found for the TPS in this study are due to the limits and deficiencies of its algorithms. The accuracy of the PBC algorithm of the Eclipse TPS was evaluated in comparison to Monte Carlo calculations in consideration of the recommendations of the American Association of Physicists in Medicine Radiation Therapy Committee Task Group 65. From the comparisons of the TPS and Monte Carlo calculations, it is verified that Monte Carlo simulation is a good approach to derive the dose distribution in heterogeneous media. PMID:26699323
A voxel-based mouse for internal dose calculations using Monte Carlo simulations (MCNP)
NASA Astrophysics Data System (ADS)
Bitar, A.; Lisbona, A.; Thedrez, P.; Sai Maurel, C.; LeForestier, D.; Barbet, J.; Bardies, M.
2007-02-01
Murine models are useful for targeted radiotherapy pre-clinical experiments. These models can help to assess the potential interest of new radiopharmaceuticals. In this study, we developed a voxel-based mouse for dosimetric estimates. A female nude mouse (30 g) was frozen and cut into slices. High-resolution digital photographs were taken directly on the frozen block after each section. Images were segmented manually. Monoenergetic photon or electron sources were simulated using the MCNP4c2 Monte Carlo code for each source organ, in order to give tables of S-factors (in Gy Bq-1 s-1) for all target organs. Results obtained from monoenergetic particles were then used to generate S-factors for several radionuclides of potential interest in targeted radiotherapy. Thirteen source and 25 target regions were considered in this study. For each source region, 16 photon and 16 electron energies were simulated. Absorbed fractions, specific absorbed fractions and S-factors were calculated for 16 radionuclides of interest for targeted radiotherapy. The results obtained generally agree well with data published previously. For electron energies ranging from 0.1 to 2.5 MeV, the self-absorbed fraction varies from 0.98 to 0.376 for the liver, and from 0.89 to 0.04 for the thyroid. Electrons cannot be considered as 'non-penetrating' radiation for energies above 0.5 MeV for mouse organs. This observation can be generalized to radionuclides: for example, the beta self-absorbed fraction for the thyroid was 0.616 for I-131; absorbed fractions for Y-90 for left kidney-to-left kidney and for left kidney-to-spleen were 0.486 and 0.058, respectively. Our voxel-based mouse allowed us to generate a dosimetric database for use in preclinical targeted radiotherapy experiments.
NASA Astrophysics Data System (ADS)
Gubernatis, J. E.
2003-11-01
In a previous article [J. Chem. Phys. 21, 1087 (1953)] a prescription was given for moving from point to point in the configuration space of a system in such a way that averaging over many moves is equivalent to a canonical averaging over configuration space. The prescription is suitable for electronic machine calculations and provides the basis for calculations described elsewhere. The purpose of this paper is to provide a more rigorous proof of the method.
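The prescription itself fits in a few lines. A sketch for a single particle in a harmonic potential (a toy system chosen so that the canonical average <x^2> equals 1/beta, which makes the result easy to check):

```python
import math
import random

def energy(x):
    # Toy potential: a single particle in a harmonic well, E = x^2 / 2
    return 0.5 * x * x

def metropolis_average(n_steps=200_000, beta=1.0, delta=1.0, seed=2):
    """Metropolis prescription: propose a random displacement and accept it
    with probability min(1, exp(-beta * dE)); averaging over the visited
    configurations is then equivalent to a canonical average."""
    random.seed(seed)
    x, e = 0.0, energy(0.0)
    sum_x2 = 0.0
    for _ in range(n_steps):
        x_new = x + random.uniform(-delta, delta)
        e_new = energy(x_new)
        if e_new <= e or random.random() < math.exp(-beta * (e_new - e)):
            x, e = x_new, e_new        # accept the move
        sum_x2 += x * x                # rejected moves repeat the old state
    return sum_x2 / n_steps

avg_x2 = metropolis_average()          # canonical <x^2> = 1/beta here
```

Note that rejected moves must count the old configuration again; dropping them would break the detailed-balance argument that the rigorous proof formalizes.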
NASA Astrophysics Data System (ADS)
Schirmer, M.; Ghasemizade, M.; Radny, D.
2014-12-01
Many different methods and approaches have been suggested for the simulation of preferential flows. However, most of these methods have been tested at the lab scale, where boundary conditions and material properties are known and under control. The focus of this study is to compare two different approaches for simulating preferential flows in a weighing lysimeter, where the scale of simulation is closer to the field scale than simulations done in labs. To do so, we applied dual permeability and spatially distributed heterogeneity as two competing approaches for simulating slow and rapid flow out of a lysimeter. While the dual permeability approach assumes that there is a structure among soil aggregates and that it can be captured as a fraction of the porosity, the other method attributes the existence of preferential flows to heterogeneity distributed within the domain. The two aforementioned approaches were used to simulate daily recharge values of a lysimeter. The analysis included a calibration phase, which ran from March 2012 until March 2013, and a validation phase which lasted a year following the calibration period. The simulations were performed with the numerical, 3-D, physically based model HydroGeoSphere. The nonlinear uncertainty analysis of the results indicates that the two approaches are comparable.
Accelerated kinetics of amorphous silicon using an on-the-fly off-lattice kinetic Monte-Carlo method
NASA Astrophysics Data System (ADS)
Joly, Jean-Francois; El-Mellouhi, Fedwa; Beland, Laurent Karim; Mousseau, Normand
2011-03-01
The time evolution of a series of well-relaxed amorphous silicon models was simulated using the kinetic Activation-Relaxation Technique (kART), an on-the-fly, off-lattice kinetic Monte Carlo method. This novel algorithm uses the ART nouveau algorithm to generate activated events and links them with local topologies. It was shown to work well for crystals with few defects, but this is the first time it has been used to study an amorphous material. A parallel implementation allows us to increase the speed of the event-generation phase. After each KMC step, new searches are initiated for each new topology encountered. Well-relaxed amorphous silicon models of 1000 atoms, described by a modified version of the empirical Stillinger-Weber potential, were used as a starting point for the simulations. Initial results show that the method is faster by orders of magnitude compared to conventional MD simulations up to temperatures of 500 K. Vacancy-type defects were also introduced in this system, and their stability and lifetimes were calculated.
Shin, Younghoon; Kwon, Hyuk-Sang
2016-03-21
We propose a Monte Carlo (MC) method based on a direct photon flux recording strategy using an inhomogeneous, meshed rodent brain atlas. This MC method was inspired by and dedicated to fibre-optics-based optogenetic neural stimulations, thus providing an accurate and direct solution for light intensity distributions in brain regions with different optical properties. Our model was used to estimate the 3D light intensity attenuation in the close proximity between an implanted optical fibre source and a neural target area for typical optogenetics applications. Interestingly, there are discrepancies with studies using a diffusion-based light intensity prediction model, perhaps due to the use of improper light scattering models developed for far-field problems. Our solution was validated by comparison with the gold-standard MC model, and it enabled accurate calculations of internal intensity distributions in an inhomogeneous domain near the light source. Thus our strategy can be applied to studying how illuminated light spreads through an inhomogeneous brain area, or to determining the amount of light required for optogenetic manipulation of a specific neural target area. PMID:26914289
Coppens, Joris E; Franssen, Luuk; van den Berg, Thomas J T P
2006-01-01
Recently the psychophysical compensation comparison method was developed for routine measurement of retinal stray light. The subject's responses to a series of two-alternative-forced-choice trials are analyzed using a maximum-likelihood (ML) approach assuming some fixed shape for the psychometric function (PF). This study evaluates the reliability of the method using Monte-Carlo simulations. Various sampling strategies were investigated, including the two-phase sampling strategy that is used in a commercially available instrument. Results are given for the effective dynamic range and measurement accuracy. The effect of a mismatch of the shape of the PF of an observer and the fixed shape used in the ML analysis was analyzed. Main outcomes are that the two-phase sampling scheme gives good precision (Standard deviation = 0.07 logarithmic units on average) for estimation of the stray light value. Bias is virtually zero. Furthermore, a reliability index was derived from the responses and found to be effective. PMID:17092159
NASA Astrophysics Data System (ADS)
Harvey, J.-P.; Gheribi, A. E.; Chartrand, P.
2011-08-01
The design of multicomponent alloys used in different applications based on specific thermo-physical properties determined experimentally or predicted from theoretical calculations is of major importance in many engineering applications. A procedure based on Monte Carlo simulations (MCS) and the thermodynamic integration (TI) method to improve the quality of the predicted thermodynamic properties calculated from classical thermodynamic calculations is presented in this study. The Gibbs energy function of the liquid phase of the Cu-Zr system at 1800 K has been determined based on this approach. The internal structure of Cu-Zr melts and amorphous alloys at different temperatures, as well as other physical properties were also obtained from MCS in which the phase trajectory was modeled by the modified embedded atom model formalism. A rigorous comparison between available experimental data and simulated thermo-physical properties obtained from our MCS is presented in this work. The modified quasichemical model in the pair approximation was parameterized using the internal structure data obtained from our MCS and the precise Gibbs energy function calculated at 1800 K from the TI method. The predicted activity of copper in Cu-Zr melts at 1499 K obtained from our thermodynamic optimization was corroborated by experimental data found in the literature. The validity of the amplitude of the entropy of mixing obtained from the in silico procedure presented in this work was analyzed based on the thermodynamic description of hard sphere mixtures.
NASA Astrophysics Data System (ADS)
Kholodtsova, Maria N.; Loschenov, Victor B.; Daul, Christian; Blondel, Walter
2014-05-01
Determining the optical properties of biological tissues in vivo from spectral intensity measurements performed at their surface is still a challenge. Based on the spectroscopic data acquired, the aim is to solve an inverse problem in which the optical parameter values of a forward model are estimated through an optimization procedure on some cost function. In many cases it is an ill-posed problem because of the small number of measurements, errors on the experimental data, and the nature of the forward model's output data, which may be affected by statistical noise in the case of Monte Carlo (MC) simulation or by approximated values for short inter-fibre distances (for the Diffusion Equation Approximation, DEA). In the case of optical biopsy, spatially resolved diffuse reflectance spectroscopy is a simple technique that uses various excitation-to-emission fibre distances to probe tissue at depth. The aim of the present contribution is to study the characteristics of some classically used cost functions and optimization methods (the Levenberg-Marquardt algorithm) and how the global minimum is reached when using MC and/or DEA approaches. Several smoothing filters and fitting methods were tested on the reflectance curves, I(r), gathered from MC simulations. It was found that smoothing the initial data with a locally weighted second-degree polynomial regression and then fitting the data with a double exponential decay function decreases the probability that the inverse algorithm converges to local minima close to the initial first-guess point.
NASA Astrophysics Data System (ADS)
Yeh, C. Y.; Lee, C. C.; Chao, T. C.; Lin, M. H.; Lai, P. A.; Liu, F. H.; Tung, C. J.
2014-02-01
This study aims to utilize a measurement-based Monte Carlo (MBMC) method to evaluate the accuracy of dose distributions calculated using the Eclipse radiotherapy treatment planning system (TPS) based on the anisotropic analytical algorithm. Dose distributions were calculated for the nasopharyngeal carcinoma (NPC) patients treated with the intensity modulated radiotherapy (IMRT). Ten NPC IMRT plans were evaluated by comparing their dose distributions with those obtained from the in-house MBMC programs for the same CT images and beam geometry. To reconstruct the fluence distribution of the IMRT field, an efficiency map was obtained by dividing the energy fluence of the intensity modulated field by that of the open field, both acquired from an aS1000 electronic portal imaging device. The integrated image of the non-gated mode was used to acquire the full dose distribution delivered during the IMRT treatment. This efficiency map redistributed the particle weightings of the open field phase-space file for IMRT applications. Dose differences were observed in the tumor and air cavity boundary. The mean difference between MBMC and TPS in terms of the planning target volume coverage was 0.6% (range: 0.0-2.3%). The mean difference for the conformity index was 0.01 (range: 0.0-0.01). In conclusion, the MBMC method serves as an independent IMRT dose verification tool in a clinical setting.
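The efficiency-map reweighting described above can be sketched with toy portal-image values (the 4x4 grid and fluence numbers are invented):

```python
# Toy 4x4 portal-image energy-fluence values (invented) for the intensity-
# modulated field and the open field at the same detector pixels.
imrt_fluence = [[2.0, 4.0, 4.0, 2.0],
                [2.0, 8.0, 8.0, 2.0],
                [2.0, 8.0, 8.0, 2.0],
                [2.0, 4.0, 4.0, 2.0]]
open_fluence = [[8.0] * 4 for _ in range(4)]

# Efficiency map: pixel-by-pixel ratio of modulated to open-field fluence
efficiency = [[m / o for m, o in zip(mrow, orow)]
              for mrow, orow in zip(imrt_fluence, open_fluence)]

def reweight(particle_weight, i, j):
    """An open-field phase-space particle crossing pixel (i, j) has its
    statistical weight rescaled to reproduce the modulated fluence."""
    return particle_weight * efficiency[i][j]
```

Because only particle weights change, a single open-field phase-space file can serve every IMRT segment, which is what makes the measurement-based approach practical.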
NASA Astrophysics Data System (ADS)
Shahrabi, Mohammad; Tavakoli-Anbaran, Hossien
2015-02-01
Calculation of dosimetry parameters with the TG-60 approach for beta sources and the TG-43 approach for gamma sources can help in the design of brachytherapy sources. In this work, TG-60 dosimetry parameters are calculated for the Sm-153 brachytherapy seed using the Monte Carlo simulation approach. The continuous beta spectrum of Sm-153 and its probability density are applied to simulate the Sm-153 source. Sm-153 is produced by neutron capture through the 152Sm(n,γ)153Sm reaction in reactors. The Sm-153 radionuclide decays by beta emission followed by gamma-ray emissions, with a half-life of 1.928 days. The Sm-153 source is simulated in a spherical water phantom to calculate the deposited energy and geometry function at the intended points. The Sm-153 seed consists of 20% samarium, 30% calcium, and 50% silicon, in cylindrical shape with a density of 1.76 g/cm^3. The anisotropy function and radial dose function were calculated at 0-4 mm radial distances relative to the seed center and polar angles of 0-90 degrees. The results of this research are compared with the results of Taghdiri et al. (Iran. J. Radiat. Res. 9, 103 (2011)), in which the full beta spectrum of Sm-153 was not considered. Results show significant relative differences, even up to a factor of 5, for the anisotropy functions at 0.6, 1, and 2 mm distances and some angles. The MCNP4C Monte Carlo code is applied both in the present paper and in the above-mentioned one.
Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes
Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.
2002-09-11
The calculations presented compare the performance of three Monte Carlo codes, PENELOPE-1999, MCNP-4C, and PITS, for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall model of a cell is a water cylinder equivalent for the three codes, but with a different internal scoring geometry: hollow cylinders for PENELOPE and MCNP, whereas spheres are used for the PITS code. A cylindrical cell geometry with scoring volumes in the shape of hollow cylinders was initially selected for PENELOPE and MCNP because it better reproduces the actual shape and dimensions of a cell and for its improved computer-time efficiency compared to spherical internal volumes. Some of the transfer points and energy transfers that constitute a radiation track may actually fall in the space between spheres, which would be outside the spherical scoring volumes. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time when using this code compared with event-by-event Monte Carlo codes like PITS. This preliminary work has been important for addressing dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation, even with such small geometries and low energies involved, which are far below the normal use for which the code was created. Further work (initiated in Summer 2002) is still needed, however, to create a user-code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes and also microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.
Monte Carlo-assisted voxel source kernel method (MAVSK) for internal beta dosimetry.
Liu, A; Williams, L E; Wong, J Y; Raubitschek, A A
1998-05-01
A method is described for the determination of patient-specific organ beta doses given a known cumulated internal radioactivity distribution. A voxel source kernel for 90Y analogous to the point source function was simulated. Dose to each organ of interest could then be estimated by convolving the voxel source kernel with the patient's 3-D volume with known radioactivity assigned to each voxel. The dose calculation on eight organs took less than 1 min per patient using a Sun Sparc10 workstation. PMID:9639305
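The convolution step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the function name, array layout, and units in the comments are our assumptions, and the FFT-based "same-size" convolution stands in for whatever scheme the original MAVSK code used.

```python
import numpy as np

def convolve_voxel_kernel(activity, kernel):
    """Convolve a cumulated-activity map (e.g. Bq·s per voxel) with a
    dose-per-decay voxel source kernel (e.g. Gy per Bq·s), both 3-D
    arrays. Zero-padded FFT convolution, cropped back to the activity
    grid so each output voxel holds its estimated absorbed dose."""
    shape = [a + k - 1 for a, k in zip(activity.shape, kernel.shape)]
    A = np.fft.rfftn(activity, shape)
    K = np.fft.rfftn(kernel, shape)
    full = np.fft.irfftn(A * K, shape)
    # crop the "same"-size result centred on the activity grid
    start = [(k - 1) // 2 for k in kernel.shape]
    sl = tuple(slice(s, s + n) for s, n in zip(start, activity.shape))
    return full[sl]
```

Organ doses then follow by summing the output over the voxels belonging to each organ's segmentation mask.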
NASA Astrophysics Data System (ADS)
Sabouri, P.; Bidaud, A.; Dabiran, S.; Lecarpentier, D.; Ferragut, F.
2014-04-01
The development of tools for nuclear data uncertainty propagation in lattice calculations is presented. The Total Monte Carlo method and the Generalized Perturbation Theory method are used with the code DRAGON to allow propagation of nuclear data uncertainties in transport calculations. Both methods begin the propagation of uncertainties at the most elementary level of the transport calculation: the Evaluated Nuclear Data File. The developed tools are applied to provide estimates of response uncertainties of a PWR cell as a function of burnup.
Finch, W. Holmes; Bolin, Jocelyn H.; Kelley, Ken
2014-01-01
Classification using standard statistical methods such as linear discriminant analysis (LDA) or logistic regression (LR) presumes knowledge of group membership prior to the development of an algorithm for prediction. However, in many real-world applications, members of the same nominal group might in fact come from different subpopulations on the underlying construct. For example, individuals diagnosed with depression will not all have the same levels of this disorder, though for the purposes of LDA or LR they will be treated in the same manner. The goal of this simulation study was to examine the performance of several methods for group classification in the case where within-group membership was not homogeneous. For example, suppose there are 3 known groups but, within each group, two unknown classes. Several approaches were compared, including LDA, LR, classification and regression trees (CART), generalized additive models (GAM), and mixture discriminant analysis (MIXDA). Results of the study indicated that CART and mixture discriminant analysis were the most effective tools for situations in which known groups were not homogeneous, whereas LDA, LR, and GAM had the highest rates of misclassification. Implications of these results for theory and practice are discussed. PMID:24904445
Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B; Jia, Xun
2015-05-01
Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 to 3 HU and from 78 to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced.
With all the techniques employed, we achieved computation time of less than 30 s including the time for both the scatter estimation and CBCT reconstruction steps. The efficacy of our method and its high computational efficiency make our method attractive for clinical use. PMID:25860299
NASA Astrophysics Data System (ADS)
Liu, Qinming; Dong, Ming; Peng, Ying
2012-10-01
Health prognosis of equipment is considered a key process of the condition-based maintenance strategy. It contributes to reducing the related risks and maintenance costs of equipment and to improving its availability, reliability and security. However, equipment often operates under dynamic operational and environmental conditions, and its lifetime is generally described by monitored nonlinear time-series data. Equipment is subject to high levels of uncertainty and unpredictability, so effective methods for its online health prognosis are still needed. This paper addresses prognostic methods based on a hidden semi-Markov model (HSMM) using the sequential Monte Carlo (SMC) method. The HSMM is applied to obtain the transition probabilities among health states and the state durations. The SMC method is adopted to describe the probability relationships between health states and the monitored observations of equipment. This paper proposes a novel multi-step-ahead health recognition algorithm based on joint probability distribution to recognize the health states of equipment and its health state change point. A new online health prognostic method is also developed to estimate the residual useful lifetime (RUL) of equipment. At the end of the paper, a real case study is used to demonstrate the performance and potential applications of the proposed methods for online health prognosis of equipment.
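The sequential Monte Carlo machinery used in the entry above can be illustrated with a minimal bootstrap particle filter. This is a generic sketch under our own assumptions (a Gaussian random-walk latent state rather than the paper's HSMM; all parameters are made up), showing the propagate-weight-resample loop that SMC methods share:

```python
import math
import random

def bootstrap_filter(observations, n_particles=500, trans_sd=0.5,
                     obs_sd=1.0, seed=11):
    """Minimal bootstrap particle filter: latent state follows a
    Gaussian random walk, observations are the state plus Gaussian
    noise. Returns the filtered mean of the hidden state at each step."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in observations:
        # propagate every particle through the transition model
        parts = [p + rng.gauss(0.0, trans_sd) for p in parts]
        # weight by the (unnormalised Gaussian) observation likelihood
        w = [math.exp(-((y - p) ** 2) / (2 * obs_sd ** 2)) for p in parts]
        total = sum(w) or 1.0
        w = [x / total for x in w]
        # multinomial resampling to focus particles on likely states
        parts = rng.choices(parts, weights=w, k=n_particles)
        means.append(sum(parts) / n_particles)
    return means
```

In a prognostic setting the same loop would run over condition-monitoring observations, with the particles carrying health-state hypotheses instead of a scalar random walk.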
NASA Astrophysics Data System (ADS)
Zhai, Peng-Wang; Kattawar, George W.; Yang, Ping
2008-03-01
We have developed a powerful 3D Monte Carlo code, as part of the Radiance in a Dynamic Ocean (RaDyO) project, which can compute the complete effective Mueller matrix at any detector position in a completely inhomogeneous turbid medium, in particular, a coupled atmosphere-ocean system. The light source can be either passive or active. If the light source is a beam of light, the effective Mueller matrix can be viewed as the complete impulse response Green matrix for the turbid medium. The impulse response Green matrix gives us an insightful way to see how each region of a turbid medium affects every other region. The present code is validated with the multicomponent approach for a plane-parallel system and the spherical harmonic discrete ordinate method for the 3D scalar radiative transfer system. Furthermore, the impulse response relation for a box-type cloud model is studied. This 3D Monte Carlo code will be used to generate impulse response Green matrices for the atmosphere and ocean, which act as inputs to a hybrid matrix operator-Monte Carlo method. The hybrid matrix operator-Monte Carlo method will be presented in part II of this paper.
Song, Sangha; Elgezua, Inko; Kobayashi, Yo; Fujie, Masakatsu G
2013-01-01
In biomedical optics, Monte Carlo (MC) simulation is commonly used to model light diffusion in tissue. However, most previous studies did not consider a radial-beam LED as the light source. We therefore characterized a radial-beam LED and applied those characteristics as the light source in MC simulation. In this paper, we consider three characteristics of a radial-beam LED. The first is the initial launch area of photons. The second is the incident angle of a photon at the initial photon-launching area. The third is the refraction effect according to the contact area between the LED and a turbid medium. For verification of the MC simulation, we compared simulation and experimental results. The average correlation coefficient between simulation and experimental results is 0.9954. Through this study, we show an effective method to simulate light diffusion in tissue with the characteristics of a radial-beam LED based on MC simulation. PMID:24109615
Percolation of the site random-cluster model by Monte Carlo method.
Wang, Songsong; Zhang, Wanzhou; Ding, Chengxiang
2015-08-01
We propose a site random-cluster model by introducing an additional cluster weight in the partition function of traditional site percolation. To simulate the model on a square lattice, we combine the color-assignation and the Swendsen-Wang methods to design a highly efficient cluster algorithm with a small critical slowing-down phenomenon. To verify whether or not it is consistent with the bond random-cluster model, we measure several quantities, such as the wrapping probability Re, the percolating cluster density P∞, and the magnetic susceptibility per site χp, as well as two exponents, the thermal exponent yt and the fractal dimension yh of the percolating cluster. We find that for different values of the cluster weight q = 1.5, 2, 2.5, 3, 3.5, and 4, the numerical estimates of the exponents yt and yh are consistent with the theoretical values. The universalities of the site random-cluster model and the bond random-cluster model are completely identical. For larger values of q, we find obvious signatures of a first-order percolation transition in the histograms and the hysteresis loops of the percolating cluster density and the energy per site. Our results are helpful for the understanding of percolation in traditional statistical models. PMID:26382364
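The q = 1 limit of the model above reduces to ordinary site percolation, which is simple enough to sketch. The following is our own illustrative estimator of the spanning probability via union-find, not the paper's cluster-weighted Swendsen-Wang algorithm:

```python
import random

def percolates(L, p, rng):
    """One realisation of ordinary site percolation (the q = 1 limit of
    the site random-cluster model) on an L x L square lattice: occupy
    each site with probability p and test top-to-bottom spanning."""
    occ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    parent = list(range(L * L + 2))        # union-find with 2 virtual nodes
    TOP, BOT = L * L, L * L + 1

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for y in range(L):
        for x in range(L):
            if not occ[y][x]:
                continue
            i = y * L + x
            if y == 0: union(i, TOP)       # occupied top-row sites
            if y == L - 1: union(i, BOT)   # occupied bottom-row sites
            if x > 0 and occ[y][x - 1]: union(i, i - 1)
            if y > 0 and occ[y - 1][x]: union(i, i - L)
    return find(TOP) == find(BOT)

def spanning_probability(L, p, trials=200, seed=1):
    rng = random.Random(seed)
    return sum(percolates(L, p, rng) for _ in range(trials)) / trials
```

For the square lattice the site-percolation threshold is near p ≈ 0.593, so the spanning probability jumps from near 0 to near 1 across that value as L grows.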
Wagner, F; Hart, R; Fink, R; Classen, M
1990-02-01
Using Sellers TT algorithm, primary structure repeats have been described for interferon (IFN)-alpha, -beta 1, and gamma. To reevaluate these results and to extend them to IFN-beta 2 (interleukin-6), a modified algorithm was developed that uses a metric to define the "best" partial homology of two peptide sequences and to compare it to those detected in random permutations of the peptide. Using this approach, the known structural homologies of IFN-alpha with IFN-beta 1 and of human (Hu) IFN-gamma with murine (Mu) IFN-gamma were identified correctly. However, the primary structure repeats in the amino acid sequences of IFN-alpha, -beta 1, and -gamma turned out to be no better than those detectable in random permutations of these sequences. These results were confirmed using a different, nonlinear metric. A previously used approach to demonstrate significance was shown to produce false-positive results. No significant primary structure homologies were detected among IFN-beta 1, -beta 2, and -gamma. In contrast to the amino acid sequence analysis, the DNA sequence of HuIFN-beta 1 contained a significant repeat that had no significant counterpart in MuIFN-beta or in IFN-alpha. In conclusion, some previously reported results obtained with Sellers TT algorithm on amino acid sequences are easily explained as random similarities, and it is therefore strongly recommended that a method like ours should be used to control significance. PMID:1691767
MO-E-18C-02: Hands-On Monte Carlo Project Assignment as a Method to Teach Radiation Physics
Pater, P; Vallieres, M; Seuntjens, J
2014-06-15
Purpose: To present a hands-on project on Monte Carlo methods (MC) recently added to the curriculum and to discuss the students' appreciation. Methods: Since 2012, a 1.5 hour lecture dedicated to MC fundamentals follows the detailed presentation of photon and electron interactions. Students also program all sampling steps (interaction length and type, scattering angle, energy deposit) of a MC photon transport code. A handout structured in a step-by-step fashion guides students in conducting consistency checks. For extra points, students can code a fully working MC simulation that simulates a dose distribution for 50 keV photons. A kerma approximation to dose deposition is assumed. A survey was conducted to which 10 out of the 14 attending students responded. It compared MC knowledge prior to and after the project, questioned the usefulness of radiation physics teaching through MC and surveyed possible project improvements. Results: According to the survey, 76% of students had no or a basic knowledge of MC methods before the class and 65% estimate to have a good to very good understanding of MC methods after attending the class. 80% of students feel that the MC project helped them significantly to understand simulations of dose distributions. On average, students dedicated 12.5 hours to the project and appreciated the balance between hand-holding and questions/implications. Conclusion: A lecture on MC methods with a hands-on MC programming project requiring about 14 hours was added to the graduate study curriculum in 2012. MC methods produce “gold standard” dose distributions and are slowly entering routine clinical work, so a fundamental understanding of MC methods should be a requirement for future students. Overall, the lecture and project helped students relate cross-sections to dose depositions and presented the numerical sampling methods behind the simulation of these dose distributions. Research funding from governments of Canada and Quebec.
PP acknowledges partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290)
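The kind of student project described above can be sketched in miniature. This is our own toy version, not the course handout: a pencil beam into a water slab, exponential sampling of the first interaction depth, and full local energy deposit at that site (the kerma approximation mentioned in the abstract). The attenuation coefficient is an arbitrary placeholder, not a tabulated cross-section.

```python
import math
import random

def simulate_photon_depth_dose(n_photons=20000, mu=0.2, slab_cm=20.0,
                               bins=20, seed=42):
    """Toy MC photon transport: sample each photon's free path from an
    exponential distribution with total attenuation coefficient mu
    (1/cm, made-up value), and deposit unit energy locally at the first
    interaction (kerma approximation). Returns a normalised depth-dose
    histogram over the slab."""
    rng = random.Random(seed)
    dose = [0.0] * bins
    width = slab_cm / bins
    for _ in range(n_photons):
        # 1 - random() lies in (0, 1], so the log is always defined
        depth = -math.log(1.0 - rng.random()) / mu
        if depth < slab_cm:
            dose[int(depth / width)] += 1.0
    total = sum(dose) or 1.0
    return [d / total for d in dose]
```

A real student project would add sampling of the interaction type and scattering angle from cross-section data; here the histogram simply reproduces exponential attenuation, which is the first consistency check one would run.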
NASA Astrophysics Data System (ADS)
Ardila, L. A. Peña; Giorgini, S.
2015-09-01
We investigate the properties of an impurity immersed in a dilute Bose gas at zero temperature using quantum Monte Carlo methods. The interactions between bosons are modeled by a hard-sphere potential with scattering length a, whereas the interactions between the impurity and the bosons are modeled by a short-range, square-well potential where both the sign and the strength of the scattering length b can be varied by adjusting the well depth. We characterize the attractive and the repulsive polaron branch by calculating the binding energy and the effective mass of the impurity. Furthermore, we investigate the structural properties of the bath, such as the impurity-boson contact parameter and the change of the density profile around the impurity. At the unitary limit of the impurity-boson interaction, we find that the effective mass of the impurity remains smaller than twice its bare mass, while the binding energy scales as ℏ²n^(2/3)/m, where n is the density of the bath and m is the common mass of the impurity and the bosons in the bath. The implications for the phase diagram of binary Bose-Bose mixtures at low concentrations are also discussed.
Simulation of 12C+12C elastic scattering at high energy by using the Monte Carlo method
NASA Astrophysics Data System (ADS)
Guo, Chen-Lei; Zhang, Gao-Long; Tanihata, I.; Le, Xiao-Yun
2012-03-01
The Monte Carlo method is used to simulate the 12C+12C reaction process. Taking into account the size of the incident 12C beam spot and the thickness of the 12C target, the distributions of scattered 12C on the MWPC and the CsI detectors at a given detection distance have been simulated. In order to separate elastic scattering from inelastic scattering to the 4.4 MeV excited state, we set several variables: the kinetic energy of the incident 12C, the thickness of the 12C target, the ratio of the excited state, the wire spacing of the MWPC, the energy resolution of the CsI detector and the time resolution of the plastic scintillator. From the simulation results, the preliminary design of the experimental system can be determined as follows: the beam size of the incident 12C is φ5 mm, the incident kinetic energy is 200-400 A MeV, the target thickness is 2 mm, the ratio of the excited state is 20%, the flight distance of scattered 12C is 3 m, the energy resolution of the CsI detectors is 1%, the time resolution of the plastic scintillator is 0.5%, and the size of the CsI detectors is 7 cm × 7 cm; at least 16 CsI detectors are needed to cover a 0° to 5° angular distribution.
Bykov, A V; Priezzhev, A V; Myllylae, Risto A
2011-06-30
Two-dimensional spatial intensity distributions of diffuse scattering of near-infrared laser radiation from a strongly scattering medium, whose optical properties are close to those of skin, are obtained using Monte Carlo simulation. The medium contains a cylindrical inhomogeneity with optical properties close to those of blood. It is shown that the stronger absorption and scattering of light by blood, compared to the surrounding medium, causes the radiation diffusely reflected from the surface of the medium under study and registered at that surface to have a local minimum directly above the cylindrical inhomogeneity. This specific feature makes the method of spatially resolved reflectometry potentially applicable for imaging blood vessels and determining their sizes. It is also shown that blurring of the vessel image increases almost linearly with increasing vessel embedment depth. This relation may be used to determine the depth of embedment provided that the optical properties of the scattering media are known. The optimal position of the sources and detectors of radiation, providing the best imaging of the vessel under study, is determined. (biophotonics)
NASA Astrophysics Data System (ADS)
Mizuno, T.; Kanai, Y.; Kataoka, J.; Kiss, M.; Kurita, K.; Pearce, M.; Tajima, H.; Takahashi, H.; Tanaka, T.; Ueno, M.; Umeki, Y.; Yoshida, H.; Arimoto, M.; Axelsson, M.; Marini Bettolo, C.; Bogaert, G.; Chen, P.; Craig, W.; Fukazawa, Y.; Gunji, S.; Kamae, T.; Katsuta, J.; Kawai, N.; Kishimoto, S.; Klamra, W.; Larsson, S.; Madejski, G.; Ng, J. S. T.; Ryde, F.; Rydström, S.; Takahashi, T.; Thurston, T. S.; Varner, G.
2009-03-01
The energy response of plastic scintillators (Eljen Technology EJ-204) to polarized soft gamma-ray photons below 100 keV has been studied, primarily for the balloon-borne polarimeter, PoGOLite. The response calculation includes quenching effects due to low-energy recoil electrons and the position dependence of the light collection efficiency in a 20 cm long scintillator rod. The broadening of the pulse-height spectrum, presumably caused by light transportation processes inside the scintillator, as well as the generation and multiplication of photoelectrons in the photomultiplier tube, were studied experimentally and have also been taken into account. A Monte Carlo simulation based on the Geant4 toolkit was used to model photon interactions in the scintillators. When using the polarized Compton/Rayleigh scattering processes previously corrected by the authors, scintillator spectra and angular distributions of scattered polarized photons could clearly be reproduced, in agreement with the results obtained at a synchrotron beam test conducted at the KEK Photon Factory. Our simulation successfully reproduces the modulation factor, defined as the ratio of the amplitude to the mean of the distribution of the azimuthal scattering angles, within 5% (relative). Although primarily developed for the PoGOLite mission, the method presented here is also relevant for other missions aiming to measure polarization from astronomical objects using plastic scintillator scatterers.
Calculation of Nonlinear Thermoelectric Coefficients of InAs1-xSbx Using Monte Carlo Method
Sadeghian, RB; Bahk, JH; Bian, ZX; Shakouri, A
2011-12-28
It was found that the nonlinear Peltier effect can take place and increase the cooling power density when a lightly doped thermoelectric material is under a large electric field. This effect is due to the Seebeck coefficient enhancement from an electron distribution far from equilibrium. In the nonequilibrium transport regime, the solution of the Boltzmann transport equation in the relaxation-time approximation ceases to apply. The Monte Carlo method, on the other hand, proves to be a capable tool for simulation of semiconductor devices at small scales as well as thermoelectric effects with local nonequilibrium charge distribution. InAs1-xSbx is a favorable thermoelectric material for nonlinear operation owing to its high mobility inherited from the binary compounds InSb and InAs. In this work we report simulation results on the nonlinear Peltier power of InAs1-xSbx at low doping levels, at room temperature and at low temperatures. The thermoelectric power factor in nonlinear operation is compared with the maximum value that can be achieved with optimal doping in the linear transport regime.
NASA Astrophysics Data System (ADS)
Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Kalyvas, N. I.; Valais, I. G.; Kandarakis, I. S.; Panayiotakis, G. S.
2014-03-01
The aim of the present study was to propose a comprehensive method for PET scanner image quality assessment, by simulation of a thin layer chromatography (TLC) flood source with a previously validated Monte Carlo (MC) model. The model was developed using the GATE MC package, and reconstructed images were obtained using the STIR software with cluster computing. The PET scanner simulated was the GE Discovery-ST. The TLC source was immersed in an 18F-FDG bath solution (1 MBq) in order to assess image quality. The influence of different scintillating crystals on the PET scanner's image quality, in terms of the MTF, the NNPS and the DQE, was investigated. Images were reconstructed by the commonly used FBP2D, FBP3DRP and the OSMAPOSL (15 subsets, 3 iterations) reprojection algorithms. The PET scanner configuration incorporating LuAP crystals provided the optimum MTF values in both 2D and 3D FBP, whereas the corresponding configuration with BGO crystals gave the higher MTF values after OSMAPOSL. The scanner incorporating BGO crystals was also found to have the lowest noise levels and the highest DQE values after all image reconstruction algorithms. The plane source can also be useful for the experimental image quality assessment of PET and SPECT scanners in clinical practice.
Kuss, M.; Markel, T.; Kramer, W.
2011-01-01
Concentrated purchasing patterns of plug-in vehicles may result in localized distribution transformer overload scenarios. Prolonged periods of transformer overloading cause service life decrements and, in worst-case scenarios, result in tripped thermal relays and residential service outages. This analysis reviews the distribution transformer load models developed in the IEC 60076 standard and applies the model to a neighborhood with plug-in hybrids. Residential distribution transformers are sized such that night-time cooling provides thermal recovery from heavy load conditions during the daytime utility peak. It is expected that PHEVs will primarily be charged at night in a residential setting. If not managed properly, some distribution transformers could become overloaded, leading to a reduction in transformer life expectancy and thus increasing costs to utilities and consumers. A Monte Carlo scheme simulated each day of the year, evaluating 100 load scenarios as it swept through the following variables: number of vehicles per transformer, transformer size, and charging rate. A general method for determining the expected transformer aging rate is developed, based on the energy needs of plug-in vehicles loading a residential transformer.
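The Monte Carlo sweep described above can be sketched as follows. All numbers here are our own placeholders (baseline load profile, plug-in window, charge duration), not values from the study, and the simple rating comparison treats kW ≈ kVA at unity power factor instead of the IEC 60076 thermal model:

```python
import random

def overload_hours(n_vehicles, transformer_kva, charge_kw=3.3,
                   trials=100, seed=0):
    """Illustrative Monte Carlo estimate of how many hours per day a
    transformer exceeds its nameplate rating: each trial draws a random
    evening plug-in hour for every vehicle, stacks 4 h of charging load
    on a fixed baseline profile, and counts over-rating hours.
    Returns the mean count over all trials."""
    rng = random.Random(seed)
    # made-up aggregate baseline load per hour (kW) for the service area
    baseline = [8, 7, 6, 6, 6, 7, 9, 12, 11, 10, 10, 10,
                11, 11, 12, 13, 15, 18, 20, 19, 17, 14, 11, 9]
    total_over = 0
    for _ in range(trials):
        load = baseline[:]
        for _ in range(n_vehicles):
            start = rng.randint(17, 22)               # plug in 5-10 pm
            for h in range(start, min(start + 4, 24)):  # 4 h charge
                load[h] += charge_kw
        total_over += sum(1 for kw in load if kw > transformer_kva)
    return total_over / trials
```

Sweeping this function over vehicle counts, transformer sizes, and charging rates reproduces the shape of the study's experiment: overload hours grow sharply once evening charging coincides with the baseline peak.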
NASA Astrophysics Data System (ADS)
Dioszegi, I.; Rusek, A.; Dane, B. R.; Chiang, I. H.; Meek, A. G.; Dilmanian, F. A.
2011-06-01
Recent upgrades of the MCNPX Monte Carlo code include transport of heavy ions. We employed the new code to simulate the energy and dose distributions produced by carbon beams in a rabbit's head, in and around a brain tumor. The work was within our experimental technique of interlaced carbon microbeams, which uses two 90° arrays of parallel, thin planes of carbon beams (microbeams) interlacing to produce a solid beam at the target. A similar version of the method was earlier developed with synchrotron-generated x-ray microbeams. We first simulated the Bragg peak in high-density polyethylene and other materials, where we could compare the calculated carbon energy deposition to the measured data produced at the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL). The results showed that the new MCNPX code gives a reasonable account of the carbon beam's dose up to 200 MeV/nucleon beam energy. At higher energies, which were not relevant to our project, the model failed to reproduce the increasing nuclear breakup tail beyond the Bragg peak. In our model calculations we determined the dose distribution along the beam path, including the angular straggling of the microbeams, and used the data to determine the optimal values of beam spacing in the array for producing adequate beam interlacing at the target. We also determined, for the purpose of Bragg-peak spreading at the target, the relative beam intensities of the consecutive exposures with stepwise lower beam energies, and simulated the resulting dose distribution in the spread-out Bragg peak. The details of the simulation methods used and the results obtained are presented.
NASA Astrophysics Data System (ADS)
Chung, T.; Rachman, A.; Yoshimoto, K.
2013-12-01
For the separation of intrinsic attenuation (Qi^-1) and scattering attenuation (Qs^-1) in South Korea, multiple-lapse time window analysis using the direct simulation Monte Carlo (DSMC) method (Yoshimoto, 2000) showed that a depth-dependent velocity model divided into crust and mantle fit better than a uniform velocity model (Chung et al., 2010). Among the several models of S-wave velocity, the smallest residuals were observed for the discontinuous Moho model at 32 km, with crustal velocity increasing from 3.5 to 3.8 km/s. Chung and Yoshimoto (2013), however, reported that DSMC modeling with a 10 km source depth gave the smallest residuals, corresponding to the average focal depth of the data set, and showed the effect of source depth to be greater than that of the Moho model. This study thus collected 330 ray paths originating from 39 events with around 10 km source depth in South Korea (Fig. 1), and analyzed them by the DSMC method in the same way as Chung et al. (2010). The substantial residual reduction obtained by changing the source depth indicates an advantage of the DSMC model over the analytic model. As in the previous study, we confirmed that the residual difference of the Moho model is relatively very small compared to the source depth change. Based on these data, we will examine the focal mechanism effect, which we previously failed to observe (Chung and Yoshimoto, 2012). References: Chung, T.W., K. Yoshimoto, and S. Yun, 2010, BSSA, 3183-3193. Chung, T.W., and K. Yoshimoto, 2012, J.M.M.T., 85-91 (in Korean). Chung, T.W., and K. Yoshimoto, 2013, Geosciences J., submitted. Yoshimoto, K., 2000, JGR, 6153-6161. Fig. 1. Ray paths of this study.
NASA Astrophysics Data System (ADS)
Chen, X.; Rubin, Y.; Baldocchi, D. D.
2005-12-01
Understanding the interactions between soil, plants, and the atmosphere under water-stressed conditions is important for ecosystems where water availability is limited. In such ecosystems, the amount of water transferred from the soil to the atmosphere is controlled not only by weather conditions and vegetation type but also by soil water availability. Although researchers have proposed different approaches to model the impact of soil moisture on plant activities, the parameters involved are difficult to measure. However, using measurements of observed latent heat and carbon fluxes, as well as soil moisture data, Bayesian inversion methods can be employed to estimate the various model parameters. In our study, the actual evapotranspiration (ET) of an ecosystem is approximated by the Priestley-Taylor relationship, with the Priestley-Taylor coefficient modeled as a function of soil moisture content. Soil moisture limitation on root uptake is characterized in a similar manner to the Feddes model. Bayesian inference is carried out within the framework of graphical models. Because exact inference is difficult to obtain, the Markov chain Monte Carlo (MCMC) method is implemented using a free software package, BUGS (Bayesian inference Using Gibbs Sampling). The proposed methodology is applied to a Mediterranean oak-savanna FLUXNET site in California, where continuous measurements of actual ET are obtained with the eddy-covariance technique and soil moisture contents are monitored by several time-domain reflectometry probes located within the footprint of the flux tower. After the Bayesian inversion, the posterior distributions of all the parameters exhibit enhanced information compared to the prior distributions. The samples generated from data in year 2003 are used to predict the actual ET in year 2004, and the prediction uncertainties are assessed in terms of confidence intervals.
Our tests also reveal the usefulness of various types of soil moisture data in parameter estimation, which could be used to guide analyses of available data and planning of field data collection activities.
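The MCMC inversion described above can be illustrated with a minimal Metropolis sampler. This is a generic sketch under our own assumptions (the study used Gibbs sampling via BUGS; the model here is a bare ET = α·ET_eq relationship with Gaussian noise and a flat prior on α, with all numbers invented):

```python
import math
import random

def metropolis_alpha(et_obs, et_eq, sigma=0.5, n_steps=5000,
                     step=0.1, seed=3):
    """Random-walk Metropolis sampler for the coefficient alpha in the
    toy model ET = alpha * ET_eq + N(0, sigma). Flat prior on [0, 3];
    returns the chain of posterior samples of alpha."""
    rng = random.Random(seed)

    def log_like(a):
        return -sum((o - a * e) ** 2
                    for o, e in zip(et_obs, et_eq)) / (2 * sigma ** 2)

    alpha, ll, samples = 1.0, None, []
    ll = log_like(alpha)
    for _ in range(n_steps):
        prop = alpha + rng.gauss(0.0, step)   # symmetric proposal
        if 0.0 <= prop <= 3.0:                # prior support
            llp = log_like(prop)
            # accept with probability min(1, exp(llp - ll))
            if math.log(rng.random() + 1e-300) < llp - ll:
                alpha, ll = prop, llp
        samples.append(alpha)
    return samples
```

Discarding the first half of the chain as burn-in and summarising the rest gives the posterior mean and a credible interval for α, the same kind of output BUGS would report.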
NASA Astrophysics Data System (ADS)
Wen, Xiulan; Xu, Youxiong; Li, Hongsheng; Wang, Fenglin; Sheng, Danghong
2012-09-01
Straightness error is an important parameter in measuring high-precision shafts. The new generation geometrical product specification (GPS) requires that the measurement uncertainty characterizing the reliability of the results be given together with the measurement result. Nowadays most research on straightness focuses on error calculation, and only a few projects evaluate the measurement uncertainty based on "The Guide to the Expression of Uncertainty in Measurement (GUM)". In order to compute the spatial straightness error (SSE) accurately and rapidly and to overcome the limitations of GUM, a quasi particle swarm optimization (QPSO) is proposed to solve the minimum zone SSE, and the Monte Carlo method (MCM) is developed to estimate the measurement uncertainty. The mathematical model of the minimum zone SSE is formulated. In QPSO, quasi-random sequences are applied to the generation of the initial position and velocity of particles, and their velocities are modified by the constriction factor approach. The flow of measurement uncertainty evaluation based on MCM is proposed, the core of which is repeated sampling from the probability density function (PDF) of every input quantity and evaluation of the model in each case. The minimum zone SSE of a shaft measured on a coordinate measuring machine (CMM) is calculated by QPSO, and the measurement uncertainty is evaluated by MCM on the basis of analyzing the uncertainty contributors. The results show that the uncertainty directly influences the product judgment result. Therefore it is scientific and reasonable to consider the influence of the uncertainty in judging whether parts are accepted or rejected, especially for those located in the uncertainty zone. The proposed method is especially suitable when the PDF of the measurand cannot adequately be approximated by a Gaussian distribution or a scaled and shifted t-distribution, or when the measurement model is non-linear.
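The MCM loop described above (draw every input from its PDF, evaluate the model, summarise the output distribution) can be sketched generically. This is our own illustration in the style of GUM Supplement 1, not the paper's SSE model; the example measurand and its input PDFs are hypothetical:

```python
import random
import statistics

def mcm_uncertainty(model, input_samplers, n_trials=20000, seed=7):
    """Monte Carlo evaluation of measurement uncertainty: repeatedly
    draw every input quantity from its PDF, evaluate the measurement
    model, and return (mean, standard uncertainty) of the output.
    `input_samplers` are callables taking an RNG and returning one draw."""
    rng = random.Random(seed)
    outputs = [model(*(s(rng) for s in input_samplers))
               for _ in range(n_trials)]
    return statistics.mean(outputs), statistics.stdev(outputs)

# hypothetical measurand y = a + b, with a ~ N(1, 0.1) and b ~ U(-0.2, 0.2)
mean, u = mcm_uncertainty(
    lambda a, b: a + b,
    [lambda r: r.gauss(1.0, 0.1), lambda r: r.uniform(-0.2, 0.2)])
```

Because the output distribution is built empirically, the same loop also yields coverage intervals directly from the sorted samples, which is exactly where MCM outperforms the GUM's Gaussian assumptions for non-linear models.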
NASA Astrophysics Data System (ADS)
Esler, Kenneth Paul
Path integral Monte Carlo (PIMC) is a quantum-level simulation method based on a stochastic sampling of the many-body thermal density matrix. Utilizing the imaginary-time formulation of Feynman's sum-over-histories, it includes thermal fluctuations and particle correlations in a natural way. Over the past two decades, PIMC has been applied to the study of the electron gas, hydrogen under extreme pressure, and superfluid helium with great success. However, the computational demand scales with a high power of the atomic number, preventing its application to systems containing heavier elements. In this dissertation, we present the methodological developments necessary to apply this powerful tool to these systems. We begin by introducing the PIMC method. We then explain how effective potentials with position-dependent electron masses can be used to significantly reduce the computational demand of the method for heavier elements, while retaining high accuracy. We explain how these pseudo-Hamiltonians can be integrated into the PIMC simulation by computing the density matrix for the electron-ion pair. We then address the difficulties associated with the long-range behavior of the Coulomb potential, and improve a method to optimally partition particle interactions into real-space and reciprocal-space summations. We discuss the use of twist-averaged boundary conditions to reduce the finite-size effects in our simulations and the fixed-phase method needed to enforce the boundary conditions. Finally, we explain how a PIMC simulation of the electrons can be coupled to a classical Langevin dynamics simulation of the ions to achieve an efficient sampling of all degrees of freedom. After describing these advancements in methodology, we apply our new technology to fluid sodium near its liquid-vapor critical point.
In particular, we explore the microscopic mechanisms which drive the continuous change from a dense metallic liquid to an expanded insulating vapor above the critical temperature. We show that the dynamic aggregation and dissociation of clusters of atoms play a significant role in determining the conductivity and that the formation of these clusters is highly density and temperature dependent. Finally, we suggest several avenues for research to further improve our simulations.
NASA Astrophysics Data System (ADS)
Zapoměl, J.; Stachiv, I.; Ferfecki, P.
2016-01-01
In this paper, a novel procedure for the simultaneous measurement of the volumetric density and the Young's modulus of an ultrathin film is proposed and analyzed. It combines the Monte Carlo probabilistic method with the finite-element method (FEM) and experiments carried out on a suspended micro-/nanomechanical resonator with a deposited thin film under different but controllable axial prestresses. Since the procedure requires detection of only two fundamental bending resonant frequencies of the beam under different axial prestress forces, the impacts of noise and damping on the accuracy of the results are minimized, which essentially improves its reliability. The volumetric mass density and the Young's modulus of the thin film are then evaluated by means of FEM-based computational simulations, and the accuracies of the determined values are estimated using the Monte Carlo probabilistic method, which has been incorporated into the computational procedure.
Camp, Nicola J.; Neuhausen, Susan L.; Tiobech, Josepha; Polloi, Anthony; Coon, Hilary; Myles-Worsley, Marina
2001-01-01
Palauans are an isolated population in Micronesia with lifetime prevalence of schizophrenia (SCZD) of 2%, compared to the world rate of ∼1%. The possible enrichment for SCZD genes, in conjunction with the potential for reduced etiological heterogeneity and the opportunity to ascertain statistically powerful extended pedigrees, makes Palauans a population of choice for the mapping of SCZD genes. We have used a Markov-chain Monte Carlo method to perform a genomewide multipoint analysis in seven extended pedigrees from Palau. Robust multipoint parametric and nonparametric linkage (NPL) analyses were performed under three nested diagnostic classifications—core, spectrum, and broad. We observed four regions of interest across the genome. Two of these regions—on chromosomes 2p13-14 (for which, under core diagnostic classification, NPL=6.5 and parametric LOD=4.8) and 13q12-22 (for which, under broad diagnostic classification, parametric LOD=3.6, and, under spectrum diagnostic classification, parametric LOD=3.5)—had evidence for linkage with genomewide significance, after correction for multiple testing; with the current pedigree resource and genotyping, these regions are estimated to be 4.3 cM and 19.75 cM in size, respectively. A third region, with intermediate evidence for linkage, was identified on chromosome 5q22-qter (for which, under broad diagnostic classification, parametric LOD=2.5). The fourth region of interest had only borderline suggestive evidence for linkage (on 3q24-28; for this region, under broad diagnostic classification, parametric LOD=2.0). All regions exhibited evidence for genetic heterogeneity. Our findings provide significant evidence for susceptibility loci on chromosomes 2p13-14 and 13q12-22 and support both a model of genetic heterogeneity and the utility of a broader set of diagnostic classifications in the population from Palau. PMID:11668428
Li, Dong; Chen, Bin; Ran, Wei Yu; Wang, Guo Xiang; Wu, Wen Juan
2015-09-01
The voxel-based Monte Carlo method (VMC) is now a gold standard in the simulation of light propagation in turbid media. For complex tissue structures, however, the computational cost is higher when small voxels are used to improve the smoothness of tissue interfaces and a large number of photons is used to obtain accurate results. To reduce computational cost, criteria were proposed to determine the voxel size and photon number in 3-dimensional VMC simulations with acceptable accuracy and computation time. The selection of the voxel size can be expressed as a function of tissue geometry and optical properties. The photon number should be at least 5 times the total voxel number. These criteria are further applied in developing a photon ray splitting scheme within a local grid refinement technique to reduce the computational cost for a nonuniform tissue structure with significantly varying optical properties. In the proposed technique, a nonuniform refined grid system is used, where fine grids cover the tissue with high absorption and complex geometry, and coarse grids cover the remainder. The total photon number is selected based on the voxel size of the coarse grid, and the photon-splitting scheme is developed to satisfy the statistical accuracy requirement in the fine-grid area. Results show that the local grid refinement technique with the photon ray splitting scheme can accelerate the computation by 7.6 times (reducing time consumption from 17.5 to 2.3 h) in the simulation of laser light energy deposition in skin tissue that contains port wine stain lesions. PMID:26417866
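The photon-splitting step can be illustrated with a minimal, weight-conserving sketch (not the authors' implementation; the field names are invented for illustration): when a photon packet crosses from the coarse grid into the refined region, it is replaced by several lower-weight copies so the fine voxels accumulate enough statistics without raising the global photon count.

```python
def split_photon(photon, n_split):
    """Photon ray splitting at a coarse-to-fine grid boundary: one photon
    of weight w becomes n_split copies of weight w / n_split, preserving
    the expected energy deposition while boosting local statistics."""
    w = photon["weight"] / n_split
    return [{"pos": photon["pos"], "dir": photon["dir"], "weight": w}
            for _ in range(n_split)]

photon = {"pos": (0.0, 0.0, 0.0), "dir": (0.0, 0.0, 1.0), "weight": 1.0}
daughters = split_photon(photon, 4)
total = sum(p["weight"] for p in daughters)  # weight is conserved
```

Because the total statistical weight is unchanged, the split is unbiased; only the variance in the refined region is reduced.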
NASA Astrophysics Data System (ADS)
Su, Lin; Du, Xining; Liu, Tianyu; Xu, X. George
2014-06-01
An electron-photon coupled Monte Carlo code ARCHER -
Li, Jun; Calo, Victor M.
2013-09-15
We present a single-particle Lennard–Jones (L-J) model for CO{sub 2} and N{sub 2}. Simplified L-J models for other small polyatomic molecules can be obtained following the methodology described herein. The phase-coexistence diagrams of single-component systems computed using the proposed single-particle models for CO{sub 2} and N{sub 2} agree well with experimental data over a wide range of temperatures. These diagrams are computed using the Markov Chain Monte Carlo method based on the Gibbs-NVT ensemble. This good agreement validates the proposed simplified models. That is, with properly selected parameters, the single-particle models have similar accuracy in predicting gas-phase properties as more complex, state-of-the-art molecular models. To further test these single-particle models, three binary mixtures of CH{sub 4}, CO{sub 2} and N{sub 2} are studied using a Gibbs-NPT ensemble. These results are compared against experimental data over a wide range of pressures. The single-particle model has similar accuracy in the gas phase as traditional models, although its deviation in the liquid phase is greater. Since the single-particle model reduces the particle number and avoids the time-consuming Ewald summation used to evaluate Coulomb interactions, the proposed model improves the computational efficiency significantly, particularly in the case of high liquid density, where the acceptance rate of the particle-swap trial move increases. We compare, at constant temperature and pressure, the Gibbs-NPT and Gibbs-NVT ensembles to analyze their performance differences and the consistency of their results. As theoretically predicted, the agreement between the simulations implies that Gibbs-NVT can be used to validate Gibbs-NPT predictions when experimental data are not available.
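As a reference for the pair interaction underlying such single-particle models, the 12-6 Lennard-Jones potential can be written in a few lines (the ε and σ values below are reduced-unit placeholders, not the fitted CO2 or N2 parameters of the paper):

```python
def lj_potential(r, epsilon, sigma):
    """12-6 Lennard-Jones pair energy:
    U(r) = 4*epsilon*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The well minimum sits at r_min = 2**(1/6) * sigma with depth -epsilon,
# a quick sanity check for any parameter set:
r_min = 2.0 ** (1.0 / 6.0)          # with sigma = 1 (reduced units)
u_min = lj_potential(r_min, 1.0, 1.0)
```

In a Gibbs-ensemble simulation this single function replaces the full site-site plus Coulomb evaluation, which is the source of the efficiency gain the abstract describes.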
NASA Astrophysics Data System (ADS)
Nikolopoulos, D.; Kandarakis, I.; Cavouras, D.; Valais, I.; Linardatos, D.; Michail, C.; David, S.; Gaitanis, A.; Nomicos, C.; Louizi, A.
2006-09-01
X-ray absorption and X-ray fluorescence properties of medical imaging scintillating screens were studied by Monte Carlo methods as a function of the incident photon energy and screen coating thickness. The scintillating materials examined were Gd2O2S (GOS), Gd2SiO5 (GSO), YAlO3 (YAP), Y3Al5O12 (YAG), LuSiO5 (LSO), LuAlO3 (LuAP) and ZnS. Monoenergetic photon exposures were modeled in the range from 10 to 100 keV. The coating thicknesses of the investigated scintillating screens ranged up to 200 mg/cm2. Results indicated that X-ray absorption and X-ray fluorescence are affected by the incident photon energy and the screen's coating thickness. Regarding incident photon energy, X-ray absorption and fluorescence were found to exhibit very intense changes near the K edge of the heaviest element in the screen's scintillating material. Regarding coating thickness, thicker screens exhibited higher X-ray absorption and X-ray fluorescence. Results also indicated that a significant fraction of the generated X-ray fluorescent quanta escape from the scintillating screen; this fraction was found to increase with the screen's coating thickness. In the energy range studied, most of the incident photons were found to be absorbed via a one-hit photoelectric effect. As a result, the reabsorption of scattered radiation was found to be of rather minor importance, although it also increased with the screen's coating thickness. Differences in X-ray absorption and X-ray fluorescence were found among the various scintillators studied. The LSO scintillator was found to be the most attractive material for many X-ray imaging applications, exhibiting the best absorption properties over the largest part of the energy range studied. Y-based scintillators were also found to have significant absorption performance in the low energy range.
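The thickness dependence of absorption reported above follows the exponential attenuation law, which a tiny Monte Carlo sketch reproduces (normal incidence, one-hit photoelectric absorption only; the attenuation coefficient and thickness values are arbitrary illustrative numbers in matched units):

```python
import math
import random

def absorbed_fraction(mu, thickness, n_photons=100_000, seed=7):
    """Monte Carlo estimate of the fraction of normally incident photons
    absorbed in a screen: sample each photon's free path from an
    exponential PDF with attenuation coefficient mu and count photons
    whose first interaction lies inside the coating. Scattering and
    fluorescence escape are ignored in this sketch."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_photons)
               if -math.log(1.0 - rng.random()) / mu <= thickness)
    return hits / n_photons

# Thicker screens absorb more, approaching 1 - exp(-mu * t):
f_thin = absorbed_fraction(mu=1.0, thickness=0.5)
f_thick = absorbed_fraction(mu=1.0, thickness=2.0)
```

The estimates converge on the analytic value 1 − exp(−μt), consistent with the monotonic thickness dependence the abstract reports.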
NASA Astrophysics Data System (ADS)
Jones, Andrew; Thompson, Andrew; Crain, Jason; Müser, Martin H.; Martyna, Glenn J.
2009-04-01
The quantum Drude oscillator (QDO) model, which allows many-body polarization and dispersion to be treated both on an equal footing and beyond the dipole limit, is investigated using two approaches to the linear scaling diffusion Monte Carlo (DMC) technique. The first is a general purpose norm-conserving DMC (NC-DMC) method wherein the number of walkers, N, remains strictly constant, thereby avoiding the sudden death or explosive growth of walker populations, with an error that vanishes as O(1/N) in the absence of weights. As NC-DMC satisfies detailed balance, a phase space can be defined that permits both an exact trajectory weighting and a fast mean-field trajectory weighting technique to be constructed, which can eliminate or reduce the population bias, respectively. The second is a many-body diagrammatic expansion for trial wave functions in systems dominated by strong on-site harmonic coupling and a dense matrix of bilinear coupling constants, such as the QDO in the dipole limit; an approximate trial function is introduced to treat two-body interactions outside the dipole limit. Using these approaches, high accuracy is achieved in studies of the fcc-solid phase of the highly polarizable atom xenon within the QDO model. It is found that 200 walkers suffice to generate converged results for systems as large as 500 atoms. The quality of QDO predictions compared to experiment and the ability to generate these predictions efficiently demonstrate the feasibility of employing the QDO approach to model long-range forces.
NASA Astrophysics Data System (ADS)
Makri, T.; Yakoumakis, E.; Papadopoulou, D.; Gialousis, G.; Theodoropoulos, V.; Sandilos, P.; Georgiou, E.
2006-10-01
Seeking to assess the radiation risk associated with radiological examinations in neonatal intensive care units, thermoluminescence dosimetry was used for the measurement of entrance surface dose (ESD) in 44 AP chest and 28 AP combined chest-abdominal exposures of a sample of 60 neonates. The mean values of ESD were found to be equal to 44 ± 16 μGy and 43 ± 19 μGy, respectively. The MCNP-4C2 code, with a mathematical phantom simulating a neonate and appropriate x-ray energy spectra, was employed for the simulation of the AP chest and AP combined chest-abdominal exposures. Equivalent organ dose per unit ESD and energy imparted per unit ESD calculations are presented in tabular form. Combined with the ESD measurements, these calculations yield an effective dose of 10.2 ± 3.7 μSv, regardless of sex, and an imparted energy of 18.5 ± 6.7 μJ for the chest radiograph. The corresponding results for the combined chest-abdominal examination are 14.7 ± 7.6 μSv (males)/17.2 ± 7.6 μSv (females) and 29.7 ± 13.2 μJ. The calculated total risk per radiograph was low, ranging between 1.7 and 2.9 per million neonates per film, and was slightly higher for females. Results of this study are in good agreement with previous studies, especially in view of the diversity of the calculation methods used.
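The step from measured ESD to effective dose via Monte Carlo conversion coefficients can be sketched as a weighted sum over organs; the coefficients and tissue weighting factors below are invented placeholders, not the paper's MCNP-derived values or the full ICRP weight set:

```python
def effective_dose(organ_dose_per_esd, esd, tissue_weights):
    """Effective dose E = sum over tissues T of w_T * H_T, where each
    organ equivalent dose H_T is a Monte Carlo conversion coefficient
    (Sv per Gy of ESD) multiplied by the measured entrance surface dose."""
    return sum(tissue_weights[t] * coeff * esd
               for t, coeff in organ_dose_per_esd.items())

# Hypothetical conversion coefficients and illustrative tissue weights:
coeffs = {"lung": 0.40, "thyroid": 0.20, "breast": 0.25}
weights = {"lung": 0.12, "thyroid": 0.04, "breast": 0.12}
E = effective_dose(coeffs, esd=44e-6, tissue_weights=weights)  # ESD in Gy
```

With per-organ coefficients tabulated as in the paper, the same sum converts any measured ESD directly into an effective dose estimate.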
NASA Astrophysics Data System (ADS)
Radaev, A. I.; Schurovskaya, M. V.
2015-12-01
The choice of the spatial nodalization for the calculation of the power density and burnup distribution in a research reactor core with fuel assemblies of the IRT-3M and VVR-KN type using the program based on the Monte Carlo code is described. The influence of the spatial nodalization on the results of calculating basic neutronic characteristics and calculation time is investigated.
Wang, L; Fourkal, E; Hayes, S; Jin, L; Ma, C
2014-06-01
Purpose: To study the dosimetric differences resulting from using the pencil beam (ray-tracing, RT) algorithm instead of Monte Carlo (MC) methods for tumors adjacent to the skull. Methods: We retrospectively calculated the dosimetric differences between the RT and MC algorithms for brain tumors located adjacent to the skull treated with CyberKnife in 18 patients (27 tumors in total). The median tumor size was 0.53 cc (range 0.018 cc to 26.2 cc). The absolute mean distance from the tumor to the skull was 2.11 mm (range -17.0 mm to 9.2 mm). The dosimetric variables examined include the mean, maximum, and minimum doses to the target, the target coverage (TC), and the conformality index. The MC calculation used the same MUs as the RT dose calculation without further normalization, with 1% statistical uncertainty. The differences were analyzed by tumor size and distance from the skull. Results: The TC was generally reduced with the MC calculation (24 out of 27 cases). The average difference in TC between RT and MC was 3.3% (range 0.0% to 23.5%). When the TC was deemed unacceptable, the plans were re-normalized in order to increase the TC to 99%. This resulted in a 6.9% maximum change in the prescription isodose line. The maximum changes in the mean, maximum, and minimum doses were 5.4%, 7.7%, and 8.4%, respectively, before re-normalization. When the TC was analyzed with regard to target size, the worst coverage occurred for the smallest targets (0.018 cc). When the TC was analyzed with regard to the distance to the skull, there was no correlation between proximity to the skull and the TC difference between the RT and MC plans. Conclusions: For smaller targets (< 4.0 cc), MC should be used to re-evaluate the dose coverage after RT is used for the initial dose calculation, in order to ensure target coverage.
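The re-normalization step (scaling the plan until the desired fraction of target voxels meets the prescription) can be sketched on a toy voxel dose list; this is a generic illustration, not the CyberKnife planning system's algorithm:

```python
def renormalize(voxel_doses, prescription, coverage=0.99):
    """Scale all doses by prescription / D_cov, where D_cov is the dose
    exceeded by the requested fraction of target voxels, so that that
    fraction of the target receives at least the prescription dose."""
    ranked = sorted(voxel_doses, reverse=True)
    idx = min(int(coverage * len(ranked)), len(ranked) - 1)
    scale = prescription / ranked[idx]
    return [d * scale for d in voxel_doses], scale

# Toy target of 5 voxels, prescribed 18 Gy at 99% coverage:
scaled, scale = renormalize([20.0, 19.0, 18.0, 17.0, 16.0], prescription=18.0)
```

The returned scale factor corresponds to the change in the prescription isodose line reported in the abstract.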
NASA Astrophysics Data System (ADS)
Jiang, F.-J.
2011-01-01
Motivated by the so-called cubical regime in magnon chiral perturbation theory, we propose a method to calculate a low-energy constant, namely the spin-wave velocity c, of spin-1/2 antiferromagnets with O(N) symmetry in a Monte Carlo simulation. Specifically, we suggest that c can be determined by c = L/β when the squares of the spatial and temporal winding numbers are tuned to be the same in the Monte Carlo calculations. Here, β and L are the inverse temperature and the box size used in the simulations when this condition is met. We verify the validity of this idea by simulating the quantum spin-1/2 XY model. The c obtained by using the squares of winding numbers is c = 1.1348(5)Ja, which is consistent with the known values of c in the literature. Unlike other conventional approaches, our idea provides a direct method to measure c. Further, by simultaneously fitting our Monte Carlo data of the susceptibilities χ11 and the spin susceptibilities χ to their theoretical predictions from magnon chiral perturbation theory, we find c = 1.1347(2)Ja, which agrees with the value obtained by the method of using the squares of winding numbers. The low-energy constants magnetization density M and spin stiffness ρ of the quantum spin-1/2 XY model are determined as well, and are given by M = 0.43561(1)/a² and ρ = 0.26974(5)J, respectively. Thanks to the predictive power of magnon chiral perturbation theory, which places a very restrictive constraint among the low-energy constants for the model considered here, the accuracy of M presented in this study is much higher than in previous Monte Carlo results.
NASA Astrophysics Data System (ADS)
Díaz, N. Cornejo; Vargas, M. Jurado
2008-02-01
We present the new improved version of our Monte Carlo program DETEFF for detector efficiency calibration in gamma-ray spectrometry. It can be applied to a wide range of sample geometries commonly used for measurements with coaxial gamma-ray detectors: point, rectangular, disk, cylindrical, and Marinelli sources (the last being newly included in this version). The program is a dedicated code, designed specifically for the computation of gamma-ray detector efficiency. Therefore, it is more user-friendly and less time-consuming than most multi-purpose programs that are intended for a wide range of applications. The comparison of efficiency values obtained with DETEFF and MCNP4C for a typical HPGe detector and for energies between 40 and 1800 keV for point, cylindrical, and Marinelli geometries gave acceptable results, with relative deviations <2% for most energies. The validity of the program was also tested by comparing the DETEFF-calculated efficiency values with those obtained experimentally using a coaxial HPGe detector for different sources (point, disk, and 250 mL Marinelli beaker) containing 241Am, 109Cd, 57Co, 139Ce, 85Sr, 137Cs, 88Y, and 60Co. The calculated values were in good agreement with the experimental efficiencies for the three geometries considered, with the relative deviations generally being below 3.0%. These results, and those obtained during the application of the previous versions, indicate the program's suitability as a tool for the efficiency calibration of coaxial gamma-ray detectors, especially in routine measurements such as environmental monitoring.
NASA Astrophysics Data System (ADS)
Hernandez, F.; Liang, X.
2014-12-01
Given the inherent uncertainty in almost all of the variables involved, recent research is re-addressing the problem of calibrating hydrologic models from a stochastic perspective: the focus is shifting from finding a single parameter configuration that minimizes the model error, to approximating the maximum likelihood multivariate probability distribution of the parameters. To this end, Markov chain Monte Carlo (MCMC) formulations are widely used, where the distribution is defined as a smoothed ensemble of particles or members, each of which represents a feasible parameterization. However, the updating of these ensembles needs to strike a careful balance so that the particles adequately resemble the real distribution without either clustering or drifting excessively. In this study, we explore the implementation of two techniques that attempt to improve the quality of the resulting ensembles, both for the approximation of the model parameters and of the unknown states, in a dual calibration-data assimilation framework. The first feature of our proposed algorithm, in an effort to keep members from clustering on areas of high likelihood in light of the observations, is the introduction of diversity-inducing operators after each resampling. This approach has been successfully used before, and here we aim at testing additional operators which are also borrowed from the Evolutionary Computation literature. The second feature is a novel arrangement of the particles into two alternative data structures. The first one is a non-sorted Pareto population which favors 1) particles with high likelihood, and 2) particles that introduce a certain level of heterogeneity. The second structure is a partitioned array, in which each partition requires its members to have increasing levels of divergence from the particles in the areas of larger likelihood. 
Our newly proposed algorithm will be evaluated and compared to traditional MCMC methods in terms of convergence speed, and the ability of adequately representing the target probability distribution while making an efficient use of the available members. Two calibration scenarios will be carried out, one with invariant model parameter settings, and another one allowing the parameters to be modified through time along with the estimates of the model states.
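A minimal version of the resample-then-diversify ensemble update described above, on a one-dimensional toy parameter with a Gaussian likelihood (the likelihood, jitter size, and ensemble size are illustrative choices, not the study's dual calibration-data assimilation configuration):

```python
import math
import random

def resample_with_diversity(particles, log_lik, rng, jitter=0.1):
    """One ensemble update: weight members by their likelihood, resample
    proportionally, then apply a diversity-inducing mutation (Gaussian
    jitter here) so the ensemble does not collapse onto a few
    high-likelihood copies."""
    weights = [math.exp(log_lik(p)) for p in particles]
    total = sum(weights)
    chosen = rng.choices(particles, weights=[w / total for w in weights],
                         k=len(particles))
    return [p + rng.gauss(0.0, jitter) for p in chosen]

rng = random.Random(0)
ensemble = [rng.uniform(-5.0, 5.0) for _ in range(500)]
for _ in range(30):  # iterate toward the high-likelihood region near 2.0
    ensemble = resample_with_diversity(
        ensemble, lambda p: -0.5 * ((p - 2.0) / 0.5) ** 2, rng)
ens_mean = sum(ensemble) / len(ensemble)
```

The jitter plays the role of the diversity-inducing operators discussed above: without it, repeated resampling leaves only duplicated copies of a few members.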
NASA Astrophysics Data System (ADS)
Zhong, Jiaqiang
Homogeneous condensation in free-expanding plumes has been observed over the last several decades and is important because it may cause contamination problems on spacecraft surfaces. Experimental studies on homogeneous condensation in supersonic jets have been conducted since the 1970s. Empirical scaling laws were found by Hagena, and extensive Rayleigh scattering intensity data sets were measured along the condensation plume centerline. However, it is difficult to measure cluster size and number density distribution in an operational plume. Most of the available modeling of condensation processes is for ground-based facilities and therefore uses the continuum approach. Since free-expanding jets are mostly in the transitional to rarefied regimes, a kinetic approach should be used to simulate condensation-coupled gas expansion plumes. The direct simulation Monte Carlo (DSMC) method is the main tool used in this work for the numerical modeling of homogeneous condensation in free-expanding plumes. To simulate condensation flow, we must address the processes of cluster nucleation, growth, decay, and collisions with other clusters and monomers, as well as the gas kinetics. Therefore, microscopic models including nucleation, sticking and non-sticking collision, and evaporation models need to be specifically developed and integrated into the traditional DSMC code to simulate condensation behavior. To better understand basic cluster models and interactions between clusters and molecules, the molecular dynamics (MD) approach is chosen to simulate cluster-monomer collision processes. Cluster collision cross sections and cluster-monomer sticking coefficients, validated by the MD work, are derived and applied to the DSMC microscopic models. The DSMC condensation model was numerically validated by comparing simulation results with analytical solutions in one-dimensional test cases and further validated by comparison with Hagena's scaling laws.
Comparison of the simulated Rayleigh scattering intensity along the plume centerline with the experimental data is also presented. Finally the methodology is applied to predict water homogeneous condensation in a rocket plume, and the sensitivity of the results to the corrected nucleation rate and cluster coalescence process is discussed.
NASA Astrophysics Data System (ADS)
Lázaro, Ignacio; Ródenas, José; Marques, José G.; Gallardo, Sergio
2014-06-01
Materials in a nuclear reactor are activated by neutron irradiation. When they are withdrawn from the reactor and placed in storage, the potential dose received by workers in the surrounding area must be taken into account. In previous papers, the activation of control rods in an NPP with a BWR and the dose rates around the storage pool were estimated using the MCNP5 code, based on the Monte Carlo method. Models were validated by comparing simulation results with experimental measurements. As the activation is mostly produced in the stainless steel components of the control rods, the activation model can also be validated by means of experimental measurements on a stainless steel sample after it has been irradiated in a reactor. This has been done in the Portuguese Research Reactor at Instituto Tecnológico e Nuclear. The neutron activation has been calculated by two different methods, Monte Carlo and CINDER'90, and the results have been compared. After irradiation, dose rates at the water surface of the reactor pool were measured, with the irradiated stainless steel sample submerged at different positions under water. Experimental measurements have been compared with simulation results using Monte Carlo. The comparison shows good agreement, confirming the validation of the models.
Ross, D C
1995-01-01
In randomized clinical trials, patients are not differentially assigned to treatments by severity because available methods of data analysis are sensitive to regression to the mean and yield biased estimates of treatment effect. However, a method proposed by Robbins and Zhang provides consistent estimates of treatment effect in clinical trials, even when the more severely ill are assigned to the active treatment and the less severely ill are assigned to a placebo. This method was assessed by Monte Carlo trials. All combinations of two models of drug effect, five true score distributions, four magnitudes of error variance, and four sample sizes were assessed. The method works sufficiently well that its use should be considered. Further, the method gives correct results even when the regression discontinuity method fails. The method was also compared with a standard analysis of variance of difference scores using real psychiatric data from two ordinary randomized trials; similar results were obtained by both methods. PMID:8847658
Channon, H A; Hamilton, A J; D'Souza, D N; Dunshea, F R
2016-06-01
Monte Carlo simulation was investigated as a potential methodology to estimate sensory tenderness, flavour and juiciness scores of pork following the implementation of key pathway interventions known to influence eating quality. Correction factors were established using mean data from published studies investigating key production, processing and cooking parameters. Probability distributions of correction factors were developed for single pathway parameters only, due to lack of interaction data. Except for moisture infusion, ageing period, aitchbone hanging and cooking pork to an internal temperature of >74°C, only small shifts in the mean of the probability distributions of correction factors were observed for the majority of pathway parameters investigated in this study. Output distributions of sensory scores, generated from Monte Carlo simulations of input distributions of correction factors and for individual pigs, indicated that this methodology may be useful in estimating both the shift and variability in pork eating traits when different pathway interventions are applied. PMID:26869282
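The simulation structure described (a baseline sensory score shifted by independently sampled correction factors, with interactions ignored for lack of data) can be sketched as follows; the baseline score and the correction-factor distributions are invented for illustration, not the study's published values:

```python
import random

def simulate_scores(baseline, correction_samplers, n=10_000, seed=5):
    """Monte Carlo estimate of a sensory-score distribution: add an
    independently sampled correction factor for each pathway parameter
    to the baseline score (single-parameter corrections only, since
    interaction data are unavailable)."""
    rng = random.Random(seed)
    return [baseline + sum(s(rng) for s in correction_samplers)
            for _ in range(n)]

# Hypothetical correction-factor PDFs for two pathway interventions:
scores = simulate_scores(
    baseline=60.0,
    correction_samplers=[lambda r: r.gauss(4.0, 1.5),   # e.g. ageing period
                         lambda r: r.gauss(2.0, 1.0)],  # e.g. hanging method
)
mean_score = sum(scores) / len(scores)
```

The output distribution captures both the shift in the mean score and its variability, which is the quantity of interest in the abstract.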
A Guide to Monte Carlo Simulations in Statistical Physics
NASA Astrophysics Data System (ADS)
Landau, David P.; Binder, Kurt
2014-11-01
1. Introduction; 2. Some necessary background; 3. Simple sampling Monte Carlo methods; 4. Importance sampling Monte Carlo methods; 5. More on importance sampling Monte Carlo methods for lattice systems; 6. Off-lattice models; 7. Reweighting methods; 8. Quantum Monte Carlo methods; 9. Monte Carlo renormalization group methods; 10. Non-equilibrium and irreversible processes; 11. Lattice gauge models: a brief introduction; 12. A brief review of other methods of computer simulation; 13. Monte Carlo simulations at the periphery of physics and beyond; 14. Monte Carlo studies of biological molecules; 15. Outlook; Appendix: listing of programs mentioned in the text; Index.
A Guide to Monte Carlo Simulations in Statistical Physics
NASA Astrophysics Data System (ADS)
Landau, David P.; Binder, Kurt
2013-11-01
Preface; 1. Introduction; 2. Some necessary background; 3. Simple sampling Monte Carlo methods; 4. Importance sampling Monte Carlo methods; 5. More on importance sampling Monte Carlo methods of lattice systems; 6. Off-lattice models; 7. Reweighting methods; 8. Quantum Monte Carlo methods; 9. Monte Carlo renormalization group methods; 10. Non-equilibrium and irreversible processes; 11. Lattice gauge models: a brief introduction; 12. A brief review of other methods of computer simulation; 13. Monte Carlo simulations at the periphery of physics and beyond; 14. Monte Carlo studies of biological molecules; 15. Outlook; Appendix; Index.
A Guide to Monte Carlo Simulations in Statistical Physics
NASA Astrophysics Data System (ADS)
Landau, David P.; Binder, Kurt
2009-09-01
Preface; 1. Introduction; 2. Some necessary background; 3. Simple sampling Monte Carlo methods; 4. Importance sampling Monte Carlo methods; 5. More on importance sampling Monte Carlo methods of lattice systems; 6. Off-lattice models; 7. Reweighting methods; 8. Quantum Monte Carlo methods; 9. Monte Carlo renormalization group methods; 10. Non-equilibrium and irreversible processes; 11. Lattice gauge models: a brief introduction; 12. A brief review of other methods of computer simulation; 13. Monte Carlo simulations at the periphery of physics and beyond; 14. Monte Carlo studies of biological molecules; 15. Outlook; Appendix; Index.
NASA Astrophysics Data System (ADS)
Macedonia, Michael D.; Maginn, Edward J.
Configurational-bias Monte Carlo sampling techniques have been developed which overcome the difficulties of sampling configuration space efficiently for all-atom molecular models and for branched species represented with united atom models. Implementation details of this sampling scheme are discussed. The accuracy of a united atom forcefield with non-bond parameters optimized for zeolite adsorption and a widely used all-atom forcefield are evaluated by comparison with experimental sorption isotherms of linear and branched hydrocarbons.
NASA Astrophysics Data System (ADS)
Määttänen, Anni; Douspis, Marian
2015-04-01
In recent years, several datasets on deposition-mode ice nucleation under Martian conditions have shown that the effectiveness of mineral dust as a condensation nucleus decreases with temperature (Iraci et al., 2010; Phebus et al., 2011; Trainer et al., 2009). Previously, nucleation modelling in Martian conditions used only constant values of this so-called contact parameter, provided by the few studies published on the topic. The new studies paved the way for a potentially more realistic way of predicting ice crystal formation in the Martian environment. However, the caveat of these studies (Iraci et al., 2010; Phebus et al., 2011) was the limited temperature range, which prevents using the provided (linear) equations for the temperature dependence of the contact parameter in all conditions of cloud formation on Mars. One wide-temperature-range deposition-mode nucleation dataset exists (Trainer et al., 2009), but the substrate used was silicon, which cannot realistically mimic the most abundant ice nucleus on Mars, mineral dust. Nevertheless, thanks to measurements spanning 150 to 240 K, this dataset revealed that the behaviour of the contact parameter as a function of temperature is exponential rather than the linear form suggested by previous work. We have combined the previous findings to provide realistic and practical formulae for application in nucleation and atmospheric models. We have analysed the three cited datasets using a Markov chain Monte Carlo (MCMC) method, which allows us to test and evaluate different functional forms for the temperature dependence of the contact parameter. We perform a data inversion by finding the best fit to the measured data simultaneously at all points for different functional forms of the temperature dependence of the contact angle m(T). The method uses a full nucleation model (Määttänen et al., 2005; Vehkamäki et al., 2007) to calculate the observables at each data point.
We suggest one new m(T) dependence and test several others. Two of these may be used to avoid unphysical behaviour (m > 1) when m(T) is implemented in heterogeneous nucleation and cloud models. However, more measurements are required to fully constrain the m(T) dependencies. We show the importance of wide-temperature-range datasets for constraining the asymptotic behaviour of m(T), and we call for more experiments over a large temperature range, with well-defined particle sizes or size distributions, for different IN types and nucleating vapours. This study (Määttänen and Douspis, 2014) provides a new framework for analysing heterogeneous nucleation datasets. The results provide, within the limits of the available datasets, well-behaved m(T) formulations for nucleation and cloud modelling. Iraci, L. T., et al. (2010). Icarus 210, 985-991. Määttänen, A., et al. (2005). J. Geophys. Res. 110, E02002. Määttänen, A. and Douspis, M. (2014). GeoResJ 3-4, 46-55. Phebus, B. D., et al. (2011). J. Geophys. Res. 116, 4009. Trainer, M. G., et al. (2009). J. Phys. Chem. C 113, 2036-2040. Vehkamäki, H., et al. (2007). Atmos. Chem. Phys. 7, 309-313.
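The MCMC inversion described above can be sketched in miniature with a random-walk Metropolis sampler fitting an assumed exponential contact-parameter law m(T) to synthetic data. The functional form, parameter values, noise level, and step sizes below are illustrative assumptions, not the formulations or data of the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed exponential form for the contact parameter; the asymptote m -> 1
# at high temperature keeps the parameter physical (m <= 1).
def m_model(T, a, b):
    return 1.0 - a * np.exp(-b * (T - 150.0))

# Synthetic "measurements" over a wide temperature range (150-240 K).
a_true, b_true, sigma = 0.5, 0.03, 0.01
T_obs = np.linspace(150.0, 240.0, 10)
m_obs = m_model(T_obs, a_true, b_true) + rng.normal(0.0, sigma, T_obs.size)

def log_like(a, b):
    r = m_obs - m_model(T_obs, a, b)
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis: propose a small step, accept with prob min(1, L'/L).
a, b = 0.4, 0.02
ll = log_like(a, b)
chain = []
for _ in range(20000):
    ap, bp = a + rng.normal(0.0, 0.02), b + rng.normal(0.0, 0.002)
    llp = log_like(ap, bp)
    if np.log(rng.random()) < llp - ll:   # Metropolis acceptance test
        a, b, ll = ap, bp, llp
    chain.append((a, b))
post = np.array(chain[5000:])             # discard burn-in
a_est, b_est = post.mean(axis=0)
print(a_est, b_est)
```

Because the assumed form saturates at m = 1, the fitted law stays physical at all temperatures, which is the practical point the study makes about well-behaved m(T) formulations.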
NASA Astrophysics Data System (ADS)
Andricioaei, Ioan; Straub, John E.; Voter, Arthur F.
2001-04-01
The "Smart Walking" Monte Carlo algorithm is examined. In general, due to a bias imposed by the interbasin trial move, the algorithm does not satisfy detailed balance. While it has been shown that it can provide good estimates of equilibrium averages for certain potentials, for other potentials the estimates are poor. A modified version of the algorithm, Smart Darting Monte Carlo, which obeys the detailed balance condition, is proposed. Calculations on a one-dimensional model potential, on a Lennard-Jones cluster and on the alanine dipeptide demonstrate the accuracy and promise of the method for deeply quenched systems.
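The essential point above, that an interbasin jump move preserves detailed balance only if the proposal is symmetric and is filtered through the usual Metropolis test, can be illustrated on a one-dimensional double well. This is a toy stand-in for the systems in the paper: the potential, temperature, dart length, and move probabilities below are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def V(x):
    # Double-well potential with minima at x = +/- 1 and a barrier at x = 0.
    return (x * x - 1.0) ** 2

beta = 3.0          # inverse temperature (illustrative)
x = -1.0            # start in the left basin
samples = []
for _ in range(100000):
    if rng.random() < 0.1:
        # "Darting" move between basins: the displacement set {-2, +2} is
        # symmetric, so forward and reverse proposals are equally likely.
        xp = x + rng.choice([-2.0, 2.0])
    else:
        # Ordinary local move.
        xp = x + rng.normal(0.0, 0.3)
    # Metropolis test restores detailed balance for both move types.
    if np.log(rng.random()) < -beta * (V(xp) - V(x)):
        x = xp
    samples.append(x)

samples = np.array(samples)
frac_right = np.mean(samples > 0.0)   # should be ~0.5 for a symmetric well
```

With the symmetric dart set both basins are sampled with equal weight; biasing the dart direction without a compensating acceptance factor would break this, which is the flaw of the original "Smart Walking" scheme.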
Wong, Un-Hong; Wu, Yunzhao; Wong, Hon-Cheng; Liang, Yanyan; Tang, Zesheng
2014-01-01
In this paper, we model the reflectance of the lunar regolith by a new method combining Monte Carlo ray tracing and Hapke's model. Existing modeling methods exploit either a radiative transfer model or a geometric optical model. However, the measured data from the Interference Imaging spectrometer (IIM) on an orbiter are affected not only by the composition of minerals but also by environmental factors. These factors cannot be well addressed by a single model alone. Our method implements Monte Carlo ray tracing to simulate large-scale effects, such as reflection from the topography of the lunar soil, and Hapke's model to calculate the reflection intensity due to internal scattering within the particles of the lunar soil. Therefore, both large-scale and microscale effects are considered in our method, providing a more accurate modeling of the reflectance of the lunar regolith. Simulation results using the Lunar Soil Characterization Consortium (LSCC) data and the Chang'E-1 elevation map show that our method is effective and useful. We have also applied our method to Chang'E-1 IIM data to remove the influence of lunar topography on the reflectance of the lunar soil and to generate more realistic visualizations of the lunar surface. PMID:24526892
Clouvas, A; Xanthos, S; Antonopoulos-Domis, M; Silva, J
1998-02-01
A Monte Carlo based method for the conversion of an in-situ gamma-ray spectrum obtained with a portable Ge detector to photon flux energy distribution is proposed. The spectrum is first stripped of the partial absorption and cosmic-ray events leaving only the events corresponding to the full absorption of a gamma ray. Applying to the resulting spectrum the full absorption efficiency curve of the detector determined by calibrated point sources and Monte Carlo simulations, the photon flux energy distribution is deduced. The events corresponding to partial absorption in the detector are determined by Monte Carlo simulations for different incident photon energies and angles using the CERN's GEANT library. Using the detector's characteristics given by the manufacturer as input it is impossible to reproduce experimental spectra obtained with point sources. A transition zone of increasing charge collection efficiency has to be introduced in the simulation geometry, after the inactive Ge layer, in order to obtain good agreement between the simulated and experimental spectra. The functional form of the charge collection efficiency is deduced from a diffusion model. PMID:9450590
NASA Astrophysics Data System (ADS)
Salo, Heikki; Karjalainen, Raine
2003-08-01
The scattering properties of particulate rings with volume filling factors in the interval D=0.001-0.3 are studied, with photometric Monte Carlo ray tracing simulations combining the advantages of direct (photons followed from the source) and indirect methods (brightness as seen from the observing direction). Besides vertically homogeneous models, ranging from monolayers to classical many-particle-thick rings, particle distributions obtained from dynamical simulations are studied, possessing a nonuniform vertical profile and a power law distribution of particle sizes. Self-gravity is not included, to assure homogeneity in planar directions. Our main goal is to check whether the moderately flattened ring models predicted by dynamical simulations (with central plane D>0.1) are consistent with the basic photometric properties of Saturn's rings seen in ground-based observations, including the brightening near zero phase angle (opposition effect), and the brightening of the B-ring with increasing elevation angle (tilt effect). Our photometric simulations indicate that dense rings are typically brighter in reflected light than those with D→0, due to enhanced single scattering. For a vertically illuminated layer of identical particles this enhancement amounts, at intermediate viewing elevations, to roughly 1+2D. Increased single scattering is also obtained for low elevation illumination, further augmented at low phase angles α by the opposition brightening when D increases: the simulated opposition effect agrees very well with the Lumme and Bowell (1981, Astron. J. 86, 1694-1704) theoretical formula. For large α the total intensity may also decrease, due to the reduced amount of multiple scattering. For the low (α = 13°) and high (α = 155°) phase angle geometries analyzed in Dones et al. (1993, Icarus 105, 184-215) the brightness change for D=0.1 amounts to 20% and -17%, respectively.
In the case of an extended size distribution, dynamical simulations indicate that the smallest particles typically occupy a layer several times thicker than the largest particles. Even if the large particles form a dynamically dense system, a narrow opposition peak can arise due to mutual shadowing among the small particles: for example, a size distribution extending about two decades can account for the observed, about 1° wide, opposition peak solely in terms of mutual shadowing. The reduced width of the opposition peak for an extended size distribution is in accordance with Hapke's (1986, Icarus 67, 264-280) treatment for semi-infinite layers. Due to the vertical profile and particle size distribution, the photometric behavior is sensitive to the viewing elevation: this can account for the tilt effect of the B-ring, as the dense and thus bright central parts of the ring become better visible at larger elevations, whereas at smaller elevations mainly the low volume density upper layers are visible. Since multiple scattering is not involved, the explanation works also for albedo well below unity. Inclusion of nonzero volume density also helps to model some of the Voyager observations. For example, the discrepancy between predicted and observed brightness at large phase angles for much of the A-ring (Dones et al., 1993, Icarus 105, 184-215) is removed when the enhanced low-α single scattering and reduced large-α multiple scattering are allowed for. Also, a model with vertical thickness increasing with saturnocentric distance offers at least a qualitative explanation for the observed contrast reversal between the inner and outer A-ring in low and high phase Voyager images. Differences in local size distribution, and thus in the effective D, may also account for the contrast reversal in resonance sites.
AN ASSESSMENT OF MCNP WEIGHT WINDOWS
J. S. HENDRICKS; C. N. CULBERTSON
2000-01-01
The weight window variance reduction method in the general-purpose Monte Carlo N-Particle radiation transport code MCNP™ has recently been rewritten. In particular, it is now possible to generate weight window importance functions on a superimposed mesh, eliminating the need to subdivide geometries for variance reduction purposes. Our assessment addresses the following questions: (1) Does the new MCNP4C treatment utilize weight windows as well as the former MCNP4B treatment? (2) Does the new MCNP4C weight window generator generate importance functions as well as MCNP4B? (3) How do superimposed mesh weight windows compare to cell-based weight windows? (4) What are the shortcomings of the new MCNP4C weight window generator? Our assessment was carried out with five neutron and photon shielding problems chosen for their demanding variance reduction requirements. The problems were an oil well logging problem, the Oak Ridge fusion shielding benchmark problem, a photon skyshine problem, an air-over-ground problem, and a sample problem for variance reduction.
Betzler, Benjamin R.; Kiedrowski, Brian C.; Brown, Forrest B.; Martin, William R.
2015-08-28
The time-dependent behavior of the energy spectrum in neutron transport was investigated with a formulation, based on continuous-time Markov processes, for computing α eigenvalues and eigenvectors in an infinite medium. In this study, a research Monte Carlo code called “TORTE” (To Obtain Real Time Eigenvalues) was created and used to estimate elements of a transition rate matrix. TORTE is capable of using both multigroup and continuous-energy nuclear data, and verification was performed. Eigenvalue spectra for infinite homogeneous mixtures were obtained, and an eigenfunction expansion was used to investigate transient behavior of the neutron energy spectrum.
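The continuous-time Markov formulation above reduces to an eigenvalue problem dn/dt = A n, where the off-diagonal entries of the transition rate matrix A are transfer and production rates and the diagonal entries are total loss rates; the α eigenvalues are the eigenvalues of A. A two-group infinite-medium toy matrix (all rates below are invented for illustration, not TORTE data) shows the idea:

```python
import numpy as np

# Two-group rates (per unit time), illustrative values only:
# group 1 = fast, group 2 = thermal.
abs1, down = 1.0, 2.0   # fast absorption and down-scattering rates
abs2 = 1.0              # thermal absorption rate
nu_f2 = 1.2             # fast neutrons produced per unit time per thermal neutron

# dn/dt = A n: diagonal holds -(total loss), off-diagonals hold production/transfer.
A = np.array([[-(abs1 + down), nu_f2],
              [down,           -abs2]])

# The alpha spectrum is the eigenvalue spectrum of A; the fundamental mode
# is the eigenvalue with the largest real part.
alphas = np.sort(np.linalg.eigvals(A).real)[::-1]
alpha0 = alphas[0]
print(alphas)
```

For these rates the fundamental α is negative, i.e., the toy system is subcritical and every mode of the neutron energy spectrum decays; a Monte Carlo code like the one described estimates the elements of A from sampled transitions instead of writing them down analytically.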
Monte Carlo Simulation for Perusal and Practice.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.
Many problems in statistics can be investigated meaningfully through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are
Granville, DA; Sawakuchi, GO
2014-08-15
In this work, we demonstrate inconsistencies in commonly used Monte Carlo methods of scoring linear energy transfer (LET) in proton therapy beams. In particle therapy beams, the LET is an important parameter because the relative biological effectiveness (RBE) depends on it. LET is often determined using Monte Carlo techniques. We used a realistic Monte Carlo model of a proton therapy nozzle to score proton LET in spread-out Bragg peak (SOBP) depth-dose distributions. We used three different scoring and calculation techniques to determine average LET at varying depths within a 140 MeV beam with a 4 cm SOBP and a 250 MeV beam with a 10 cm SOBP. These techniques included fluence-weighted (Φ-LET) and dose-weighted average (D-LET) LET calculations from: 1) scored energy spectra converted to LET spectra through a lookup table, 2) directly scored LET spectra and 3) accumulated LET scored ‘on-the-fly’ during simulations. All protons (primary and secondary) were included in the scoring. Φ-LET was found to be less sensitive to changes in scoring technique than D-LET. In addition, the spectral scoring methods were sensitive to low-energy (high-LET) cutoff values in the averaging. Using cutoff parameters chosen carefully for consistency between techniques, we found variations in Φ-LET values of up to 1.6% and variations in D-LET values of up to 11.2% for the same irradiation conditions, depending on the method used to score LET. Variations were largest near the end of the SOBP, where the LET and energy spectra are broader.
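The two averages compared above have simple closed forms once a LET spectrum is in hand: the fluence-weighted mean is ΣφᵢLᵢ/Σφᵢ, while the dose-weighted mean weights each bin by its dose contribution φᵢLᵢ, giving ΣφᵢLᵢ²/ΣφᵢLᵢ. A toy spectrum (bin values invented for illustration) reproduces the paper's qualitative finding that D-LET reacts much more strongly than Φ-LET to a high-LET cutoff:

```python
import numpy as np

# Hypothetical discrete LET spectrum: bin centers (keV/um) and fluences.
L   = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
phi = np.array([100.0, 80.0, 40.0, 10.0, 2.0])

def let_averages(L, phi):
    phi_let = np.sum(phi * L) / np.sum(phi)         # fluence-weighted average
    d_let = np.sum(phi * L**2) / np.sum(phi * L)    # dose-weighted: dose per bin ~ phi*L
    return phi_let, d_let

f_all, d_all = let_averages(L, phi)
# Apply a high-LET cutoff by discarding the sparsely populated top bin.
f_cut, d_cut = let_averages(L[:-1], phi[:-1])
print(f_all, d_all, f_cut, d_cut)
```

Even though the removed bin carries under 1% of the fluence, it carries a large share of the φL² sum, so the dose-weighted average shifts by a much larger relative amount than the fluence-weighted one.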
Hunt, J G; Dantas, B M; Lourenço, M C; Azeredo, A M G
2003-01-01
A Monte Carlo program, Visual Monte Carlo (VMC) in vivo, was written to simulate photon transport through an anthropomorphic phantom and to detect radiation emitted from the phantom. VMC in vivo uses a voxel phantom provided by Yale University and may be used to calibrate in vivo systems. This paper shows the application of VMC in vivo to the measurement of 241Am deposited simultaneously in the thoracic region, the bones, the liver and in the rest of the body. The percentages of 241Am in the four body regions were calculated using the biokinetic models established by the ICRP, for a single intake via inhalation. The four regions of the voxel phantom were then 'contaminated' in accordance with the calculated percentages. The calibration factor of the in vivo system was then obtained. This procedure was repeated for the radionuclide distributions obtained 5, 30, 120, 240 and 360 days after intake. VMC in vivo was also used to calculate the calibration factor of the in vivo system in which the radionuclide was assumed to be deposited only in the lung, as is normally done. The activities calculated with the radionuclide distributed in the four body regions as a factor of time, and the activities calculated with the radionuclide deposited in the lung only are compared. PMID:14527025
NASA Astrophysics Data System (ADS)
Tsai, Hui-Yu; Lin, Yung-Chieh; Tyan, Yeu-Sheng
2014-11-01
The purpose of this study was to evaluate organ doses for individual patients undergoing interventional transcatheter arterial embolization (TAE) for hepatocellular carcinoma (HCC) using measurement-based Monte Carlo simulation and adaptive organ segmentation. Five patients were enrolled in this study after institutional ethical approval and informed consent. Gafchromic XR-RV3 films were used to measure the entrance surface dose and to reconstruct the nonuniform fluence distribution field as the input data for the Monte Carlo simulation. XR-RV3 films were chosen over XR-RV2 films because of their lower energy dependence. To calculate organ doses, each patient's three-dimensional dose distribution was incorporated into CT DICOM images, with image segmentation performed using thresholding and k-means clustering. Organ doses for all patients were estimated. Our dose evaluation system not only evaluates entrance surface doses based on measurements, but also evaluates the 3D dose distribution within patients using simulations. When film measurements are unavailable, the peak skin dose (a fraction between 0.68 and 0.82 of the cumulative dose) can be calculated from the cumulative dose obtained from TAE dose reports. Successful implementation of this dose evaluation system will aid radiologists and technologists in determining the actual dose distributions within patients undergoing TAE.
NASA Astrophysics Data System (ADS)
Luo, Zhihuan; Berger, Casey E.; Drut, Joaquín E.
2016-03-01
We study harmonically trapped, unpolarized fermion systems with attractive interactions in two spatial dimensions with spin degeneracies Nf = 2 and 4 and N/Nf = 1, 3, 5, and 7 particles per flavor. We carry out our calculations using our recently proposed quantum Monte Carlo method on a nonuniform lattice. We report on the ground-state energy and contact for a range of couplings, as determined by the binding energy of the two-body system, and show explicitly how the physics of the Nf-body sector dominates as the coupling is increased.
Park, H.; Densmore, J. D.; Wollaber, A. B.; Knoll, D. A.; Rauenzahn, R. M.
2013-07-01
We have developed a moment-based scale-bridging algorithm for thermal radiative transfer problems. The algorithm takes the form of well-known nonlinear-diffusion acceleration which utilizes a low-order (LO) continuum problem to accelerate the solution of a high-order (HO) kinetic problem. The coupled nonlinear equations that form the LO problem are efficiently solved using a preconditioned Jacobian-free Newton-Krylov method. This work demonstrates the applicability of the scale-bridging algorithm with a Monte Carlo HO solver and reports the computational efficiency of the algorithm in comparison to the well-known Fleck-Cummings algorithm. (authors)
Prokhorov, Alexander
2012-05-01
This paper proposes a three-component bidirectional reflectance distribution function (3C BRDF) model consisting of diffuse, quasi-specular, and glossy components for calculation of effective emissivities of blackbody cavities and then investigates the properties of the new reflection model. The particle swarm optimization method is applied for fitting a 3C BRDF model to measured BRDFs. The model is incorporated into the Monte Carlo ray-tracing algorithm for isothermal cavities. Finally, the paper compares the results obtained using the 3C model and the conventional specular-diffuse model of reflection. PMID:22614407
Quantum Gibbs ensemble Monte Carlo
Fantoni, Riccardo; Moroni, Saverio
2014-09-21
We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of {sup 4}He in two dimensions.
NASA Astrophysics Data System (ADS)
Whitmore, Alexander Jason
Concentrating solar power systems are currently the predominant solar power technology for generating electricity at the utility scale. The central receiver system, which is a concentrating solar power system, uses a field of mirrors to concentrate solar radiation onto a receiver where a working fluid is heated to drive a turbine. Current central receiver systems operate on a Rankine cycle, which has a large demand for cooling water. This demand for water presents a challenge for the current central receiver systems as the ideal locations for solar power plants have arid climates. An alternative to the current receiver technology is the small particle receiver. The small particle receiver has the potential to produce working fluid temperatures suitable for use in a Brayton cycle which can be more efficient when pressurized to 0.5 MPa. Using a fused quartz window allows solar energy into the receiver while maintaining a pressurized small particle receiver. In this thesis, a detailed numerical investigation for a spectral, three dimensional, cylindrical glass window for a small particle receiver was performed. The window is 1.7 meters in diameter and 0.0254 meters thick. There are three Monte Carlo Ray Trace codes used within this research. The first MCRT code, MIRVAL, was developed by Sandia National Laboratory and modified by a fellow San Diego State University colleague Murat Mecit. This code produces the solar rays on the exterior surface of the window. The second MCRT code was developed by Steve Ruther and Pablo Del Campo. This code models the small particle receiver, which creates the infrared spectral direction flux on the interior surface of the window used in this work. The third MCRT, developed for this work, is used to model radiation heat transfer within the window itself and is coupled to an energy equation solver to produce a temperature distribution. The MCRT program provides a source term to the energy equation. 
This, in turn, produces a new temperature field for the MCRT program; together the equations are solved iteratively. These iterations repeat until convergence is reached for a steady state temperature field. The energy equation was solved using a finite volume method. The window's thermal conductivity is modeled as a function of temperature. This thermal model is used to investigate the effects of different materials, receiver geometries, interior convection coefficients and exterior convection coefficients. To prevent devitrification and the ultimate failure of the window, the window needs to stay below the devitrification temperature of the material. In addition, the temperature gradients within the window need to be kept to a minimum to prevent thermal stresses. A San Diego State University colleague, E-Fann Saung, uses these temperature maps to ensure that the mounting of the window does not produce thermal stresses that can cause cracking in the brittle fused quartz. The simulations in this thesis show that window temperatures are below the devitrification temperature of the window when there are cooling jets on both surfaces of the window. Natural convection on the exterior window surface was explored and does not provide adequate cooling; therefore forced convection is required. Due to the low thermal conductivity of the window, the edge mounting thermal boundary condition has little effect on the maximum temperature of the window. The simulations also showed that the window absorbed less than 1% of the incoming solar flux but closer to 20% of the infrared radiation emitted by the receiver. The main source of absorbed power in the window is located directly on the interior surface of the window, where the infrared radiation is absorbed.
The geometry of the receiver has a large impact on the amount of emitted power that reaches the interior surface of the window, and using a conical receiver dramatically reduces the receiver's infrared flux on the window. The importance of internal emission is also explored in this research: internal emission produces a more even emission field throughout the receiver than surface emission alone. Because the majority of the receiver's infrared re-radiation is absorbed right at the interior surface, the surface-emission-only approximation produces lower maximum temperatures.
Domin, D.; Braida, Benoit; Lester Jr., William A.
2008-05-30
This study explores the use of breathing orbital valence bond (BOVB) trial wave functions for diffusion Monte Carlo (DMC). The approach is applied to the computation of the carbon-hydrogen (C-H) bond dissociation energy (BDE) of acetylene. DMC with BOVB trial wave functions yields a C-H BDE of 132.4 {+-} 0.9 kcal/mol, which is in excellent accord with the recommended experimental value of 132.8 {+-} 0.7 kcal/mol. These values are to be compared with DMC results obtained with single determinant trial wave functions, using Hartree-Fock orbitals (137.5 {+-} 0.5 kcal/mol) and local spin density (LDA) Kohn-Sham orbitals (135.6 {+-} 0.5 kcal/mol).
Ustinov, E A; Do, D D
2012-04-01
We present results of the application of the kinetic Monte Carlo technique to simulate argon adsorption on a graphite surface at temperatures below and above the triple point. We show that below the triple point the densification of the adsorbed layer with loading results in the rearrangement of molecules to form a hexagonal structure, which is accompanied by the release of additional heat associated with this disorder-order transition. This appears as a spike in the plot of the heat of adsorption versus loading at the completion of a monolayer on the surface. To describe the details of the adsorbed phase, we analyzed thermodynamic properties and the effects of temperature on the order-disorder transition of the first layer. PMID:22482575
NASA Astrophysics Data System (ADS)
Kai, Takeshi; Yokoya, Akinari; Ukai, Masatoshi; Fujii, Kentaro; Watanabe, Ritsuko
2015-10-01
The thermalization length and spatial distribution of electrons in liquid water were simulated for initial electron energies ranging from 0.1 eV to 100 keV using a dynamic Monte Carlo code. The results showed that electrons were decelerated for thermalization over a longer time period than was previously predicted. This long thermalization time significantly contributed to the series of processes from initial ionization to hydration. We further studied the particular deceleration process of electrons at an incident energy of 1 eV, focusing on the temporal evolution of total track length, mean traveling distance, and energy distributions of decelerating electrons. The initial prehydration time and thermalization periods were estimated to be approximately 50 and 220 fs, respectively, indicating that the initial prehydration began before or contemporaneously with the thermal equilibrium. Based on these results, the prehydrated electrons were suggested to play an important role during multiple DNA damage induction.
Han, Tao; Mikell, Justin K.; Salehpour, Mohammad; Mourtada, Firas
2011-01-01
Purpose: The deterministic Acuros XB (AXB) algorithm was recently implemented in the Eclipse treatment planning system. The goal of this study was to compare AXB performance to Monte Carlo (MC) and two standard clinical convolution methods: the anisotropic analytical algorithm (AAA) and the collapsed-cone convolution (CCC) method. Methods: Homogeneous water and multilayer slab virtual phantoms were used for this study. The multilayer slab phantom had three different materials, representing soft tissue, bone, and lung. Depth dose and lateral dose profiles from AXB v10 in Eclipse were compared to AAA v10 in Eclipse, CCC in Pinnacle3, and EGSnrc MC simulations for 6 and 18 MV photon beams with open fields for both phantoms. In order to further reveal the dosimetric differences between AXB and AAA or CCC, three-dimensional (3D) gamma index analyses were conducted in slab regions and subregions defined by AAPM Task Group 53. Results: The AXB calculations were found to be closer to MC than both AAA and CCC for all the investigated plans, especially in bone and lung regions. The average differences of depth dose profiles between MC and AXB, AAA, or CCC were within 1.1, 4.4, and 2.2%, respectively, for all fields and energies. More specifically, those differences in the bone region were up to 1.1, 6.4, and 1.6%; in the lung region they were up to 0.9, 11.6, and 4.5% for AXB, AAA, and CCC, respectively. AXB was also found to have better dose predictions than AAA and CCC at the tissue interfaces where backscatter occurs. 3D gamma index analyses (percent of dose voxels passing a 2%/2 mm criterion) showed that the dose differences between AAA and AXB are significant (under 60% passed) in the bone region for all field sizes of 6 MV and in the lung region for most field sizes of both energies.
The difference between AXB and CCC was generally small (over 90% passed) except in the lung region for 18 MV 10 × 10 cm² fields (over 26% passed) and in the bone region for 5 × 5 and 10 × 10 cm² fields (over 64% passed). With the criterion relaxed to 5%/2 mm, the pass rates were over 90% for both AAA and CCC relative to AXB for all energies and fields, with the exception of the AAA 18 MV 2.5 × 2.5 cm² field, which still did not pass. Conclusions: In heterogeneous media, AXB dose prediction ability appears to be comparable to MC and superior to current clinical convolution methods. The dose differences between AXB and AAA or CCC are mainly in the bone, lung, and interface regions. The spatial distributions of these differences depend on the field sizes and energies. PMID:21776802
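The gamma index analysis used above rests on a simple rule: a reference point passes if some nearby evaluated point agrees with it within both a dose tolerance and a distance-to-agreement. A one-dimensional sketch (the profiles and tolerances below are invented; the paper's analysis is 3D and per-region) shows the mechanics:

```python
import numpy as np

def gamma_pass_rate(x, d_ref, d_eval, dose_tol=0.02, dta=2.0):
    """1D gamma pass rate with a global dose criterion.

    dose_tol is a fraction of the maximum reference dose; dta is the
    distance-to-agreement in the same units as x (here mm)."""
    dd = dose_tol * d_ref.max()
    gammas = []
    for xi, di in zip(x, d_ref):
        # gamma at this point: minimum combined dose/distance distance
        # to any point of the evaluated distribution.
        g2 = ((d_eval - di) / dd) ** 2 + ((x - xi) / dta) ** 2
        gammas.append(np.sqrt(g2.min()))
    return np.mean(np.array(gammas) <= 1.0)

x = np.arange(0.0, 100.0, 1.0)               # positions in mm
d_ref = np.exp(-((x - 50.0) / 20.0) ** 2)    # toy dose profile
rate_same = gamma_pass_rate(x, d_ref, d_ref)

# Flat profiles isolate the dose criterion: a uniform 5% error fails a
# 2% criterion everywhere, while a uniform 1% error passes everywhere.
d_flat = np.ones_like(x)
rate_fail = gamma_pass_rate(x, d_flat, 1.05 * d_flat)
rate_ok = gamma_pass_rate(x, d_flat, 1.01 * d_flat)
```

Relaxing dose_tol from 0.02 to 0.05 is exactly the move from a 2%/2 mm to a 5%/2 mm criterion described in the results.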
Yoshida, Kenichiro; Nishidate, Izumi
2014-01-01
To rapidly derive a result for diffuse reflectance from a multilayered model that is equivalent to that of a Monte-Carlo simulation (MCS), we propose a combination of a layered white MCS and the adding-doubling method. For slabs with various scattering coefficients assuming a certain anisotropy factor and without absorption, we calculate the transition matrices for light flow with respect to the incident and exit angles. From this series of precalculated transition matrices, we can calculate the transition matrices for the multilayered model with the specific anisotropy factor. The relative errors of the results of this method compared to a conventional MCS were less than 1%. We successfully used this method to estimate the chromophore concentration from the reflectance spectrum of a numerical model of skin and in vivo human skin tissue. PMID:25426319
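The adding step at the heart of the adding-doubling method has a compact scalar form: for two symmetric slabs with reflectances and transmittances (r1, t1) and (r2, t2), summing the geometric series of inter-slab bounces gives r = r1 + t1·r2·t1/(1 − r1·r2) and t = t1·t2/(1 − r1·r2). The paper works with angle-resolved transition matrices; the scalar version below, with invented coefficients, just shows the principle and the doubling trick.

```python
def add_layers(layer1, layer2):
    """Combine two symmetric slabs via the scalar adding equations."""
    r1, t1 = layer1
    r2, t2 = layer2
    denom = 1.0 - r1 * r2          # sums the infinite series of bounces
    return (r1 + t1 * r2 * t1 / denom, t1 * t2 / denom)

def double(layer, n_doublings):
    """'Doubling': compose a slab with itself, doubling thickness each pass."""
    for _ in range(n_doublings):
        layer = add_layers(layer, layer)
    return layer

thin = (0.05, 0.95)        # non-absorbing thin slab: r + t = 1
thick = double(thin, 5)    # optically 32x thicker slab
```

For non-absorbing slabs the adding equations conserve energy exactly (r + t stays 1 after every composition), which is a convenient sanity check when building up a multilayered model from precalculated single-layer results.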
Errea, L.F.; Illescas, Clara; Mendez, L.; Riera, A.; Suarez, J.; Pons, B.
2004-11-01
The accuracy of classical trajectory Monte Carlo treatments of electron capture is studied by focusing on collisions of Li{sup 3+} and Ne{sup 10+} projectiles with H(1s) targets, treated in a separate paper. We examine how the choice of the initial distribution, and the partition of phase space usually employed to calculate partial cross sections, influence the accuracy of the method. With respect to the former, an improvement over the single-microcanonical choice is advisable, but further refinements based on the electron density are not worthwhile. Regarding the latter, we illustrate the accuracy of the 'binning' method for n>2. We show that the classical and semiclassical mechanisms are essentially the same, although at low velocities the method is unable to describe the fall of the cross section.
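The 'binning' of classical final states into quantum levels is commonly done via the classical principal quantum number and the standard bin edges of Becker and MacKellar; the sketch below (atomic units) uses that common prescription, not necessarily the authors' exact phase-space partition, and the function names are illustrative.

```python
def classical_n(binding_energy, Z):
    """Classical principal quantum number n_c = Z / sqrt(2 E_b),
    with the binding energy E_b in atomic units."""
    return Z / (2.0 * binding_energy) ** 0.5

def bin_n(n_c):
    """Assign n_c to the integer level n whose bin satisfies
    ((n-1)(n-1/2)n)^(1/3) <= n_c < (n(n+1/2)(n+1))^(1/3)."""
    n = 1
    # bins are contiguous from 0, so find the first upper edge exceeding n_c
    while n_c >= (n * (n + 0.5) * (n + 1)) ** (1.0 / 3.0):
        n += 1
    return n
```

For example, a classical electron bound by 0.5 a.u. around a Z = 1 core has n_c = 1 and is binned into n = 1, as expected for H(1s).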
NASA Astrophysics Data System (ADS)
Haryanto, Freddy
2010-06-01
In a medical linear accelerator, the electron energy parameters play an important role in producing the electron beam. The percentage depth dose of an electron beam depends not only on the value of the electron energy but also on the type of its energy spectrum. The aim of this work is to investigate the effect of the electron energy parameters on the percentage depth dose of the electron beam. The Monte Carlo method was chosen for this project because of its strength in simulating random processes such as particle transport in matter. The DOSXYZnrc user code was used to simulate electron transport in a water phantom. Two aspects of the electron energy parameters were investigated with Monte Carlo simulations: in the first, the electron energy value and its spectrum were varied; in the second, the source geometry was considered, with a parallel beam and a point source chosen as the two geometries. Percentage depth dose measurements were conducted with an ionization chamber for comparison with the simulations. The results are presented not only in terms of the shape of the simulated and measured percentage depth dose curves but also in terms of other features of the curves. The comparison between simulation and measurement shows that the curve shape depends on the electron energy value and the type of its energy spectrum. The electron energy value affects the depth of maximum dose.
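A percentage depth dose curve and its depth of maximum dose can be extracted from simulated or measured dose-versus-depth data in a few lines; `percentage_depth_dose` is an illustrative helper, not part of DOSXYZnrc.

```python
import numpy as np

def percentage_depth_dose(depths, dose):
    """Normalize a depth-dose curve to its maximum (PDD, in %) and
    return the depth of maximum dose d_max."""
    pdd = 100.0 * dose / dose.max()
    d_max = depths[np.argmax(dose)]
    return pdd, d_max
```

Applied to a simulated dose array on a depth grid, this yields the normalized curve whose shape (build-up, d_max, fall-off) is compared between simulation and measurement.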
Asl, Mahsa Noori; Sadremomtaz, Alireza; Bitarafan-Rajabi, Ahmad
2013-01-01
Compton-scattered photons included within the photopeak pulse-height window degrade SPECT images both qualitatively and quantitatively. The purpose of this study is to evaluate and compare six scatter correction methods based on setting energy windows in the 99mTc spectrum. SIMIND Monte Carlo simulation is used to generate the projection images from a cold-sphere hot-background phantom. For the evaluation of the different scatter correction methods, three assessment criteria are considered: image contrast, signal-to-noise ratio (SNR), and relative noise of the background (RNB). Except for the dual-photopeak window (DPW) method, the image contrast of the five cold spheres is improved in the range of 2.7-26%. Among the methods considered, two show a nonuniform correction performance. The RNB of the scatter correction methods ranges from a minimum of 0.03 for the DPW method to a maximum of 0.0727 for the three-energy-window (TEW) method using the trapezoidal approximation. The TEW method using the triangular approximation is proposed as the most appropriate correction method because of its ease of implementation, its good improvement of image contrast and SNR for the five cold spheres, and its low noise level. PMID:24672154
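The TEW estimate uses the counts in two narrow sub-windows flanking the photopeak to approximate the scatter area under the main window. A minimal sketch of the trapezoidal and triangular approximations follows; the function names and the count/window-width arguments are hypothetical placeholders.

```python
def tew_scatter(c_low, c_high, w_sub, w_main, triangular=False):
    """Scatter counts under the photopeak estimated as the area of a
    trapezoid whose heights are the count densities (counts per keV) in
    the two narrow sub-windows; the triangular variant takes the upper
    sub-window side as zero."""
    if triangular:
        c_high = 0.0
    return (c_low / w_sub + c_high / w_sub) * w_main / 2.0

def primary_counts(c_main, scatter):
    """Scatter-corrected (primary) counts in the main window."""
    return max(c_main - scatter, 0.0)
```

For a 20 keV main window and 2 keV sub-windows, the trapezoidal estimate averages the two sub-window count densities over the main window width; the triangular variant is simpler and, per the study, gives the preferred noise/contrast trade-off.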
1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO
T. EVANS; ET AL
2000-08-01
We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.
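The spatially discrete diffusion solutions that such a hybrid scheme couples to Monte Carlo particles can be illustrated with a plain finite-difference solve of a 1-D diffusion equation; this is a generic sketch under simple assumptions (steady state, Dirichlet boundaries, uniform source), not the EqDDMC discretization itself.

```python
import numpy as np

def solve_diffusion_1d(n, D=1.0, q=1.0):
    """Finite-difference solution of -D u'' = q on (0, 1) with
    u(0) = u(1) = 0, sampled on n interior points."""
    h = 1.0 / (n + 1)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2.0 * D / h**2        # diagonal of the tridiagonal operator
        if i > 0:
            A[i, i - 1] = -D / h**2     # coupling to left neighbor
        if i < n - 1:
            A[i, i + 1] = -D / h**2     # coupling to right neighbor
    return np.linalg.solve(A, np.full(n, q))
```

For constant q the second-order scheme reproduces the analytic parabola u(x) = q x (1 - x) / (2D) exactly at the nodes, a convenient verification analogous to checking a diffusion solver against slab benchmarks.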
Chen, X; Xing, L; Luxton, G; Bush, K; Azcona, J
2014-06-01
Purpose: Patient-specific QA for VMAT is incapable of providing full 3D dosimetric information and is labor intensive in the case of severe heterogeneities or small-aperture beams. The cloud-based Monte Carlo dose reconstruction method described here can perform the evaluation over the entire 3D space and rapidly reveal the source of discrepancies between measured and planned dose. Methods: This QA technique consists of two integral parts: measurement using a phantom containing an array of dosimeters, and a cloud-based voxel Monte Carlo algorithm (cVMC). After a VMAT plan was approved by a physician, a dose verification plan was created and delivered to the phantom using our Varian Trilogy or TrueBeam system. Actual delivery parameters (i.e., dose fraction, gantry angle, and MLC positions at control points) were extracted from Dynalog or trajectory files. Based on the delivery parameters, the 3D dose distribution in the phantom containing the detector array was recomputed using the Eclipse dose calculation algorithms (AAA and AXB) and cVMC. Comparison and gamma analysis were then conducted to evaluate the agreement between the measured, recomputed, and planned dose distributions. To test the robustness of this method, we examined several representative VMAT treatments. Results: (1) The accuracy of the cVMC dose calculation was validated via comparative studies. For cases that passed patient-specific QA using commercial dosimetry systems such as Delta4, MAPCheck, and the PTW Seven29 array, agreement between the cVMC-recomputed, Eclipse-planned, and measured doses was obtained with >90% of the points satisfying the 3%-and-3-mm gamma index criteria. (2) The cVMC method incorporating Dynalog files was effective in revealing the root causes of the dosimetric discrepancies between Eclipse-planned and measured doses and provided a basis for solutions.
Conclusion: The proposed method offers a highly robust and streamlined patient-specific QA tool and provides a feasible solution for the rapidly increasing use of VMAT treatments in the clinic.