A New Method for Neutron Capture Therapy (NCT) and Related Simulation by the MCNP4C Code
NASA Astrophysics Data System (ADS)
Mousavi Shirazi, Seyed Alireza; Taheri, Ali
2010-01-01
Neutron capture therapy (NCT) is regarded as one of the most important methods in medical science for treating certain severe cancers, so measures for control and protection are essential when this technique is used. Neutron therapy, one application of nuclear medicine to these diseases, is among the most important and effective methods for treating cancers. However, because fast neutrons are highly destructive (their damaging effect is roughly five times that of gamma photons), the energy absorbed by tissue (the absorbed dose) during neutron therapy must be measured accurately, both to protect against excess absorbed energy and to avoid damaging the remaining healthy tissues. In this article, a system for neutron therapy of the male liver has been simulated by the Monte Carlo method (MCNP4C code) and also analysed with an analytical method; the absorbed dose in this tissue has been obtained accurately for sources of different energies, and the results of the two methods have been compared.
Monte carlo simulation of x-ray spectra in diagnostic radiology and mammography using MCNP4C.
Ay, M R; Shahriari, M; Sarkar, S; Adib, M; Zaidi, H
2004-11-01
The general-purpose Monte Carlo N-particle radiation transport computer code (MCNP4C) was used for the simulation of x-ray spectra in diagnostic radiology and mammography. The electrons were transported until they slowed down and stopped in the target. Both bremsstrahlung and characteristic x-ray production were considered in this work. We focused on the simulation of various target/filter combinations to investigate the effect of tube voltage, target material and filter thickness on x-ray spectra in the diagnostic radiology and mammography energy ranges. The simulated x-ray spectra were compared with experimental measurements and with spectra calculated in IPEM report number 78. In addition, the anode heel effect and off-axis x-ray spectra were assessed for different anode angles and target materials, and the results were compared with EGS4-based Monte Carlo simulations and measured data. Quantitative evaluation of the differences between our Monte Carlo simulated and comparison spectra was performed using Student's t-test. Generally, there is good agreement between the simulated and comparison spectra, although there are systematic differences, especially in the K-characteristic x-ray intensity. Nevertheless, no statistically significant differences were observed between the IPEM spectra and the simulated spectra. It was shown that the difference between MCNP-simulated and IPEM spectra in the low-energy range results from the overestimation of characteristic photons following the normalization procedure. The transmission curves produced by MCNP4C agree well with the IPEM report, especially for tube voltages of 50 kV and 80 kV. The systematic discrepancy at higher tube voltages results from systematic differences between the corresponding spectra. PMID:15584526
Monte Carlo simulation of X-ray spectra and evaluation of filter effect using MCNP4C and FLUKA code.
Taleei, R; Shahriari, M
2009-02-01
The general-purpose MCNP4C and FLUKA codes were used to simulate X-ray spectra. The electrons were transported until they slowed down and stopped in the target. Both bremsstrahlung and characteristic X-ray production were considered in this work. A tungsten/aluminum combination was used as the target/filter in the simulation. The results of the two codes were generated at 80, 100, 120 and 140 kV and compared with each other. In order to survey the filter effect on the X-ray spectra, the attenuation coefficient of the filter was calculated at 120 kV, and further details of the filter effect were investigated. The results of MCNP4C and FLUKA are comparable over the bremsstrahlung range of the spectra, but there are some differences between them, especially in the characteristic X-ray peaks. Since the characteristic peaks do not play a significant role in image quality, both FLUKA and MCNP4C are fairly appropriate for producing X-ray spectra and for evaluating image quality, absorbed dose and improvements in filter design. PMID:19054680
Development and validation of MCNP4C-based Monte Carlo simulator for fan- and cone-beam x-ray CT.
Ay, Mohammad Reza; Zaidi, Habib
2005-10-21
An x-ray computed tomography (CT) simulator based on the Monte Carlo N-particle radiation transport computer code (MCNP4C) was developed for simulation of both fan- and cone-beam CT scanners. A user-friendly interface running under Matlab 6.5.1 creates the scanner geometry at different views as MCNP4C's input file. The full simulation of x-ray tube, phantom and detectors with single-slice, multi-slice and flat detector configurations was considered. The simulator was validated through comparison with experimental measurements of different nonuniform phantoms with varying sizes on both a clinical and a small-animal CT scanner. There is good agreement between the simulated and measured projections and reconstructed images. Thereafter, the effects of bow-tie filter, phantom size and septa length on scatter distribution in fan-beam CT were studied in detail. The relative difference between detected total, primary and scatter photons for septa length varying between 0 and 95 mm is 11.2%, 1.9% and 84.1%, respectively, whereas the scatter-to-primary ratio decreases by 83.8%. The developed simulator is a powerful tool for evaluating the effect of physical, geometrical and other design parameters on scanner performance and image quality in addition to offering a versatile tool for investigating potential artefacts and correction schemes when using CT-based attenuation correction on dual-modality PET/CT units. PMID:16204878
Standard Neutron, Photon, and Electron Data Libraries for MCNP4C.
Energy Science and Technology Software Center (ESTSC)
2004-02-16
Version 03 US DOE 10CFR810 Jurisdiction. DLC-200/MCNPDATA is for use with Versions 4C and 4C2 of the MCNP transport code. This data library provides a comprehensive set of cross sections for a wide range of radiation transport applications using the Monte Carlo code package CCC-700/MCNP4C. See Appendix G of the MCNP report LA-13709-M for information on the libraries and how to select specific nuclides for use in MCNP. Newer MCNP cross sections from LANL are included in CCC-710/MCNP5.
NASA Astrophysics Data System (ADS)
Schaart, Dennis R.; Jansen, Jan Th M.; Zoetelief, Johannes; de Leege, Piet F. A.
2002-05-01
The condensed-history electron transport algorithms in the Monte Carlo code MCNP4C are derived from ITS 3.0, which is a well-validated code for coupled electron-photon simulations. This, combined with its user-friendliness and versatility, makes MCNP4C a promising code for medical physics applications. Such applications, however, require a high degree of accuracy. In this work, MCNP4C electron depth-dose distributions in water are compared with published ITS 3.0 results. The influences of voxel size, substeps and choice of electron energy indexing algorithm are investigated at incident energies between 100 keV and 20 MeV. Furthermore, previously published dose measurements for seven beta emitters are simulated. Since MCNP4C does not allow tally segmentation with the *F8 energy deposition tally, even a homogeneous phantom must be subdivided into cells to calculate the distribution of dose. The repeated interruption of the electron tracks at the cell boundaries significantly affects the electron transport. An electron track length estimator of absorbed dose is described which allows tally segmentation. In combination with the ITS electron energy indexing algorithm, this estimator appears to reproduce ITS 3.0 and experimental results well. If, however, cell boundaries are used instead of segments, or if the MCNP indexing algorithm is applied, the agreement is considerably worse.
Calculation of the store house worker dose in a lost wax foundry using MCNP-4C.
Alegría, Natalia; Legarda, Fernando; Herranz, Margarita; Idoeta, Raquel
2005-01-01
Lost wax casting is an industrial process that permits wax models to be reproduced in metal. The wax model is covered with a siliceous shell of the required thickness; once this shell is built, the assembly is heated and the wax melted. Liquid metal is then cast into the shell, replacing the wax. When the metal is cool, the shell is broken away in order to recover the metallic piece. In this process, zircon sands are used for the preparation of the siliceous shell. These sands contain varying concentrations of the natural radionuclides 238U, 232Th and 235U together with their progeny. The zircon sand is distributed in bags of 50 kg, with 30 bags on a pallet weighing 1,500 kg. The pallets with the bags have dimensions 80 cm x 120 cm x 80 cm and constitute the radiation source in this case. The only pathway of exposure to workers in the store house is external radiation: there is no dust because the bags are closed and covered by plastic, and the store house has a good ventilation rate, so radon accumulation is not possible. The workers do not touch the bags with their hands, so skin contamination will not take place. In this study, all situations of external irradiation of the workers have been considered: transportation of the pallets from vehicle to store house, lifting the pallets to the shelf, resting of the stock on the shelf, taking down the pallets, and carrying the pallets to the production area. Using MCNP-4C, these exposure situations have been simulated, considering that the source has a homogeneous composition, that the minimum stock in the store house consists of 7 pallets, and the several distances between pallets and workers while they are at work. The photon flux obtained by MCNP-4C is multiplied by the flux-to-air-kerma conversion factor, by the kerma-to-effective-dose conversion factor, and by the number of emitted photons. These conversion factors are taken from ICRP 74, Tables 1 and 17, respectively.
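The dose chain described in the abstract is a straight product of the tally result and the conversion coefficients. A minimal sketch in Python, with every number purely illustrative (the abstract gives neither the actual MCNP tally values nor the specific ICRP 74 coefficients):

```python
# Illustrative worker-dose estimate from a photon-flux tally.
# All numeric values are hypothetical placeholders, not the study's data.
flux_per_source_photon = 2.5e-6   # photons/cm^2 per emitted source photon (MCNP tally, hypothetical)
flux_to_air_kerma      = 4.1e-12  # Gy*cm^2 per photon (ICRP 74 Table 1 style coefficient, illustrative)
kerma_to_effective     = 0.8      # Sv/Gy (ICRP 74 Table 17 style coefficient, illustrative)
photons_emitted        = 3.0e8    # photons/h emitted by the pallet stack (hypothetical)

# Effective dose rate = tally flux x (flux -> air kerma) x (kerma -> effective dose) x photon yield
dose_rate = (flux_per_source_photon * flux_to_air_kerma
             * kerma_to_effective * photons_emitted)  # Sv/h
print(dose_rate)  # roughly 2.5e-9 Sv/h with these illustrative numbers
```

The structure, not the numbers, is the point: each exposure situation (transport, lifting, storage) would reuse this product with its own tally flux and exposure time.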
Performance of the MTR core with MOX fuel using the MCNP4C2 code.
Shaaban, Ismail; Albarhoum, Mohamad
2016-08-01
The MCNP4C2 code was used to simulate the MTR-22 MW research reactor and perform the neutronic analysis for a new fuel, namely a MOX (U3O8&PuO2) fuel dispersed in an Al matrix, for One Neutronic Trap (ONT) and Three Neutronic Traps (TNTs) in its core. The new characteristics were compared to the original characteristics based on the U3O8-Al fuel. Experimental data for the neutronic parameters, including criticality, of the MTR-22 MW reactor with the original U3O8-Al fuel at nominal power were used to validate the calculated values and were found acceptable. The achieved results seem to confirm that the use of MOX fuel in the MTR-22 MW will not degrade the safe operational conditions of the reactor. In addition, the use of MOX fuel in the MTR-22 MW core reduces the uranium fuel enrichment in (235)U and the amount of (235)U loaded in the core by about 34.84% and 15.21% for the ONT and TNTs cases, respectively. PMID:27213809
NASA Astrophysics Data System (ADS)
Zamani, M.; Kasesaz, Y.; Khalafi, H.; Pooya, S. M. Hosseini
Boron Neutron Capture Therapy (BNCT) is used for treatment of many diseases, including brain tumors, in many medical centers. In this method, a target area (e.g., the head of the patient) is irradiated by an optimized, suitable neutron field, such as that of a research nuclear reactor. To protect the healthy tissues located in the vicinity of the irradiated tissue, and based on the ALARA principle, unnecessary exposure of these vital organs must be prevented. In this study, using a numerical simulation method (MCNP4C code), the absorbed dose in the target tissue and the equivalent dose in different sensitive tissues of a patient treated by BNCT are calculated. For this purpose, we have used the parameters of the MIRD standard phantom. The equivalent dose in 11 sensitive organs located in the vicinity of the target, and the total equivalent dose in the whole body, have been calculated. The results show that the absorbed doses in the tumor and in normal brain tissue are equal to 30.35 Gy and 0.19 Gy, respectively. Also, the total equivalent dose in the 11 sensitive organs, other than the tumor and normal brain tissue, is equal to 14 mGy. The maximum equivalent doses in organs other than the brain and tumor occur in the lungs and thyroid and are equal to 7.35 mSv and 3.00 mSv, respectively.
Dawahra, S; Khattab, K; Saba, G
2015-10-01
A comparative study of fuel conversion from HEU to LEU in the Miniature Neutron Source Reactor (MNSR) has been performed in this paper using the MCNP4C code. The neutron energy and lethargy flux spectra in the first inner and outer irradiation sites of the MNSR reactor were investigated for the existing HEU fuel (UAl4-Al, 90% enriched) and the potential LEU fuels (U3Si2-Al, U3Si-Al, U9Mo-Al, 19.75% enriched, and UO2, 12.6% enriched). The neutron energy flux spectrum for each group was calculated by dividing the neutron flux by the width of the energy group. The neutron flux spectrum per unit lethargy was calculated by multiplying the neutron energy flux spectrum of each energy group by the average energy of the group. The thermal neutron flux was calculated by summing the neutron fluxes from 0.0 to 0.625 eV, and the fast neutron flux by summing the neutron fluxes from 0.5 MeV to 10 MeV, for the existing HEU and potential LEU fuels. Good agreement has been observed between the flux spectra of the potential LEU fuels and the existing HEU fuel, with maximum relative differences of less than 10% and 8% in the inner and outer irradiation sites, respectively. PMID:26142805
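The spectrum post-processing the abstract describes (divide each group flux by the group width, then multiply by the average group energy for the per-lethargy form) can be sketched in a few lines of Python; the group boundaries and fluxes below are hypothetical placeholders, not the MNSR values:

```python
# Hypothetical three-group structure (eV) and group-integrated fluxes
# (arbitrary units); the real MNSR data are not given in the abstract.
edges  = [1e-3, 0.625, 0.5e6, 1e7]   # thermal / epithermal / fast boundaries
fluxes = [3.2e12, 1.1e12, 0.8e12]    # flux integrated over each group

spectra_per_energy = []
spectra_per_lethargy = []
for i, phi in enumerate(fluxes):
    lo, hi = edges[i], edges[i + 1]
    per_energy = phi / (hi - lo)        # divide group flux by the group width
    avg_e = 0.5 * (lo + hi)             # average energy of the group
    spectra_per_energy.append(per_energy)
    # per-lethargy spectrum = per-energy spectrum x average group energy
    spectra_per_lethargy.append(per_energy * avg_e)

print(spectra_per_lethargy)
```

With real multigroup output the thermal and fast fluxes quoted in the abstract are then just sums of `fluxes` over the groups inside the stated energy windows.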
Shell model Monte Carlo methods
Koonin, S.E.; Dean, D.J.
1996-10-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.
Zimmerman, G.B.
1997-06-24
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing of the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
Monte Carlo and quasi-Monte Carlo methods
NASA Astrophysics Data System (ADS)
Caflisch, Russel E.
Monte Carlo is one of the most versatile and widely used numerical methods. Its convergence rate, O(N-1/2), is independent of dimension, which shows Monte Carlo to be very robust but also slow. This article presents an introduction to Monte Carlo methods for integration problems, including convergence theory, sampling methods and variance reduction techniques. Accelerated convergence for Monte Carlo quadrature is attained using quasi-random (also called low-discrepancy) sequences, which are a deterministic alternative to random or pseudo-random sequences. The points in a quasi-random sequence are correlated to provide greater uniformity. The resulting quadrature method, called quasi-Monte Carlo, has a convergence rate of approximately O((logN)kN-1). For quasi-Monte Carlo, both theoretical error estimates and practical limitations are presented. Although the emphasis in this article is on integration, Monte Carlo simulation of rarefied gas dynamics is also discussed. In the limit of small mean free path (that is, the fluid dynamic limit), Monte Carlo loses its effectiveness because the collisional distance is much less than the fluid dynamic length scale. Computational examples are presented throughout the text to illustrate the theory. A number of open problems are described.
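The contrast between the O(N^-1/2) Monte Carlo rate and the faster quasi-Monte Carlo rate is easy to see on a one-dimensional integral. A sketch using the base-2 van der Corput sequence, one of the simplest low-discrepancy constructions (the integrand and sample size are arbitrary illustration choices):

```python
import random

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence
    (digit reversal of the integer index about the radix point)."""
    points = []
    for i in range(1, n + 1):
        x, denom, k = 0.0, 1.0, i
        while k > 0:
            denom *= base
            k, digit = divmod(k, base)
            x += digit / denom
        points.append(x)
    return points

def integrate(f, points):
    # Equal-weight quadrature: the sample mean of f over the point set
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x          # integral over [0, 1] is exactly 1/3
n = 10_000
rng = random.Random(1)
mc_est  = integrate(f, [rng.random() for _ in range(n)])
qmc_est = integrate(f, van_der_corput(n))
print(abs(mc_est - 1/3), abs(qmc_est - 1/3))  # QMC error is typically much smaller
```

The pseudo-random estimate carries the usual sigma/sqrt(N) statistical error, while the correlated, highly uniform van der Corput points achieve close to the O((log N)/N) quasi-Monte Carlo behaviour described above.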
Extending canonical Monte Carlo methods
NASA Astrophysics Data System (ADS)
Velazquez, L.; Curilef, S.
2010-02-01
In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods on the basis of the consideration of the Gibbs canonical ensemble to account for the existence of an anomalous regime with negative heat capacities C < 0. The resulting framework appears to be a suitable generalization of the methodology associated with the so-called dynamical ensemble, which is applied to the extension of two well-known Monte Carlo methods: the Metropolis importance sampling and the Swendsen-Wang cluster algorithm. These Monte Carlo algorithms are employed to study the anomalous thermodynamic behavior of the Potts models with many spin states q defined on a d-dimensional hypercubic lattice with periodic boundary conditions, which successfully reduce the exponential divergence of the decorrelation time τ with increase of the system size N to a weak power-law divergence τ ∝ N^α with α ≈ 0.2 for the particular case of the 2D ten-state Potts model.
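For context, the baseline these extensions generalize is the standard canonical Metropolis sweep. A minimal sketch for the 2D q-state Potts model with periodic boundaries (lattice size, temperature and sweep count are arbitrary illustration values; this is the textbook algorithm, not the paper's dynamical-ensemble variant):

```python
import math
import random

def metropolis_potts(L=8, q=10, beta=1.0, sweeps=50, seed=0):
    """Canonical Metropolis sampling of the 2D q-state Potts model on an
    L x L lattice with periodic boundary conditions."""
    rng = random.Random(seed)
    spin = [[rng.randrange(q) for _ in range(L)] for _ in range(L)]

    def bond_energy(i, j, s):
        # Potts delta coupling: -1 for each nearest neighbour equal to s
        return -sum(s == spin[(i + di) % L][(j + dj) % L]
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        old, new = spin[i][j], rng.randrange(q)
        dE = bond_energy(i, j, new) - bond_energy(i, j, old)
        # Metropolis acceptance rule: always accept downhill moves,
        # accept uphill moves with probability exp(-beta * dE)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spin[i][j] = new
    return spin

lattice = metropolis_potts()
```

The long decorrelation times quoted in the abstract are exactly what this single-spin dynamics suffers from near the transition, which is why the paper pairs the extended ensemble with cluster updates such as Swendsen-Wang.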
NASA Astrophysics Data System (ADS)
Pauzi, A. M.
2013-06-01
The neutron transport code Monte Carlo N-Particle (MCNP), well known as a gold standard for predicting nuclear reactions, was used to model a small nuclear reactor core called the "U-batteryTM", developed by the University of Manchester and Delft Institute of Technology. The paper introduces the concept of modeling this small reactor core, a high-temperature reactor (HTR) type with small coated TRISO fuel particles in a graphite matrix, using the MCNPv4C software. The criticality of the core was calculated with the software and analysed by changing key parameters such as coolant type, fuel type and enrichment level, cladding materials, and control rod type. The criticality results from the simulation were validated against the SCALE 5.1 results of [1] M Ding and J L Kloosterman, 2010. The data produced from these analyses would be used as part of the process of proposing an initial core layout and a provisional list of materials for the newly designed reactor core. In the future, the criticality study would be continued with different core configurations and geometries.
Monte Carlo N Particle code - Dose distribution of clinical electron beams in inhomogeneous phantoms
Nedaie, H. A.; Mosleh-Shirazi, M. A.; Allahverdi, M.
2013-01-01
Electron dose distributions calculated using the currently available analytical methods can be associated with large uncertainties. The Monte Carlo method is the most accurate method for dose calculation in electron beams. Most of the clinical electron beam simulation studies have been performed using non-MCNP [Monte Carlo N-Particle] codes. Given the differences between Monte Carlo codes, this work aims to evaluate the accuracy of MCNP4C-simulated electron dose distributions in a homogeneous phantom and around inhomogeneities. Different types of phantoms ranging in complexity were used; namely, a homogeneous water phantom and phantoms made of polymethyl methacrylate slabs containing different-sized, low- and high-density inserts of heterogeneous materials. Electron beams with 8 and 15 MeV nominal energy generated by an Elekta Synergy linear accelerator were investigated. Measurements were performed for a 10 cm × 10 cm applicator at a source-to-surface distance of 100 cm. Individual parts of the beam-defining system were introduced into the simulation one at a time in order to show their effect on depth doses. In contrast to the first scattering foil, the secondary scattering foil, X and Y jaws and applicator provide up to 5% of the dose. A 2%/2 mm agreement between MCNP and measurements was found in the homogeneous phantom, and in the presence of heterogeneities agreement was in the range of 1-3%, being generally within 2% of the measurements for both energies in a "complex" phantom. A full-component simulation is necessary in order to obtain a realistic model of the beam. The MCNP4C results agree well with the measured electron dose distributions. PMID:23533162
Improved Monte Carlo Renormalization Group Method
DOE R&D Accomplishments Database
Gupta, R.; Wilson, K. G.; Umrigar, C.
1985-01-01
An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.
Bahreyni Toossi, Mohammad Taghi; Momennezhad, Mehdi; Hashemi, Seyed Mohammad
2012-01-01
Aim: Exact knowledge of dosimetric parameters is an essential prerequisite of effective treatment in radiotherapy. In order to fulfill this consideration, different techniques have been used, one of which is Monte Carlo simulation. Materials and methods: This study used MCNP-4C to simulate electron beams from a Neptun 10 PC medical linear accelerator. Output factors for 6, 8 and 10 MeV electrons applied to eleven different conventional fields were both measured and calculated. Results: The measurements were carried out with a Wellhofer-Scanditronix dose scanning system. Our findings revealed that output factors acquired by MCNP-4C simulation and the corresponding values obtained by direct measurement are in very good agreement. Conclusion: In general, the very good consistency of simulated and measured results is good proof that the goal of this work has been accomplished. PMID:24377010
Quantum speedup of Monte Carlo methods
Montanaro, Ashley
2015-01-01
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079
Monte Carlo methods in genetic analysis
Lin, Shili
1996-12-31
Many genetic analyses require computation of probabilities and likelihoods of pedigree data. With more and more genetic marker data deriving from new DNA technologies becoming available to researchers, exact computations are often formidable with standard statistical methods and computational algorithms. The desire to utilize as much available data as possible, coupled with complexities of realistic genetic models, push traditional approaches to their limits. These methods encounter severe methodological and computational challenges, even with the aid of advanced computing technology. Monte Carlo methods are therefore increasingly being explored as practical techniques for estimating these probabilities and likelihoods. This paper reviews the basic elements of the Markov chain Monte Carlo method and the method of sequential imputation, with an emphasis upon their applicability to genetic analysis. Three areas of applications are presented to demonstrate the versatility of Markov chain Monte Carlo for different types of genetic problems. A multilocus linkage analysis example is also presented to illustrate the sequential imputation method. Finally, important statistical issues of Markov chain Monte Carlo and sequential imputation, some of which are unique to genetic data, are discussed, and current solutions are outlined. 72 refs.
An enhanced Monte Carlo outlier detection method.
Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi
2015-09-30
Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method that establishes cross-prediction models based on determinate normal samples and analyzes the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed standard Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the root mean square error of prediction for validation by Kovats retention indices decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc. PMID:26226927
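The core idea, diagnosing outliers from the distribution of prediction errors collected over many random train/test splits, can be sketched generically. This is an illustration with a simple least-squares line and synthetic data, not the authors' exact cross-prediction procedure:

```python
import random
import statistics

def mc_outlier_scores(x, y, n_rounds=200, train_frac=0.7, seed=0):
    """Monte Carlo outlier diagnosis sketch: repeatedly fit ordinary least
    squares on a random subset and collect each sample's out-of-subset
    prediction errors; a consistently large mean error flags a dubious sample."""
    rng = random.Random(seed)
    n = len(x)
    errors = [[] for _ in range(n)]
    for _ in range(n_rounds):
        idx = list(range(n))
        rng.shuffle(idx)
        cut = int(train_frac * n)
        train, test = idx[:cut], idx[cut:]
        # Ordinary least squares fit on the training subset
        mx = sum(x[i] for i in train) / len(train)
        my = sum(y[i] for i in train) / len(train)
        sxx = sum((x[i] - mx) ** 2 for i in train)
        b = sum((x[i] - mx) * (y[i] - my) for i in train) / sxx
        a = my - b * mx
        for i in test:
            errors[i].append(abs(y[i] - (a + b * x[i])))
    return [statistics.mean(e) if e else 0.0 for e in errors]

x = [float(i) for i in range(20)]
y = [2.0 * xi + 1.0 for xi in x]
y[5] += 10.0                      # planted outlier
scores = mc_outlier_scores(x, y)
print(scores.index(max(scores)))  # prints 5: the planted outlier stands out
```

The enhancement described in the abstract goes further by building the prediction models only from samples already judged normal, so a dubious sample cannot contaminate the models that score it.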
Monte Carlo Methods in the Physical Sciences
Kalos, M H
2007-06-06
I will review the role that Monte Carlo methods play in the physical sciences. They are very widely used for a number of reasons: they permit the rapid and faithful transformation of a natural or model stochastic process into a computer code. They are powerful numerical methods for treating the many-dimensional problems that derive from important physical systems. Finally, many of the methods naturally permit the use of modern parallel computers in efficient ways. In the presentation, I will emphasize four aspects of the computations: whether or not the computation derives from a natural or model stochastic process; whether the system under study is highly idealized or realistic; whether the Monte Carlo methodology is straightforward or mathematically sophisticated; and finally, the scientific role of the computation.
Monte-Carlo Continuous Energy Burnup Code System.
Energy Science and Technology Software Center (ESTSC)
2007-08-31
Version 00 MCB is a Monte Carlo Continuous Energy Burnup Code for general-purpose use to calculate nuclide density time evolution with burnup or decay. It includes eigenvalue calculations of critical and subcritical systems as well as neutron transport calculations in fixed-source mode or k-code mode to obtain the reaction rates and energy deposition that are necessary for burnup calculations. The MCB-1C patch file and data packages as distributed by the NEADB are very well organized and are being made available through RSICC as received. The RSICC package includes the MCB-1C patch and MCB data libraries. Installation of MCB requires the MCNP4C source code and utility programs, which are not included in this MCB distribution. They were provided with the now obsolete CCC-700/MCNP-4C package.
Path Integral Monte Carlo Methods for Fermions
NASA Astrophysics Data System (ADS)
Brown, Ethan; DuBois, Jonathan; Ceperley, David
2014-03-01
In general, Quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not a priori known unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with a discussion of extensions to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10-ERD-058, and the Lawrence Scholar program.
Monte Carlo methods to calculate impact probabilities
NASA Astrophysics Data System (ADS)
Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.
2014-09-01
Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while large differences appear in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward
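The "super-sizing" idea, counting how often sampled configurations fall inside an enlarged target sphere, can be illustrated with a deliberately crude 2-D toy: uniform sampling of orbital angles for a projectile ellipse and a circular target orbit. This is only a hit-counting sketch with made-up orbits, not the paper's time-weighted three-dimensional treatment:

```python
import math
import random

def collision_probability_supersized(a_p, e_p, r_hill, n=200_000, seed=0):
    """Toy 2-D hit-counting estimate: sample random angular positions of a
    projectile on an ellipse (semi-major axis a_p, eccentricity e_p) and of
    a target on a circular orbit of radius 1, and return the fraction of
    samples whose separation falls inside the enlarged radius r_hill.
    Angles are sampled uniformly, ignoring the true time weighting."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        f = rng.uniform(0.0, 2.0 * math.pi)          # projectile true anomaly
        r = a_p * (1 - e_p * e_p) / (1 + e_p * math.cos(f))
        xp, yp = r * math.cos(f), r * math.sin(f)
        t = rng.uniform(0.0, 2.0 * math.pi)          # target orbital phase
        xt, yt = math.cos(t), math.sin(t)
        if math.hypot(xp - xt, yp - yt) < r_hill:
            hits += 1
    return hits / n

print(collision_probability_supersized(1.2, 0.3, 0.05))
```

Enlarging the target to its Hill sphere keeps the hit rate high enough to estimate with modest sample counts; the real method then rescales the result down to the physical collision cross-section.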
Calculating Pi Using the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Williamson, Timothy
2013-11-01
During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the Center for Sustainable Energy at Notre Dame University (RET @ cSEND), working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation, a reactor may produce 10²¹ antineutrinos per second, with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and how total antineutrino flux could be obtained from such a small sample, I read about a simulation program called Monte Carlo. Further investigation led me to the Monte Carlo method page of Wikipedia, where I saw an example of approximating pi using this simulation. Other examples where this method was applied were typically done with computer simulations or purely mathematically. It is my belief that this method may be easily related to the students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.
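The rice-sprinkling experiment described above has a direct numerical analogue. The sketch below is a generic illustration of the dart-board estimate of pi (function name, seed, and sample count are our own choices, not from the article):

```python
import random

def estimate_pi(n_samples: int, seed: int = 0) -> float:
    """Estimate pi by uniform sampling of the unit square.

    The fraction of points landing inside the quarter-circle of radius 1
    approximates pi/4, the numerical analogue of counting rice grains
    that fall inside an arc drawn in a square.
    """
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

# The statistical error shrinks like 1/sqrt(n), so a classroom-sized
# sample of rice grains can only be expected to give a digit or two.
print(estimate_pi(100_000))
```

The slow 1/sqrt(n) convergence is itself a useful teaching point: quadrupling the rice only halves the expected error.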
Quantum Monte Carlo methods for nuclear physics
Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.
2015-09-01
Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Quantum Monte Carlo methods for nuclear physics
Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E,; Wiringa, Robert B.
2014-10-19
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Discrete range clustering using Monte Carlo methods
NASA Technical Reports Server (NTRS)
Chatterji, G. B.; Sridhar, B.
1993-01-01
For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
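The flavor of simulated-annealing clustering explored in the paper can be sketched on a toy 1-D problem; the single-point move set, within-cluster sum-of-squares cost, and geometric cooling schedule below are illustrative assumptions, not the authors' algorithms:

```python
import math
import random

def anneal_cluster(ranges, k, n_steps=20000, t0=1.0, cooling=0.9995, seed=0):
    """Group sparse range values into k clusters by simulated annealing.

    Hypothetical minimal sketch: one point's label is perturbed per step,
    and uphill moves in the within-cluster sum of squared deviations are
    accepted with Boltzmann probability at a geometrically cooled
    temperature (the Metropolis rule).
    """
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in ranges]

    def cost(lab):
        total = 0.0
        for c in range(k):
            members = [r for r, l in zip(ranges, lab) if l == c]
            if members:
                mu = sum(members) / len(members)
                total += sum((r - mu) ** 2 for r in members)
        return total

    current = cost(labels)
    t = t0
    for _ in range(n_steps):
        i = rng.randrange(len(ranges))
        old = labels[i]
        labels[i] = rng.randrange(k)
        proposed = cost(labels)
        # Always take downhill moves; sometimes take uphill ones.
        if proposed <= current or rng.random() < math.exp((current - proposed) / t):
            current = proposed
        else:
            labels[i] = old
        t *= cooling  # cool toward an essentially greedy search
    return labels, current

# Two well-separated groups of range points end up in separate clusters.
labels, final_cost = anneal_cluster([1.0, 1.1, 0.9, 10.0, 10.1, 9.9], k=2)
```

At high temperature the chain explores freely; as it cools, it settles into a low-cost grouping, which is the advantage annealing offers over a purely greedy Monte Carlo search.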
Monte Carlo methods in lattice gauge theories
Otto, S.W.
1983-01-01
The mass of the 0⁺ glueball for SU(2) gauge theory in 4 dimensions is calculated. This computation was done on a prototype parallel processor, and the implementation of gauge theories on this system is described in detail. Using an action of the purely Wilson form (trace of the plaquette in the fundamental representation), results with high statistics are obtained. These results are not consistent with scaling according to the continuum renormalization group. Using actions containing higher representations of the group, a search is made for one which is closer to the continuum limit. The choice is based upon the phase structure of these extended theories and also upon the Migdal-Kadanoff approximation to the renormalization group on the lattice. The mass of the 0⁺ glueball for this improved action is obtained, and the mass divided by the square root of the string tension is a constant as the lattice spacing is varied. The other topic studied is the inclusion of dynamical fermions into Monte Carlo calculations via the pseudofermion technique. Monte Carlo results obtained with this method are compared with those from an exact algorithm based on Gauss-Seidel inversion. The methods were first applied to the Schwinger model and SU(3) theory.
Dosimetry of gamma chamber blood irradiator using PAGAT gel dosimeter and Monte Carlo simulations.
Mohammadyari, Parvin; Zehtabian, Mehdi; Sina, Sedigheh; Tavasoli, Ali Reza; Faghihi, Reza
2014-01-01
Currently, the use of blood irradiation for inactivating pathogenic microbes in infected blood products and preventing graft-versus-host disease (GVHD) in immune suppressed patients is greater than ever before. In these systems, dose distribution and uniformity are two important concepts that should be checked. In this study, dosimetry of the gamma chamber blood irradiator model Gammacell 3000 Elan was performed by several dosimeter methods including thermoluminescence dosimeters (TLD), PAGAT gel dosimetry, and Monte Carlo simulations using MCNP4C code. The gel dosimeter was put inside a glass phantom and the TL dosimeters were placed on its surface, and the phantom was then irradiated for 5 min and 27 sec. The dose values at each point inside the vials were obtained from the magnetic resonance imaging of the phantom. For Monte Carlo simulations, all components of the irradiator were simulated and the dose values in a fine cubical lattice were calculated using tally F6. This study shows that PAGAT gel dosimetry results are in close agreement with the results of TL dosimetry, Monte Carlo simulations, and the results given by the vendor, and the percentage difference between the different methods is less than 4% at different points inside the phantom. According to the results obtained in this study, PAGAT gel dosimetry is a reliable method for dosimetry of the blood irradiator. The major advantage of this kind of dosimetry is that it is capable of 3D dose calculation. PMID:24423829
Improved method for implicit Monte Carlo
Brown, F. B.; Martin, W. R.
2001-01-01
The Implicit Monte Carlo (IMC) method has been used for over 30 years to analyze radiative transfer problems, such as those encountered in stellar atmospheres or inertial confinement fusion. Reference [2] provided an exact error analysis of IMC for 0-D problems and demonstrated that IMC can exhibit substantial errors when timesteps are large. These temporal errors are inherent in the method and are in addition to spatial discretization errors and approximations that address nonlinearities (due to variation of physical constants). In Reference [3], IMC and four other methods were analyzed in detail and compared on both theoretical grounds and the accuracy of numerical tests. As discussed there, two alternative schemes for solving the radiative transfer equations, the Carter-Forest (C-F) method and the Ahrens-Larsen (A-L) method, do not exhibit the errors found in IMC; for 0-D, both of these methods are exact for all time, while for 3-D, A-L is exact for all time and C-F is exact within a timestep. These methods can yield substantially superior results to IMC.
Accelerated Monte Carlo Methods for Coulomb Collisions
NASA Astrophysics Data System (ADS)
Rosin, Mark; Ricketson, Lee; Dimits, Andris; Caflisch, Russel; Cohen, Bruce
2014-03-01
We present a new highly efficient multi-level Monte Carlo (MLMC) simulation algorithm for Coulomb collisions in a plasma. The scheme, initially developed and used successfully for applications in financial mathematics, is applied here to kinetic plasmas for the first time. The method is based on a Langevin treatment of the Landau-Fokker-Planck equation and has a rich history derived from the works of Einstein and Chandrasekhar. The MLMC scheme successfully reduces the computational cost of achieving an RMS error ε in the numerical solution to collisional plasma problems from O(ε⁻³) - for the standard state-of-the-art Langevin and binary collision algorithms - to a theoretically optimal O(ε⁻²) scaling, when used in conjunction with an underlying Milstein discretization to the Langevin equation. In the test case presented here, the method accelerates simulations by factors of up to 100. We summarize the scheme, present some tricks for improving its efficiency yet further, and discuss the method's range of applicability. Work performed for US DOE by LLNL under contract DE-AC52-07NA27344 and by UCLA under grant DE-FG02-05ER25710.
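The multi-level idea can be illustrated on a toy scalar SDE; the drift, diffusion, and per-level sample counts below are illustrative assumptions, not the Landau-Fokker-Planck collision model of the abstract, but the telescoping estimator and the Milstein coupling of fine and coarse paths are the same ingredients:

```python
import math
import random

def mlmc_estimate(max_level, samples_per_level, T=1.0, x0=1.0, a=0.05, b=0.2, seed=0):
    """Toy multi-level Monte Carlo estimate of E[X_T] for the SDE
    dX = a*X dt + b*X dW, using Milstein steps of size T / 2**level.

    The telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}] places
    most samples on cheap coarse levels; with Milstein coupling the level
    variances decay fast enough for the O(eps^-2) cost scaling.
    """
    rng = random.Random(seed)

    def milstein(n_steps, increments):
        h = T / n_steps
        x = x0
        for dw in increments:
            x += a * x * h + b * x * dw + 0.5 * b * b * x * (dw * dw - h)
        return x

    estimate = 0.0
    for level in range(max_level + 1):
        n_fine = 2 ** level
        h_fine = T / n_fine
        n_samples = samples_per_level(level)
        level_sum = 0.0
        for _ in range(n_samples):
            dws = [rng.gauss(0.0, math.sqrt(h_fine)) for _ in range(n_fine)]
            fine = milstein(n_fine, dws)
            if level == 0:
                level_sum += fine
            else:
                # The coarse path reuses the fine path's Brownian increments,
                # summed pairwise, so the two stay tightly coupled.
                coarse_dws = [dws[2 * i] + dws[2 * i + 1] for i in range(n_fine // 2)]
                level_sum += fine - milstein(n_fine // 2, coarse_dws)
        estimate += level_sum / n_samples
    return estimate

# For this SDE the exact mean is exp(a*T); halving samples per level
# (coarse-heavy allocation) already gives a good estimate.
print(mlmc_estimate(4, lambda level: 4000 >> level))
```

The cost saving comes entirely from the sample allocation: the expensive fine levels only need to resolve the small correction E[P_l - P_{l-1}], not the full expectation.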
The Monte Carlo Method. Popular Lectures in Mathematics.
ERIC Educational Resources Information Center
Sobol', I. M.
The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…
Extending canonical Monte Carlo methods: II
NASA Astrophysics Data System (ADS)
Velazquez, L.; Curilef, S.
2010-04-01
We have previously presented a methodology for extending canonical Monte Carlo methods inspired by a suitable extension of the canonical fluctuation relation C = β²⟨δE²⟩ compatible with negative heat capacities, C < 0. Now, we improve this methodology by including the finite size effects that reduce the precision of a direct determination of the microcanonical caloric curve β(E) = ∂S(E)/∂E, as well as by carrying out a better implementation of the MC schemes. We show that, despite the modifications considered, the extended canonical MC methods lead to an impressive overcoming of the so-called supercritical slowing down observed close to the region of the temperature driven first-order phase transition. In this case, the size dependence of the decorrelation time τ is reduced from an exponential growth to a weak power-law behavior, τ(N) ∝ N^α, as is shown in the particular case of the 2D seven-state Potts model where the exponent α = 0.14-0.18.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
NASA Astrophysics Data System (ADS)
Cho, Sang Hyun; Reece, Warren D.; Kim, Chan-Hyeong
2004-03-01
Dose calculations around electron-emitting metallic spherical sources were performed up to the X90 distance of each electron energy ranging from 0.5 to 3.0 MeV using the MCNP 4C Monte Carlo code and the dose point kernel (DPK) method with the DPKs rescaled using the linear range ratio and physical density ratio, respectively. The results show that the discrepancy between the MCNP and DPK results increases with the atomic number of the source (i.e., heterogeneity in source-target geometry), regardless of the rescaling method used. The observed discrepancies between the MCNP and DPK results were up to 100% for extreme cases such as a platinum source immersed in water.
Vectorized Monte Carlo methods for reactor lattice analysis
NASA Technical Reports Server (NTRS)
Brown, F. B.
1984-01-01
Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
COMPARISON OF MONTE CARLO METHODS FOR NONLINEAR RADIATION TRANSPORT
W. R. MARTIN; F. B. BROWN
2001-03-01
Five Monte Carlo methods for solving the nonlinear thermal radiation transport equations are compared. The methods include the well-known Implicit Monte Carlo method (IMC) developed by Fleck and Cummings, an alternative to IMC developed by Carter and Forest, an "exact" method recently developed by Ahrens and Larsen, and two methods recently proposed by Martin and Brown. The five Monte Carlo methods are developed and applied to the radiation transport equation in a medium assuming local thermodynamic equilibrium. Conservation of energy is derived and used to define appropriate material energy update equations for each of the methods. Details of the Monte Carlo implementation are presented, both for the random walk simulation and the material energy update. Simulation results for all five methods are obtained for two infinite medium test problems and a 1-D test problem, all of which have analytical solutions. Conclusions regarding the relative merits of the various schemes are presented.
A Particle Population Control Method for Dynamic Monte Carlo
NASA Astrophysics Data System (ADS)
Sweezy, Jeremy; Nolen, Steve; Adams, Terry; Zukaitis, Anthony
2014-06-01
A general particle population control method has been derived from splitting and Russian Roulette for dynamic Monte Carlo particle transport. A well-known particle population control method, known as the particle population comb, has been shown to be a special case of this general method. This general method has been incorporated in Los Alamos National Laboratory's Monte Carlo Application Toolkit (MCATK) and examples of its use are shown for both super-critical and sub-critical systems.
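The particle population comb that the abstract cites as a special case can be sketched as follows; the (weight, state) pair representation and the details of comb placement are illustrative assumptions, not MCATK's data model:

```python
import random

def comb_population_control(particles, m_target, seed=0):
    """Resample a weighted particle population to m_target equal-weight
    particles with a population 'comb'.

    Sketch of the comb special case only. A single random offset
    positions m_target evenly spaced teeth along the cumulative-weight
    axis; each tooth copies the particle whose weight interval it hits.
    Heavy particles are split (hit by several teeth), light ones are
    rouletted (missed entirely), and total weight is preserved.
    """
    rng = random.Random(seed)
    total_weight = sum(w for w, _ in particles)
    spacing = total_weight / m_target
    tooth = rng.random() * spacing  # one random number for the whole comb

    survivors = []
    cumulative = 0.0
    for weight, state in particles:
        cumulative += weight
        while tooth < cumulative:
            survivors.append((spacing, state))  # survivors share equal weight
            tooth += spacing
    return survivors

print(comb_population_control([(0.5, "a"), (1.5, "b"), (2.0, "c")], m_target=4))
```

Because every tooth lands somewhere in the total weight, the resampled population carries exactly the original weight, and each particle survives in proportion to its weight in expectation.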
Gentile, N A
2000-10-01
We present a method for accelerating time dependent Monte Carlo radiative transfer calculations by using a discretization of the diffusion equation to calculate probabilities that are used to advance particles in regions with small mean free path. The method is demonstrated on problems with 1- and 2-dimensional orthogonal grids. It results in decreases in run time of more than an order of magnitude on these problems, while producing answers with accuracy comparable to pure IMC simulations. We call the method Implicit Monte Carlo Diffusion, which we abbreviate IMD.
An assessment of the MCNP4C weight window
Christopher N. Culbertson; John S. Hendricks
1999-12-01
A new, enhanced weight window generator suite has been developed for MCNP version 4C. The new generator correctly estimates importances in either a user-specified, geometry-independent, orthogonal grid or in MCNP geometric cells. The geometry-independent option alleviates the need to subdivide the MCNP cell geometry for variance reduction purposes. In addition, the new suite corrects several pathologies in the existing MCNP weight window generator. The new generator is applied in a set of five variance reduction problems. The improved generator is compared with the weight window generator applied in MCNP4B. The benefits of the new methodology are highlighted, along with a description of its limitations. The authors also provide recommendations for utilization of the weight window generator.
Monte Carlo methods and applications in nuclear physics
Carlson, J.
1990-01-01
Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon interaction, charge and magnetic form factors, the Coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.
Application of biasing techniques to the contributon Monte Carlo method
Dubi, A.; Gerstl, S.A.W.
1980-01-01
Recently, a new Monte Carlo method called the contributon Monte Carlo method was developed. The method is based on the theory of contributions, and uses a new recipe for estimating target responses by a volume integral over the contribution current. The analog features of the new method were discussed in previous publications. The application of some biasing methods to the new contributon scheme is examined here. A theoretical model is developed that enables an analytic prediction of the benefit to be expected when these biasing schemes are applied to both the contributon method and regular Monte Carlo. This model is verified by a variety of numerical experiments and is shown to yield satisfying results, especially for deep-penetration problems. Other considerations regarding the efficient use of the new method are also discussed, and remarks are made as to the application of other biasing methods. 14 figures, 1 table.
Study of the Transition Flow Regime using Monte Carlo Methods
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1999-01-01
This NASA Cooperative Agreement presents a study of the Transition Flow Regime Using Monte Carlo Methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.
A Monte Carlo method for combined segregation and linkage analysis.
Guo, S W; Thompson, E A
1992-01-01
We introduce a Monte Carlo approach to combined segregation and linkage analysis of a quantitative trait observed in an extended pedigree. In conjunction with the Monte Carlo method of likelihood-ratio evaluation proposed by Thompson and Guo, the method provides for estimation and hypothesis testing. The greatest attraction of this approach is its ability to handle complex genetic models and large pedigrees. Two examples illustrate the practicality of the method. One is of simulated data on a large pedigree; the other is a reanalysis of published data previously analyzed by other methods. PMID:1415253
Observations on variational and projector Monte Carlo methods
Umrigar, C. J.
2015-10-28
Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.
Frequency domain optical tomography using a Monte Carlo perturbation method
NASA Astrophysics Data System (ADS)
Yamamoto, Toshihiro; Sakamoto, Hiroki
2016-04-01
A frequency domain Monte Carlo method is applied to near-infrared optical tomography, where an intensity-modulated light source with a given modulation frequency is used to reconstruct optical properties. The frequency domain reconstruction technique allows for better separation between the scattering and absorption properties of inclusions, even for inverse problems that are ill-posed due to cross-talk between the scattering and absorption reconstructions. The frequency domain Monte Carlo calculation for light transport in an absorbing and scattering medium has thus far been analyzed mostly for the reconstruction of optical properties in simple layered tissues. This study applies a Monte Carlo calculation algorithm, which can handle complex-valued particle weights for solving a frequency domain transport equation, to optical tomography in two-dimensional heterogeneous tissues. The Jacobian matrix that is needed to reconstruct the optical properties is obtained by a first-order "differential operator" technique, which involves less variance than the conventional "correlated sampling" technique. The numerical examples in this paper indicate that the newly proposed Monte Carlo method provides reconstructed results for the scattering and absorption coefficients that compare favorably with the results obtained from conventional deterministic or Monte Carlo methods.
Monte Carlo Form-Finding Method for Tensegrity Structures
NASA Astrophysics Data System (ADS)
Li, Yue; Feng, Xi-Qiao; Cao, Yan-Ping
2010-05-01
In this paper, we propose a Monte Carlo-based approach to solve tensegrity form-finding problems. It uses a stochastic procedure to find the deterministic equilibrium configuration of a tensegrity structure. The suggested Monte Carlo form-finding (MCFF) method is highly efficient because it does not involve complicated matrix operations and symmetry analysis and it works for arbitrary initial configurations. Both regular and non-regular tensegrity problems of large scale can be solved. Some representative examples are presented to demonstrate the efficiency and accuracy of this versatile method.
Multiple-time-stepping generalized hybrid Monte Carlo methods
NASA Astrophysics Data System (ADS)
Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that the comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow for beating the best-performing methods, which use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo leads to improved stability of MTS and allows for achieving larger step sizes in the simulation of complex systems.
Multiple-time-stepping generalized hybrid Monte Carlo methods
Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency to its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only improve the performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo (GHMC) method. The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that placing the MTS approach in the framework of hybrid Monte Carlo, and exploiting the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
The alias method: A fast, efficient Monte Carlo sampling technique
Rathkopf, J.A.; Edwards, A.L.; Smidt, R.K.
1990-11-16
The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equal probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 2 figs., 1 tab.
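The table construction the abstract alludes to (table-lookup accuracy at equal-probable-bin speed) is easiest to see for the original discrete case. Below is a minimal Python sketch of Vose's formulation of the alias method; the function names and interface are ours for illustration, not from the paper:

```python
import random

def build_alias_table(probs):
    """Vose's alias method: O(n) setup for O(1) sampling from a discrete distribution."""
    n = len(probs)
    scaled = [p * n for p in probs]          # scale so the average bin height is 1
    alias = [0] * n
    prob = [0.0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s] = scaled[s]                  # fill the short column ...
        alias[s] = l                         # ... with mass borrowed from a tall one
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                  # leftovers equal 1 up to rounding
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias):
    """Draw one index: pick a bin uniformly, then make one biased coin flip."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]
```

Each sample costs one uniform bin choice plus one comparison, independent of the number of bins, which is the speed advantage the abstract claims over table lookup.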
Extending the alias Monte Carlo sampling method to general distributions
Edwards, A.L.; Rathkopf, J.A.; Smidt, R.K.
1991-01-07
The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equal probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 12 figs., 2 tabs.
Monte Carlo methods: Application to hydrogen gas and hard spheres
NASA Astrophysics Data System (ADS)
Dewing, Mark Douglas
2001-08-01
Quantum Monte Carlo (QMC) methods are among the most accurate for computing ground state properties of quantum systems. The two major types of QMC we use are Variational Monte Carlo (VMC), which evaluates integrals arising from the variational principle, and Diffusion Monte Carlo (DMC), which stochastically projects to the ground state from a trial wave function. These methods are applied to a system of boson hard spheres to get exact, infinite system size results for the ground state at several densities. The kinds of problems that can be simulated with Monte Carlo methods are expanded through the development of new algorithms for combining a QMC simulation with a classical Monte Carlo simulation, which we call Coupled Electronic-Ionic Monte Carlo (CEIMC). The new CEIMC method is applied to a system of molecular hydrogen at temperatures ranging from 2800 K to 4500 K and densities from 0.25 to 0.46 g/cm³. VMC requires optimizing a parameterized wave function to find the minimum energy. We examine several techniques for optimizing VMC wave functions, focusing on the ability to optimize parameters appearing in the Slater determinant. Classical Monte Carlo simulations use an empirical interatomic potential to compute equilibrium properties of various states of matter. The CEIMC method replaces the empirical potential with a QMC calculation of the electronic energy. This is similar in spirit to the Car-Parrinello technique, which uses Density Functional Theory for the electrons and molecular dynamics for the nuclei. The challenges in constructing an efficient CEIMC simulation center mostly around the noisy results generated from the QMC computations of the electronic energy. We introduce two complementary techniques, one for tolerating the noise and the other for reducing it. The penalty method modifies the Metropolis acceptance ratio to tolerate noise without introducing a bias in the simulation of the nuclei. For reducing the noise, we introduce the two-sided energy
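The penalty method mentioned in the closing sentences can be sketched generically. Assuming the energy-difference estimate carries Gaussian noise of known standard deviation `sigma` (the interface below is a hypothetical stand-in, not CEIMC code), the log-acceptance is reduced by the penalty term -(beta*sigma)²/2, which exactly compensates the bias the noise would otherwise introduce:

```python
import math
import random

def noisy_metropolis_step(x, propose, noisy_delta_E, sigma, beta=1.0):
    """One Metropolis step with the penalty method for noisy energy differences.

    noisy_delta_E(x, y) returns an unbiased estimate of E(y) - E(x) carrying
    Gaussian noise of known standard deviation sigma. The penalty term
    -(beta * sigma)**2 / 2 in the log-acceptance removes the bias the noise
    would otherwise introduce into the sampled distribution."""
    y = propose(x)
    delta = noisy_delta_E(x, y)
    log_accept = min(0.0, -beta * delta - (beta * sigma) ** 2 / 2.0)
    return y if random.random() < math.exp(log_accept) else x
```

With Gaussian noise this acceptance rule satisfies detailed balance on average, so the chain samples exp(-beta*E) despite never evaluating the energy difference exactly.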
Bayesian Monte Carlo Method for Nuclear Data Evaluation
Koning, A.J.
2015-01-15
A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using TALYS. The result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an experiment-based weight.
Mehranian, A.; Ay, M. R.; Alam, N. Riyahi; Zaidi, H.
2010-02-15
Purpose: The accurate prediction of x-ray spectra under typical conditions encountered in clinical x-ray examination procedures and the assessment of factors influencing them has been a long-standing goal of the diagnostic radiology and medical physics communities. In this work, the influence of anode surface roughness on diagnostic x-ray spectra is evaluated using MCNP4C-based Monte Carlo simulations. Methods: An image-based modeling method was used to create realistic models from surface-cracked anodes. An in-house computer program was written to model the geometric pattern of cracks and irregularities from digital images of the focal track surface in order to define the modeled anodes in the MCNP input file. To incorporate average roughness and mean crack depth into the models, the surface of the anodes was characterized by scanning electron microscopy and surface profilometry. It was found that the average roughness (R_a) in the most aged tube studied is about 50 µm. The correctness of MCNP4C in simulating diagnostic x-ray spectra was thoroughly verified by calling its Gaussian energy broadening card and comparing the simulated spectra with experimentally measured ones. The assessment of anode roughness involved the comparison of simulated spectra in deteriorated anodes with those simulated in perfectly plain anodes considered as reference. From these comparisons, the variations in output intensity, half value layer (HVL), heel effect, and patient dose were studied. Results: An intensity loss of 4.5% and 16.8% was predicted for anodes aged by 5 and 50 µm deep cracks (50 kVp, 6° target angle, and 2.5 mm Al total filtration). The variations in HVL were not significant as the spectra were not hardened by more than 2.5%; however, the trend for this variation was to increase with roughness. By deploying several point detector tallies along the anode-cathode direction and averaging exposure over them, it was found that for a 6° anode, roughened by 50 µm deep
A separable shadow Hamiltonian hybrid Monte Carlo method
NASA Astrophysics Data System (ADS)
Sweet, Christopher R.; Hampton, Scott S.; Skeel, Robert D.; Izaguirre, Jesús A.
2009-11-01
Hybrid Monte Carlo (HMC) is a rigorous sampling method that uses molecular dynamics (MD) as a global Monte Carlo move. The acceptance rate of HMC decays exponentially with system size. The shadow hybrid Monte Carlo (SHMC) was previously introduced to reduce this performance degradation by sampling instead from the shadow Hamiltonian defined for MD when using a symplectic integrator. SHMC's performance is limited by the need to generate momenta for the MD step from a nonseparable shadow Hamiltonian. We introduce the separable shadow Hamiltonian hybrid Monte Carlo (S2HMC) method based on a formulation of the leapfrog/Verlet integrator that corresponds to a separable shadow Hamiltonian, which allows efficient generation of momenta. S2HMC gives the acceptance rate of a fourth order integrator at the cost of a second-order integrator. Through numerical experiments we show that S2HMC consistently gives a speedup greater than two over HMC for systems with more than 4000 atoms for the same variance. By comparison, SHMC gave a maximum speedup of only 1.6 over HMC. S2HMC has the additional advantage of not requiring any user parameters beyond those of HMC. S2HMC is available in the program PROTOMOL 2.1. A Python version, adequate for didactic purposes, is also in MDL (http://mdlab.sourceforge.net/s2hmc).
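For context, the baseline HMC move that S2HMC refines can be sketched in one dimension. This is a generic textbook leapfrog/Verlet implementation, not code from PROTOMOL: fresh Gaussian momentum, a leapfrog trajectory, and a Metropolis accept/reject on the total-energy error:

```python
import math
import random

def hmc_step(q, grad_U, U, eps=0.1, n_steps=20):
    """One hybrid Monte Carlo move for a 1D target exp(-U(q)): draw a fresh
    Gaussian momentum, integrate Hamiltonian dynamics with the leapfrog
    scheme, then Metropolis-accept on the change in H = U(q) + p**2/2."""
    p = random.gauss(0.0, 1.0)
    q_new, p_new = q, p
    p_new -= 0.5 * eps * grad_U(q_new)      # initial half kick
    for _ in range(n_steps - 1):
        q_new += eps * p_new                # drift
        p_new -= eps * grad_U(q_new)        # full kick
    q_new += eps * p_new                    # final drift
    p_new -= 0.5 * eps * grad_U(q_new)      # final half kick
    dH = (U(q_new) + 0.5 * p_new ** 2) - (U(q) + 0.5 * p ** 2)
    return q_new if random.random() < math.exp(min(0.0, -dH)) else q
```

Because the leapfrog scheme conserves a shadow Hamiltonian much better than the true one, sampling against the shadow (as SHMC and S2HMC do) keeps dH, and hence the rejection rate, small as the system grows.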
A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification
NASA Astrophysics Data System (ADS)
Wu, Keyi; Li, Jinglai
2016-09-01
In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithms, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitudes of speedup over the standard Monte Carlo methods.
Bayesian Monte Carlo method for nuclear data evaluation
NASA Astrophysics Data System (ADS)
Koning, A. J.
2015-12-01
A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which makes it possible to set the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the different schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by the EXFOR-based weight.
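The weighted-sampling step can be illustrated schematically. The function below is a hypothetical stand-in (not TALYS or EXFOR code): each randomly drawn parameter set receives a likelihood weight exp(-chi²/2) against the experimental points, and the normalized weights turn the prior ensemble into a weighted posterior ensemble:

```python
import math
import random

def bayesian_mc_weights(prior_samples, model, data, data_unc):
    """Weight each randomly drawn parameter set by its likelihood exp(-chi2/2)
    against the experimental data points; the normalized weights turn the
    prior ensemble into a weighted (posterior) ensemble."""
    weights = []
    for theta in prior_samples:
        chi2 = sum(((model(theta, i) - d) / u) ** 2
                   for i, (d, u) in enumerate(zip(data, data_unc)))
        weights.append(math.exp(-0.5 * chi2))
    z = sum(weights)
    return [w / z for w in weights]
```

Posterior means, covariance matrices, or a weighted selection of "random files" can then all be read off the same weighted ensemble.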
A new method for commissioning Monte Carlo treatment planning systems
NASA Astrophysics Data System (ADS)
Aljarrah, Khaled Mohammed
2005-11-01
The Monte Carlo method is an accurate method for solving numerical problems in different fields. It has been used for accurate radiation dose calculation in radiation treatment of cancer. However, the modeling of an individual radiation beam produced by a medical linear accelerator for Monte Carlo dose calculation, i.e., the commissioning of a Monte Carlo treatment planning system, has been the bottleneck for the clinical implementation of Monte Carlo treatment planning. In this study a new method has been developed to determine the parameters of the initial electron beam incident on the target for a clinical linear accelerator. The interaction of the initial electron beam with the accelerator target produces x rays and secondary charged particles. After successive interactions in the linac head components, the x-ray photons and the secondary charged particles interact with the patient's anatomy and deliver dose to the region of interest. The determination of the initial electron beam parameters is important for estimating the dose delivered to patients. These parameters, such as beam energy and radial intensity distribution, are usually estimated through a trial-and-error process. In this work an easy and efficient method was developed to determine these parameters. This was accomplished by comparing calculated 3D dose distributions, for a grid of assumed beam energies and radii, with measured data in a water phantom. Different cost functions were studied to choose the appropriate function for the data comparison. The beam parameters were then determined using this method. Assuming that linacs of the same type are identical in geometry and differ only in their initial phase-space parameters, the results of this method can serve as source data to commission other machines of the same type.
Cluster Monte Carlo methods for the FePt Hamiltonian
NASA Astrophysics Data System (ADS)
Lyberatos, A.; Parker, G. J.
2016-02-01
Cluster Monte Carlo methods for the classical spin Hamiltonian of FePt with long-range exchange interactions are presented. We use a combination of the Swendsen-Wang (or Wolff) and Metropolis algorithms that satisfies the detailed balance condition and ergodicity. The algorithms are tested by calculating the temperature dependence of the magnetization, susceptibility and heat capacity of L10-FePt nanoparticles in a range including the critical region. The cluster models yield numerical results in good agreement, within statistical error, with the standard single-spin-flipping Monte Carlo method. The variation of the spin autocorrelation time with grain size is used to deduce the dynamic exponent of the algorithms. Our cluster models do not, however, provide a more accurate estimate of the magnetic properties at equilibrium.
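As a reference point for the cluster moves discussed above, the Wolff single-cluster update for a plain nearest-neighbour Ising model can be sketched as follows; the long-range FePt Hamiltonian generalizes the bond-activation probability to every interacting pair, but the grow-and-flip structure is the same:

```python
import math
import random

def wolff_step(spins, L, beta, J=1.0):
    """One Wolff single-cluster update for a 2D nearest-neighbour Ising model
    on an L x L periodic lattice. spins maps (i, j) -> +1/-1. Bonds between
    aligned spins are activated with probability 1 - exp(-2*beta*J); the grown
    cluster is flipped as a whole, which satisfies detailed balance."""
    p_add = 1.0 - math.exp(-2.0 * beta * J)
    seed = (random.randrange(L), random.randrange(L))
    s0 = spins[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        i, j = stack.pop()
        neighbours = (((i + 1) % L, j), ((i - 1) % L, j),
                      (i, (j + 1) % L), (i, (j - 1) % L))
        for site in neighbours:
            if site not in cluster and spins[site] == s0 and random.random() < p_add:
                cluster.add(site)
                stack.append(site)
    for site in cluster:
        spins[site] = -spins[site]
    return len(cluster)
```

Near the critical point the flipped clusters are large, which is what suppresses the critical slowing down that single-spin-flip Metropolis suffers from.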
Parallel Monte Carlo Synthetic Acceleration methods for discrete transport problems
NASA Astrophysics Data System (ADS)
Slattery, Stuart R.
This work researches and develops Monte Carlo Synthetic Acceleration (MCSA) methods as a new class of solution techniques for discrete neutron transport and fluid flow problems. Monte Carlo Synthetic Acceleration methods use a traditional Monte Carlo process to approximate the solution to the discrete problem as a means of accelerating traditional fixed-point methods. To apply these methods to neutronics and fluid flow and determine the feasibility of these methods on modern hardware, three complementary research and development exercises are performed. First, solutions to the SPN discretization of the linear Boltzmann neutron transport equation are obtained using MCSA with a difficult criticality calculation for a light water reactor fuel assembly used as the driving problem. To enable MCSA as a solution technique a group of modern preconditioning strategies are researched. MCSA when compared to conventional Krylov methods demonstrated improved iterative performance over GMRES by converging in fewer iterations when using the same preconditioning. Second, solutions to the compressible Navier-Stokes equations were obtained by developing the Forward-Automated Newton-MCSA (FANM) method for nonlinear systems based on Newton's method. Three difficult fluid benchmark problems in both convective and driven flow regimes were used to drive the research and development of the method. For 8 out of 12 benchmark cases, it was found that FANM had better iterative performance than the Newton-Krylov method by converging the nonlinear residual in fewer linear solver iterations with the same preconditioning. Third, a new domain decomposed algorithm to parallelize MCSA aimed at leveraging leadership-class computing facilities was developed by utilizing parallel strategies from the radiation transport community. The new algorithm utilizes the Multiple-Set Overlapping-Domain strategy in an attempt to reduce parallel overhead and add a natural element of replication to the algorithm. It
Calculations of pair production by Monte Carlo methods
Bottcher, C.; Strayer, M.R.
1991-01-01
We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. To answer these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs.
Monte Carlo error estimation applied to nondestructive assay methods
Estep, R.; et al.
2000-06-01
Monte Carlo randomization of nuclear counting data into N replicate sets is the basis of a simple and effective method for estimating error propagation through complex analysis algorithms such as those using neural networks or tomographic image reconstructions. The error distributions of properly simulated replicate data sets mimic those of actual replicate measurements and can be used to estimate the std. dev. for an assay along with other statistical quantities. We have used this technique to estimate the standard deviation in radionuclide masses determined using the tomographic gamma scanner (TGS) and combined thermal/epithermal neutron (CTEN) methods. The effectiveness of this approach is demonstrated by a comparison of our Monte Carlo error estimates with the error distributions in actual replicate measurements and simulations of measurements. We found that the std. dev. estimated this way quickly converges to an accurate value on average and has a predictable error distribution similar to N actual repeat measurements. The main drawback of the Monte Carlo method is that N additional analyses of the data are required, which may be prohibitively time consuming with slow analysis algorithms.
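The replicate-randomization idea can be sketched generically, assuming uncorrelated Poisson counting statistics; the analysis function here is a placeholder for something complex like a TGS reconstruction or a neural network, which is exactly where analytic error propagation becomes impractical:

```python
import math
import random
import statistics

def replicate_error(counts, analysis, n_replicates=200):
    """Monte Carlo error propagation for counting data: Poisson-randomize the
    measured counts into N replicate data sets, push each replicate through
    the (possibly complex) analysis function, and report the spread of the
    results as the assay uncertainty."""
    def poisson(mu):
        # Knuth's multiplication method; adequate for modest count rates
        limit, k, p = math.exp(-mu), 0, 1.0
        while p > limit:
            k += 1
            p *= random.random()
        return k - 1
    results = [analysis([poisson(c) for c in counts]) for _ in range(n_replicates)]
    return statistics.mean(results), statistics.stdev(results)
```

As the abstract notes, the cost is N extra runs of the analysis, so the approach suits fast analysis algorithms best.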
A new lattice Monte Carlo method for simulating dielectric inhomogeneity
NASA Astrophysics Data System (ADS)
Duan, Xiaozheng; Wang, Zhen-Gang; Nakamura, Issei
We present a new lattice Monte Carlo method for simulating systems involving dielectric contrast between different species, by modifying an algorithm originally proposed by Maggs et al. The original algorithm is known to generate attractive interactions between particles whose dielectric constant differs from that of the solvent. Here we show that this attractive force is spurious, arising from an incorrectly biased statistical weight caused by particle motion during the Monte Carlo moves. We propose a new, simple algorithm to resolve this erroneous sampling. We demonstrate the application of our algorithm by simulating an uncharged polymer in a solvent with a different dielectric constant. Further, we show that the electrostatic fields in ionic crystals obtained from our simulations with a relatively small simulation box correspond well with results from the analytical solution. Thus, our Monte Carlo method avoids the need for the Ewald summation used in conventional simulation methods for charged systems. This work was supported by the National Natural Science Foundation of China (21474112 and 21404103). We are grateful to the Computing Center of Jilin Province for essential support.
Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen
2012-01-01
In this study the commissioning of a dose calculation algorithm in a currently used treatment planning system (TPS) was performed, and the calculation accuracy of two methods available in the TPS, collapsed cone convolution (CCC) and equivalent tissue air ratio (ETAR), was verified in tissue heterogeneities. For this purpose an inhomogeneous phantom (IMRT thorax phantom) was used, and dose curves obtained by the TPS were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed using EDR2 radiographic films within the phantom, and the dose difference (DD) between the experimental results and the two calculation methods was obtained. The results indicate a maximum difference of 12% in the lung and 3% in the bone tissue of the phantom between the two methods, with the CCC algorithm showing more accurate depth dose curves in tissue heterogeneities. The simulation results show accurate dose estimation by MCNP4C in the soft tissue region of the phantom, and better agreement than the ETAR method in bone and lung tissues. PMID:22973081
Baker, R.S.; Filippone, W.F. (Dept. of Nuclear and Energy Engineering); Alcouffe, R.E.
1991-01-01
The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometric separation or decoupling. The fully coupled Monte Carlo/S_N technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_N calculation is to be performed. The Monte Carlo and S_N regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and group sources. The hybrid method provides a new way of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for by itself. The fully coupled Monte Carlo/S_N method has been implemented in the S_N code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and group sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_N code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating the S_N calculations. The Monte Carlo routines have been successfully vectorized, with an approximately fivefold increase in speed over the nonvectorized version. The hybrid method is capable of solving forward, inhomogeneous source problems in X-Y and R-Z geometries. This capability now includes multigroup problems involving upscatter and fission in non-highly multiplying systems. 8 refs., 8 figs., 1 tab.
On Monte Carlo Methods and Applications in Geoscience
NASA Astrophysics Data System (ADS)
Zhang, Z.; Blais, J.
2009-05-01
Monte Carlo methods are designed to study various deterministic problems using probabilistic approaches, with computer simulations used to explore much wider possibilities for the different algorithms. Pseudo-Random Number Generators (PRNGs) are based on linear congruences of some large prime numbers, while Quasi-Random Number Generators (QRNGs) provide low-discrepancy sequences, both of which give uniformly distributed numbers in (0,1). Chaotic Random Number Generators (CRNGs) give sequences of 'random numbers' satisfying some prescribed probability density, often denser around the two corners of the interval (0,1), but transforming this type of density to a uniform one is usually possible. Markov Chain Monte Carlo (MCMC), as indicated by its name, is associated with Markov chain simulations. Basic descriptions of these random number generators will be given, and a comparative analysis of these four methods will be included based on their efficiencies and other characteristics. Some applications in geoscience using Monte Carlo simulations will be described, and a comparison of these algorithms will also be included with some concluding remarks.
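The practical difference between PRNGs and QRNGs is easy to demonstrate. The sketch below estimates pi from points in the unit square using either Python's built-in pseudo-random generator or a 2D Halton low-discrepancy sequence (bases 2 and 3); it is a generic illustration, not code from the paper:

```python
import random

def halton(index, base):
    """Element `index` (1-based) of the van der Corput sequence in the given
    base; pairing bases 2 and 3 gives a 2D Halton low-discrepancy sequence."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def estimate_pi(n, quasi=False):
    """Quarter-circle area estimate of pi from n points in the unit square,
    using either pseudo-random or quasi-random (Halton) points."""
    hits = 0
    for k in range(1, n + 1):
        x = halton(k, 2) if quasi else random.random()
        y = halton(k, 3) if quasi else random.random()
        hits += x * x + y * y <= 1.0
    return 4.0 * hits / n
```

For smooth integrands the low-discrepancy points converge roughly like (log n)^d / n rather than the 1/sqrt(n) of plain Monte Carlo, which is why the quasi-random estimate is noticeably tighter at the same n.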
A multi-scale Monte Carlo method for electrolytes
NASA Astrophysics Data System (ADS)
Liang, Yihao; Xu, Zhenli; Xing, Xiangjun
2015-08-01
Artifacts arise in simulations of electrolytes using periodic boundary conditions (PBCs). We show that these artifacts originate from the periodic image charges and the constraint of charge neutrality inside the simulation box, both of which are unphysical from the viewpoint of real systems. To cure these problems, we introduce a multi-scale Monte Carlo (MC) method, where ions inside a spherical cavity are simulated explicitly, while ions outside are treated implicitly using a continuum theory. Using the method of Debye charging, we explicitly derive the effective interactions between ions inside the cavity arising from the fluctuations of ions outside. We find that these effective interactions consist of two types: (1) a constant cavity potential due to the asymmetry of the electrolyte, and (2) a reaction potential that depends on the positions of all ions inside. Combining grand canonical Monte Carlo (GCMC) with a recently developed fast algorithm based on the image charge method, we perform a multi-scale MC simulation of symmetric electrolytes and compare it with other simulation methods, including the PBC + GCMC method as well as large-scale MC simulation. We demonstrate that our multi-scale MC method is capable of capturing the correct physics of a large system using a small-scale simulation.
Stabilized multilevel Monte Carlo method for stiff stochastic differential equations
NASA Astrophysics Data System (ADS)
Abdulle, Assyr; Blumenthal, Adrian
2013-10-01
A multilevel Monte Carlo (MLMC) method for mean square stable stochastic differential equations with multiple scales is proposed. For such problems, which we call stiff, the performance of MLMC methods based on classical explicit methods deteriorates because the time step restriction needed to resolve the fastest scales prevents exploiting all the levels of the MLMC approach. We show that by switching to explicit stabilized stochastic methods and balancing the stabilization procedure simultaneously with the hierarchical sampling strategy of MLMC methods, the computational cost for stiff systems is significantly reduced, while keeping the computational algorithm fully explicit and easy to implement. Numerical experiments on linear and nonlinear stochastic differential equations and on a stochastic partial differential equation illustrate the performance of the stabilized MLMC method and corroborate our theoretical findings.
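The telescoping structure of a plain (unstabilized) MLMC estimator can be sketched for a simple non-stiff test problem: geometric Brownian motion discretized with Euler-Maruyama. The stabilized variant of the paper replaces the integrator, not this structure. The problem, parameters, and sample allocation below are our own illustrative choices:

```python
import math
import random

def mlmc_gbm(mu, sigma, S0, T, levels, n_samples):
    """Multilevel Monte Carlo estimate of E[S_T] for geometric Brownian motion
    dS = mu*S dt + sigma*S dW, discretized with Euler-Maruyama. Level l uses
    2**l time steps; each correction term couples a fine and a coarse path
    through the same Brownian increments, so its variance decays with l and
    most samples can be spent on the cheap coarse levels."""
    total = 0.0
    for l in range(levels + 1):
        n_f = 2 ** l
        dt = T / n_f
        acc = 0.0
        for _ in range(n_samples[l]):
            dW = [random.gauss(0.0, math.sqrt(dt)) for _ in range(n_f)]
            fine = S0
            for w in dW:
                fine += mu * fine * dt + sigma * fine * w
            if l == 0:
                acc += fine
            else:
                coarse = S0
                for k in range(0, n_f, 2):  # coarse path: pairs of summed increments
                    coarse += mu * coarse * (2 * dt) + sigma * coarse * (dW[k] + dW[k + 1])
                acc += fine - coarse
        total += acc / n_samples[l]
    return total
```

The sum over levels telescopes to the finest-level expectation, so the bias is that of the finest discretization while most of the sampling work happens on the coarse, cheap levels.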
Improved criticality convergence via a modified Monte Carlo iteration method
Booth, Thomas E; Gubernatis, James E
2009-01-01
Nuclear criticality calculations with Monte Carlo codes are normally done using a power iteration method to obtain the dominant eigenfunction and eigenvalue. In the last few years it has been shown that the power iteration method can be modified to obtain the first two eigenfunctions. This modified power iteration method directly subtracts out the second eigenfunction and thus only powers out the third and higher eigenfunctions. The result is a convergence rate to the dominant eigenfunction of |k_3|/k_1 instead of |k_2|/k_1. One difficulty is that the second eigenfunction contains particles of both positive and negative weights, which must be summed appropriately to maintain the second eigenfunction. Summing negative and positive weights can be done using point detector mechanics, but this can sometimes be quite slow. We show that an approximate cancellation scheme is sufficient to accelerate convergence to the dominant eigenfunction. A second difficulty is that for some problems the Monte Carlo implementation of the modified power method has stability problems. We also show that a simple method deals with this in an effective, but ad hoc, manner.
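For reference, the unmodified power iteration whose |k_2|/k_1 convergence rate the modified method improves can be sketched for a generic linear operator (a small matrix stands in for the transport operator here):

```python
def power_iteration(matvec, x0, n_iter=100):
    """Plain power iteration: repeatedly apply the operator and renormalize.
    The iterate converges to the dominant eigenvector at a geometric rate set
    by the eigenvalue ratio |k2|/|k1|; the infinity norm of the unnormalized
    iterate estimates |k1|."""
    x = list(x0)
    k = 0.0
    for _ in range(n_iter):
        y = matvec(x)
        k = max(abs(v) for v in y)  # infinity-norm eigenvalue estimate
        x = [v / k for v in y]
    return k, x
```

When |k_2|/k_1 is close to one, as in loosely coupled criticality problems, this plain iteration converges very slowly, which is the motivation for subtracting out the second eigenfunction.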
A simple eigenfunction convergence acceleration method for Monte Carlo
Booth, Thomas E
2010-11-18
Monte Carlo transport codes typically use a power iteration method to obtain the fundamental eigenfunction. The standard convergence rate for the power iteration method is the ratio of the first two eigenvalues, that is, k_2/k_1. Modifications to the power method have accelerated the convergence by explicitly calculating the subdominant eigenfunctions as well as the fundamental. Calculating the subdominant eigenfunctions requires using particles of negative and positive weights and appropriately canceling the negative and positive weight particles. Incorporating both negative weights and a ± weight cancellation requires a significant change to current transport codes. This paper presents an alternative convergence acceleration method that does not require modifying the transport codes to deal with the problems associated with tracking and canceling particles of ± weights. Instead, only positive weights are used in the acceleration method.
Direct simulation Monte Carlo method with a focal mechanism algorithm
NASA Astrophysics Data System (ADS)
Rachman, Asep Nur; Chung, Tae Woong; Yoshimoto, Kazuo; Yun, Sukyoung
2015-01-01
To simulate the observed radiation pattern of an earthquake, the direct simulation Monte Carlo (DSMC) method is modified by implanting a focal mechanism algorithm. We compare the results of the modified DSMC method (DSMC-2) with those of the original DSMC method (DSMC-1). DSMC-2 gives results as reliable as, or more reliable than, those of DSMC-1 for events recorded by 12 or more stations, when hypocentral distances of less than 80 km are weighted twice. Not only the number of stations but also other factors, such as rough topography, event magnitude, and the analysis method, influence the reliability of DSMC-2. The most reliable DSMC-2 results are obtained with the best azimuthal coverage by the largest number of stations. The DSMC-2 method requires shorter time steps and a larger number of particles than DSMC-1 to capture a sufficient number of arrived particles in the small-sized receiver.
Daures, J; Gouriou, J; Bordy, J M
2011-03-01
This work has been performed within the frame of the European Union ORAMED project (Optimisation of RAdiation protection for MEDical staff). The main goal of the project is to improve standards of protection for medical staff in procedures resulting in potentially high exposures, and to develop methodologies for better assessing and reducing exposures to medical staff. Work Package WP2 is involved in the development of practical eye-lens dosimetry in interventional radiology. This study is complementary to the part of the ENEA report concerning the calculations, with the MCNP-4C code, of the conversion factors related to the operational quantity H(p)(3). In this study, a set of energy- and angular-dependent conversion coefficients (H(p)(3)/K(a)), in the newly proposed square cylindrical phantom made of ICRU tissue, has been calculated with the Monte Carlo codes PENELOPE and MCNP5. The H(p)(3) values have been determined in terms of absorbed dose, according to the definition of this quantity, and also with the kerma approximation as formerly reported in ICRU reports. At low photon energies (up to 1 MeV), the results obtained with the two methods are consistent. Nevertheless, large differences appear at higher energies. This is mainly due to the lack of electronic equilibrium, especially at small incidence angles. The values of the conversion coefficients obtained with the MCNP-4C code published by ENEA agree well with the kerma-approximation calculations obtained with PENELOPE. We also performed the same calculations with the code MCNP5 with two types of tallies: F6 for the kerma approximation and *F8 for estimating the absorbed dose, which is deposited by secondary electrons. PENELOPE and MCNP5 results agree for the kerma approximation and for the absorbed-dose calculation of H(p)(3), and prove that, for photon energies larger than 1 MeV, the transport of the secondary electrons has to be taken into account. PMID:21242167
Condensed history Monte Carlo methods for photon transport problems
Bhan, Katherine; Spanier, Jerome
2007-01-01
We study methods for accelerating Monte Carlo simulations that retain most of the accuracy of conventional Monte Carlo algorithms. These methods – called Condensed History (CH) methods – have been very successfully used to model the transport of ionizing radiation in turbid systems. Our primary objective is to determine whether or not such methods might apply equally well to the transport of photons in biological tissue. In an attempt to unify the derivations, we invoke results obtained first by Lewis, Goudsmit and Saunderson and later improved by Larsen and Tolar. We outline how two of the most promising of the CH models – one based on satisfying certain similarity relations and the second making use of a scattering phase function that permits only discrete directional changes – can be developed using these approaches. The main idea is to exploit the connection between the space-angle moments of the radiance and the angular moments of the scattering phase function. We compare the results obtained when the two CH models studied are used to simulate an idealized tissue transport problem. The numerical results support our findings based on the theoretical derivations and suggest that CH models should play a useful role in modeling light-tissue interactions. PMID:18548128
Mammography X-Ray Spectra Simulated with Monte Carlo
Vega-Carrillo, H. R.; Gonzalez, J. Ramirez; Manzanares-Acuna, E.; Hernandez-Davila, V. M.; Villasana, R. Hernandez; Mercado, G. A.
2008-08-11
Monte Carlo calculations have been carried out to obtain the x-ray spectra of various target-filter combinations for a mammography unit. Mammography is widely used to diagnose breast cancer. In addition to the Mo target with Mo filter combination, Rh/Rh, Mo/Rh, Mo/Al, Rh/Al, and W/Rh are also utilized. In this work Monte Carlo calculations, using the MCNP 4C code, were carried out to estimate the x-ray spectra produced when a beam of 28 keV electrons collided with Mo, Rh and W targets. The resulting x-ray spectra show characteristic x-rays and continuous bremsstrahlung. Spectra were also calculated including filters.
Analysis of real-time networks with monte carlo methods
NASA Astrophysics Data System (ADS)
Mauclair, C.; Durrieu, G.
2013-12-01
Communication networks in embedded systems are ever larger and more complex. A better understanding of the dynamics of these networks is necessary to use them to best effect and at lower cost. Today's tools are able to compute upper bounds on the end-to-end delays that a packet sent through the network could suffer. However, in the case of asynchronous networks, those worst end-to-end delay (WEED) cases are rarely observed in practice or through simulations, because the situations that lead to worst-case scenarios are rare. A novel approach based on Monte Carlo methods is suggested to study the effects of the asynchrony on performance.
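The idea can be sketched minimally: sample many end-to-end traversals with a random phase offset at each hop and compare the observed delay distribution with the analytic worst-case bound. The hop count, period and service time below are invented for illustration, not taken from the paper:

```python
import random

random.seed(42)

N_HOPS, PERIOD, SERVICE = 5, 10.0, 1.0   # assumed toy network parameters

def end_to_end_delay():
    """One sampled traversal: at each hop the packet waits a uniformly
    random phase (asynchronous frame offsets) plus a fixed service time."""
    return sum(random.uniform(0.0, PERIOD) + SERVICE for _ in range(N_HOPS))

samples = sorted(end_to_end_delay() for _ in range(100000))
weed = N_HOPS * (PERIOD + SERVICE)            # analytic worst case: 55.0
p999 = samples[int(0.999 * len(samples))]     # observed 99.9th percentile
```

Even the 99.9th percentile sits well below the worst-case bound, illustrating why WEED scenarios are essentially never observed when the hops are unsynchronised.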
Constrained Path Quantum Monte Carlo Method for Fermion Ground States
NASA Astrophysics Data System (ADS)
Zhang, Shiwei; Carlson, J.; Gubernatis, J. E.
1995-05-01
We propose a new quantum Monte Carlo algorithm to compute fermion ground-state properties. The ground state is projected from an initial wave function by a branching random walk in an over-complete basis space of Slater determinants. By constraining the determinants according to a trial wave function |ΨT⟩, we remove the exponential decay of signal-to-noise ratio characteristic of the sign problem. The method is variational and is exact if |ΨT⟩ is exact. We report results on the two-dimensional Hubbard model up to size 16×16, for various electron fillings and interaction strengths.
Application of Monte Carlo methods in tomotherapy and radiation biophysics
NASA Astrophysics Data System (ADS)
Hsiao, Ya-Yun
Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The MC percentage depth dose and beam profiles computed using the source model agree within 2% of measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University). Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published
Zeinali-Rafsanjani, B.; Mosleh-Shirazi, M. A.; Faghihi, R.; Karbasi, S.; Mosalaei, A.
2015-01-01
To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, half-value layer (HVL), percentage depth doses (PDDs) and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for the majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduced run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for its use in treatment planning and radiobiological modeling studies of superficial therapies including chest-wall irradiation using kilovoltage beams. PMID:26170553
The macro response Monte Carlo method for electron transport
Svatos, M M
1998-09-01
The main goal of this thesis was to prove the feasibility of basing electron depth dose calculations in a phantom on first-principles single scatter physics, in an amount of time that is equal to or better than current electron Monte Carlo methods. The Macro Response Monte Carlo (MRMC) method achieves run times that are on the order of conventional electron transport methods such as condensed history, with the potential to be much faster. This is possible because MRMC is a Local-to-Global method, meaning the problem is broken down into two separate transport calculations. The first stage is a local, in this case single scatter, calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position and trajectory after leaving the local geometry, a small sphere or "kugel". A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25-8 MeV) and sizes (0.025-0.1 cm in radius). The second transport stage is a global calculation, where steps that conform to the size of the kugels in the library are taken through the global geometry. For each step, the appropriate PDFs from the MRMC library are sampled to determine the electron's new energy, position and trajectory. The electron is immediately advanced to the end of the step and then chooses another kugel to sample, which continues until transport is completed. The MRMC global stepping code was benchmarked as a series of subroutines inside the Peregrine Monte Carlo code. It was compared to Peregrine's class II condensed history electron transport package, EGS4, and MCNP for depth dose in simple phantoms having density inhomogeneities. Since the kugels completed in the library were of relatively small size, the zoning of the phantoms was scaled down from a clinical size, so that the energy deposition algorithms for spreading dose across 5-10 zones per kugel could be tested. Most
The macro response Monte Carlo method for electron transport
NASA Astrophysics Data System (ADS)
Svatos, Michelle Marie
1998-10-01
This thesis proves the feasibility of basing depth dose calculations for electron radiotherapy on first-principles single scatter physics, in an amount of time that is comparable to or better than current electron Monte Carlo methods. The Macro Response Monte Carlo (MRMC) method achieves run times that have the potential to be much faster than conventional electron transport methods such as condensed history. This is possible because MRMC is a Local-to-Global method, meaning the problem is broken down into two separate transport calculations. The first stage is a local, single scatter calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position and trajectory after leaving the local geometry, a small sphere or 'kugel'. A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25-8 MeV) and sizes (0.025-0.1 cm in radius). The second transport stage is a global calculation, where steps that conform to the size of the kugels in the library are taken through the global geometry, which in this case is a CT (computed tomography) scan of a patient or phantom. For each step, the appropriate PDFs from the MRMC library are sampled to determine the electron's new energy, position and trajectory. The electron is immediately advanced to the end of the step and then chooses another kugel to sample, which continues until transport is completed. The MRMC global stepping code was benchmarked as a series of subroutines inside the Peregrine Monte Carlo code against EGS4 and MCNP for depth dose in simple phantoms having density inhomogeneities. The energy deposition algorithms for spreading dose across 5-10 zones per kugel were tested. Most resulting depth dose calculations were within 2-3% of well-benchmarked codes, with one excursion to 4%. This thesis shows that the concept of using single scatter-based physics in clinical radiation
Methods for variance reduction in Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reducing the variance of the dose distribution in a computational volume. The dose distribution is computed by tracing a large number of rays and tracking the absorption and scattering of the rays within the discrete voxels that comprise the volume. Variance reduction is shown here using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed here.
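The first listed technique, quasi-random sampling, can be sketched with a low-discrepancy sequence: its points cover the unit interval more evenly than pseudorandom draws, reducing the error of a Monte Carlo estimate. The integrand below is a toy stand-in chosen for illustration, not the photon-transport kernel from the paper:

```python
import math, random

random.seed(3)

def van_der_corput(i, base=2):
    """i-th point of the base-2 van der Corput low-discrepancy sequence."""
    q, denom = 0.0, 1.0
    while i:
        denom *= base
        i, r = divmod(i, base)
        q += r / denom
    return q

# Estimate the integral of exp(x) over [0, 1]; the exact value is e - 1.
n = 4096
f = math.exp
mc = sum(f(random.random()) for _ in range(n)) / n          # pseudorandom
qmc = sum(f(van_der_corput(i + 1)) for i in range(n)) / n   # quasi-random
exact = math.e - 1.0
```

For a smooth integrand the quasi-random estimate converges roughly as O(1/n) instead of the O(1/√n) of plain Monte Carlo, which is the sense in which it reduces variance.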
POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2013-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well-known technique of generating a large number of samples in a Monte Carlo study and estimating power as the percentage of cases in which an estimate of interest is significantly different from zero. Examples of power calculation for commonly used mediational models are provided. Power analyses for the single mediator, multiple mediators, three-path mediation, mediation with latent variables, moderated mediation, and mediation in longitudinal designs are described. Annotated sample syntax for Mplus is appended, and tabulated values of required sample sizes are shown for some models. PMID:23935262
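The described procedure, simulating many samples and counting the fraction in which the effect of interest is significant, can be sketched for the single-mediator case. The path coefficients, sample size and the use of a Sobel z-test below are illustrative assumptions, not values or methods prescribed by the paper:

```python
import math, random

random.seed(7)

def ols_slope_se(x, y):
    """Slope and standard error for the no-intercept model y = b*x + e
    (the simulated data are centred, so omitting the intercept is fine)."""
    n = len(x)
    sxx = sum(v * v for v in x)
    b = sum(u * v for u, v in zip(x, y)) / sxx
    sse = sum((v - b * u) ** 2 for u, v in zip(x, y))
    return b, math.sqrt(sse / (n - 2) / sxx)

def mediation_power(a=0.39, b=0.39, n=100, n_sims=1000):
    """Monte Carlo power for the indirect effect a*b in X -> M -> Y."""
    hits = 0
    for _ in range(n_sims):
        x = [random.gauss(0, 1) for _ in range(n)]
        m = [a * xi + random.gauss(0, 1) for xi in x]      # mediator path
        y = [b * mi + random.gauss(0, 1) for mi in m]      # outcome path
        a_hat, se_a = ols_slope_se(x, m)
        b_hat, se_b = ols_slope_se(m, y)
        # Sobel z-test for the indirect effect a_hat * b_hat
        z = a_hat * b_hat / math.sqrt(a_hat**2 * se_b**2 + b_hat**2 * se_a**2)
        hits += abs(z) > 1.96
    return hits / n_sims

power = mediation_power()
```

Power is simply the fraction of simulated datasets in which the indirect effect is significant; the same loop generalises to the more complex models listed in the abstract by swapping in the corresponding fitting step.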
Parallel Performance Optimization of the Direct Simulation Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gao, Da; Zhang, Chonglin; Schwartzentruber, Thomas
2009-11-01
Although the direct simulation Monte Carlo (DSMC) particle method is more computationally intensive compared to continuum methods, it is accurate for conditions ranging from continuum to free-molecular, accurate in highly non-equilibrium flow regions, and holds potential for incorporating advanced molecular-based models for gas-phase and gas-surface interactions. As available computer resources continue their rapid growth, the DSMC method is continually being applied to increasingly complex flow problems. Although processor clock speed continues to increase, a trend of increasing multi-core-per-node parallel architectures is emerging. To effectively utilize such current and future parallel computing systems, a combined shared/distributed memory parallel implementation (using both Open Multi-Processing (OpenMP) and Message Passing Interface (MPI)) of the DSMC method is under development. The parallel implementation of a new state-of-the-art 3D DSMC code employing an embedded 3-level Cartesian mesh will be outlined. The presentation will focus on performance optimization strategies for DSMC, which includes, but is not limited to, modified algorithm designs, practical code-tuning techniques, and parallel performance optimization. Specifically, key issues important to the DSMC shared memory (OpenMP) parallel performance are identified as (1) granularity (2) load balancing (3) locality and (4) synchronization. Challenges and solutions associated with these issues as they pertain to the DSMC method will be discussed.
A modified Monte Carlo 'local importance function transform' method
Keady, K. P.; Larsen, E. W.
2013-07-01
The Local Importance Function Transform (LIFT) method uses an approximation of the contribution transport problem to bias a forward Monte-Carlo (MC) source-detector simulation [1-3]. Local (cell-based) biasing parameters are calculated from an inexpensive deterministic adjoint solution and used to modify the physics of the forward transport simulation. In this research, we have developed a new expression for the LIFT biasing parameter, which depends on a cell-average adjoint current to scalar flux (J*/φ*) ratio. This biasing parameter differs significantly from the original expression, which uses adjoint cell-edge scalar fluxes to construct a finite difference estimate of the flux derivative; the resulting biasing parameters exhibit spikes in magnitude at material discontinuities, causing the original LIFT method to lose efficiency in problems with high spatial heterogeneity. The new J*/φ* expression, while more expensive to obtain, generates biasing parameters that vary smoothly across the spatial domain. The result is an improvement in simulation efficiency. A representative test problem has been developed and analyzed to demonstrate the advantage of the updated biasing parameter expression with regards to solution figure of merit (FOM). For reference, the two variants of the LIFT method are compared to a similar variance reduction method developed by Depinay [4, 5], as well as MC with deterministic adjoint weight windows (WW).
Simulation of ultracold plasmas using the Monte Carlo method
NASA Astrophysics Data System (ADS)
Vrinceanu, D.; Balaraman, G. S.
2010-03-01
After creation of the ultracold plasma, the system is far from equilibrium. The electrons equilibrate among themselves and achieve local thermal equilibrium on a time scale of a few nanoseconds. The ions, on the other hand, expand radially due to the thermal pressure exerted by the electrons, on a much slower time scale (microseconds). Molecular dynamics simulation can be used to study the expansion and equilibration of ultracold plasmas; however, a full microsecond simulation is computationally exorbitant. We propose a novel Monte Carlo method for simulating the long-timescale dynamics of a spherically symmetric ultracold plasma cloud [1]. Results from our method for the expansion of the ion plasma size and the electron density distributions show good agreement with the molecular dynamics simulations. Our results for the collisionless plasma are in good agreement with the Vlasov equation. Our method is very computationally efficient, and takes a few minutes on a desktop to simulate tens of nanoseconds of dynamics of millions of particles. [1] D. Vrinceanu, G. S. Balaraman and L. Collins, "The King model for electrons in a finite-size ultracold plasma," J. Phys. A 41, 425501 (2008)
Constrained path Monte Carlo method for fermion ground states
Zhang, S.; Carlson, J.; Gubernatis, J. E.
1997-03-01
We describe and discuss a recently proposed quantum Monte Carlo algorithm to compute the ground-state properties of various systems of interacting fermions. In this method, the ground state is projected from an initial wave function by a branching random walk in an overcomplete basis of Slater determinants. By constraining the determinants according to a trial wave function |ψT⟩, we remove the exponential decay of signal-to-noise ratio characteristic of the sign problem. The method is variational and is exact if |ψT⟩ is exact. We illustrate the method by describing in detail its implementation for the two-dimensional one-band Hubbard model. We show results for lattice sizes up to 16×16 and for various electron fillings and interaction strengths. With simple single-determinant wave functions as |ψT⟩, the method yields accurate (often to within a few percent) estimates of the ground-state energy as well as correlation functions, such as those for electron pairing. We conclude by discussing possible extensions of the algorithm. © 1997 The American Physical Society
Modeling granular phosphor screens by Monte Carlo methods
Liaparinos, Panagiotis F.; Kandarakis, Ioannis S.; Cavouras, Dionisis A.; Delis, Harry B.; Panayiotakis, George S.
2006-12-15
The intrinsic phosphor properties are of significant importance for the performance of phosphor screens used in medical imaging systems. In previous analytical-theoretical and Monte Carlo studies on granular phosphor materials, values of optical properties, and light interaction cross sections were found by fitting to experimental data. These values were then employed for the assessment of phosphor screen imaging performance. However, it was found that, depending on the experimental technique and fitting methodology, the optical parameters of a specific phosphor material varied within a wide range of values, i.e., variations of light scattering with respect to light absorption coefficients were often observed for the same phosphor material. In this study, x-ray and light transport within granular phosphor materials was studied by developing a computational model using Monte Carlo methods. The model was based on the intrinsic physical characteristics of the phosphor. Input values required to feed the model can be easily obtained from tabulated data. The complex refractive index was introduced and microscopic probabilities for light interactions were produced, using Mie scattering theory. Model validation was carried out by comparing model results on x-ray and light parameters (x-ray absorption, statistical fluctuations in the x-ray to light conversion process, number of emitted light photons, output light spatial distribution) with previously published experimental data on Gd₂O₂S:Tb phosphor material (Kodak Min-R screen). Results showed the dependence of the modulation transfer function (MTF) on phosphor grain size and material packing density. It was predicted that granular Gd₂O₂S:Tb screens of high packing density and small grain size may exhibit considerably better resolution and light emission properties than the conventional Gd₂O₂S:Tb screens, under similar conditions (x-ray incident energy, screen thickness).
Cool walking: a new Markov chain Monte Carlo sampling method.
Brown, Scott; Head-Gordon, Teresa
2003-01-15
Effective relaxation processes for difficult systems like proteins or spin glasses require special simulation techniques that permit barrier crossing to ensure ergodic sampling. Numerous adaptations of the venerable Metropolis Monte Carlo (MMC) algorithm have been proposed to improve its sampling efficiency, including various hybrid Monte Carlo (HMC) schemes and methods designed specifically for overcoming quasi-ergodicity problems, such as Jump Walking (J-Walking), Smart Walking (S-Walking), Smart Darting, and Parallel Tempering. We present an alternative to these approaches that we call Cool Walking, or C-Walking. In C-Walking two Markov chains are propagated in tandem, one at a high (ergodic) temperature and the other at a low temperature. Nonlocal trial moves for the low-temperature walker are generated by first sampling from the high-temperature distribution, then performing a statistical quenching process on the sampled configuration to generate a C-Walking jump move. C-Walking needs only one high-temperature walker, satisfies detailed balance, and offers the important practical advantage that the high- and low-temperature walkers can be run in tandem with minimal degradation of sampling due to the presence of correlations. To make the C-Walking approach more suitable for real problems we decrease the required number of cooling steps by attempting to jump at intermediate temperatures during cooling. We further reduce the number of cooling steps by utilizing "windows" of states when jumping, which improves acceptance ratios and lowers the average number of cooling steps. We present C-Walking results with comparisons to J-Walking, S-Walking, Smart Darting, and Parallel Tempering on a one-dimensional rugged potential energy surface for which the exact normalized probability distribution is known. C-Walking shows superior sampling as judged by two ergodic measures. PMID:12483676
Underwater Optical Wireless Channel Modeling Using Monte-Carlo Method
NASA Astrophysics Data System (ADS)
Saini, P. Sri; Prince, Shanthi
2011-10-01
At present, there is much interest in the functioning of the marine environment. Unmanned or autonomous underwater vehicles (UUVs or AUVs) are used in the exploration of underwater resources, pollution monitoring, disaster prevention, etc. Underwater, where radio waves do not propagate, acoustic communication has been used; however, underwater communication is moving towards optical communication, which offers higher bandwidth than acoustic communication but a comparatively shorter range. Underwater optical wireless communication (OWC) is mainly affected by the absorption and scattering of the optical signal. In coastal waters, both inherent and apparent optical properties (IOPs and AOPs) are influenced by a wide array of physical, biological and chemical processes leading to optical variability. Scattering has two effects: attenuation of the signal and inter-symbol interference (ISI); however, ISI is ignored in the present paper. Therefore, in order to have an efficient underwater OWC link it is necessary to model the channel efficiently. In this paper, the underwater optical channel is modeled using the Monte Carlo method, which provides the most general and flexible technique for numerically solving the equations of radiative transfer. The attenuation coefficient of the light signal is studied as a function of the absorption (a) and scattering (b) coefficients. It has been observed that for pure sea water and low-chlorophyll conditions the blue wavelength is absorbed least, whereas in chlorophyll-rich environments the red wavelength is absorbed less than the blue and green wavelengths.
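The attenuation part of such a channel model can be sketched in a few lines: draw exponential free-path lengths for many photons and count the fraction reaching a given range, which recovers the Beer-Lambert law exp(-c d) with c = a + b. The coefficient and distance below are assumed values for illustration, not measured water properties:

```python
import math, random

random.seed(11)

def transmittance(c, depth, n_photons=200000):
    """Fraction of photons whose first interaction lies beyond `depth`;
    free paths are drawn from the exponential law p(l) = c * exp(-c * l)."""
    survived = sum(random.expovariate(c) > depth for _ in range(n_photons))
    return survived / n_photons

c = 0.15          # assumed attenuation coefficient a + b (per metre)
depth = 10.0      # assumed link distance in metres
t_mc = transmittance(c, depth)
t_beer = math.exp(-c * depth)   # Beer-Lambert prediction
```

A full channel model would additionally redirect scattered photons using a phase function rather than discarding them, which is where the multiple-scattering and ISI effects mentioned in the abstract come from.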
Markov chain Monte Carlo methods: an introductory example
NASA Astrophysics Data System (ADS)
Klauenberg, Katy; Elster, Clemens
2016-02-01
When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method: powerful, flexible and widely applied. Here, a concise introduction is given, illustrated by a simple, typical example from metrology. The Metropolis-Hastings algorithm is the most basic and yet flexible MCMC method. Its underlying concepts are explained and the algorithm is given step by step. The few lines of software code required for its implementation invite interested readers to get started. Diagnostics to evaluate the performance and common algorithmic choices are illustrated to calibrate the Metropolis-Hastings algorithm for efficiency. Routine application of MCMC algorithms is currently hindered by the difficulty of assessing the convergence of MCMC output and thus of assuring the validity of results. An example points to the importance of convergence and initiates discussion about advantages as well as areas of research. Available software tools are mentioned throughout.
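The few lines of code the abstract alludes to can indeed be written compactly for a random-walk proposal; the standard-normal target below is a toy stand-in for a metrological posterior:

```python
import math, random

random.seed(1)

def metropolis_hastings(log_target, x0, n_steps, step=1.0):
    """Random-walk Metropolis: propose x' ~ N(x, step) and accept with
    probability min(1, target(x') / target(x))."""
    x, samples = x0, []
    for _ in range(n_steps):
        xp = x + random.gauss(0.0, step)   # symmetric proposal
        if math.log(random.random()) < log_target(xp) - log_target(x):
            x = xp                         # accept the move
        samples.append(x)                  # otherwise keep the old state
    return samples

# Target: standard normal, supplied as a log-density up to a constant.
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=50000)
mean = sum(chain) / len(chain)
var = sum((s - mean) ** 2 for s in chain) / len(chain)
```

The sample mean and variance approach the target's 0 and 1; the step size plays the role of the tuning ("calibration") choice discussed in the article.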
Monte Carlo N-particle simulation of neutron-based sterilisation of anthrax contamination
Liu, B; Xu, J; Liu, T; Ouyang, X
2012-01-01
Objective To simulate the neutron-based sterilisation of anthrax contamination by the Monte Carlo N-particle (MCNP) 4C code. Methods Neutrons are elementary particles that have no charge. They are 20 times more effective than electrons or γ-rays in killing anthrax spores on surfaces and inside closed containers. Neutrons emitted from a ²⁵²Cf neutron source are in the 100 keV to 2 MeV energy range. A 2.5 MeV D–D neutron generator can create neutrons at up to 10¹³ n s⁻¹ with current technology. All these enable an effective and low-cost method of killing anthrax spores. Results There is no effect on neutron energy deposition on the anthrax sample when using a reflector that is thicker than its saturation thickness. Among all three reflecting materials tested in the MCNP simulation, paraffin is the best because it has the thinnest saturation thickness and is easy to machine. The MCNP radiation dose and fluence simulation calculation also showed that the MCNP-simulated neutron fluence that is needed to kill the anthrax spores agrees with previous analytical estimations very well. Conclusion The MCNP simulation indicates that a 10 min neutron irradiation from a 0.5 g ²⁵²Cf neutron source or a 1 min neutron irradiation from a 2.5 MeV D–D neutron generator may kill all anthrax spores in a sample. This is a promising result because a 2.5 MeV D–D neutron generator output >10¹³ n s⁻¹ should be attainable in the near future. This indicates that we could use a D–D neutron generator to sterilise anthrax contamination within several seconds. PMID:22573293
MARKOV CHAIN MONTE CARLO POSTERIOR SAMPLING WITH THE HAMILTONIAN METHOD
K. HANSON
2001-02-01
The Markov Chain Monte Carlo technique provides a means for drawing random samples from a target probability density function (pdf). MCMC allows one to assess the uncertainties in a Bayesian analysis described by a numerically calculated posterior distribution. This paper describes the Hamiltonian MCMC technique in which a momentum variable is introduced for each parameter of the target pdf. In analogy to a physical system, a Hamiltonian H is defined as a kinetic energy involving the momenta plus a potential energy φ, where φ is minus the logarithm of the target pdf. Hamiltonian dynamics allows one to move along trajectories of constant H, taking large jumps in the parameter space with relatively few evaluations of φ and its gradient. The Hamiltonian algorithm alternates between picking a new momentum vector and following such trajectories. The efficiency of the Hamiltonian method for multidimensional isotropic Gaussian pdfs is shown to remain constant at around 7% for up to several hundred dimensions. The Hamiltonian method handles correlations among the variables much better than the standard Metropolis algorithm. A new test, based on the gradient of φ, is proposed to measure the convergence of the MCMC sequence.
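The momentum-plus-trajectory update described above can be sketched as a minimal Hamiltonian Monte Carlo sampler using leapfrog integration, here for an illustrative one-dimensional standard normal target (the step size, trajectory length and target are assumptions for demonstration, not taken from the paper):

```python
import math
import random

def hmc_sample(grad_phi, phi, x0, n_samples, eps=0.2, n_leap=20, seed=1):
    """Hamiltonian Monte Carlo for a 1-D target with potential phi = -log pdf."""
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)               # pick a fresh momentum
        x_new, p_new = x, p
        # Leapfrog integration along a trajectory of (nearly) constant H
        p_new -= 0.5 * eps * grad_phi(x_new)
        for _ in range(n_leap - 1):
            x_new += eps * p_new
            p_new -= eps * grad_phi(x_new)
        x_new += eps * p_new
        p_new -= 0.5 * eps * grad_phi(x_new)
        # Metropolis correction for the discretization error in H
        h_old = phi(x) + 0.5 * p * p
        h_new = phi(x_new) + 0.5 * p_new * p_new
        if math.log(rng.random()) < h_old - h_new:
            x = x_new
        chain.append(x)
    return chain

# Standard normal target: phi(x) = x^2/2, so grad phi(x) = x
chain = hmc_sample(lambda x: x, lambda x: 0.5 * x * x, 0.0, 5000)
```

The alternation between momentum refreshment and trajectory following mirrors the algorithm described in the abstract; each trajectory costs n_leap gradient evaluations but can move far across parameter space.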
Evaluation of Hamaker coefficients using Diffusion Monte Carlo method
NASA Astrophysics Data System (ADS)
Maezono, Ryo; Hongo, Kenta
We evaluated the Hamaker constant of cyclohexasilane, which is used as an ink of 'liquid silicon' in printed electronics, to investigate its wettability. Taking three representative geometries of the dimer coalescence (parallel, lined, and T-shaped), we evaluated the binding curves using the diffusion Monte Carlo method. The parallel geometry gave the most long-ranged asymptotic behavior, ~1/r^6. The evaluated binding lengths are fairly consistent with the experimental density of the molecule. Fitting the asymptotic curve gave an estimate of the Hamaker constant of around 100 zJ. We also performed a CCSD(T) evaluation and obtained a similar result. As a check, we applied the same scheme to benzene and compared the estimate with those from other established methods, Lifshitz theory and SAPT (symmetry-adapted perturbation theory). The result from the fitting scheme turned out to be twice as large as those from Lifshitz theory and SAPT, which coincide with each other. It is hence implied that the present evaluation for cyclohexasilane may be an overestimate.
SCALE Monte Carlo Eigenvalue Methods and New Advancements
Goluoglu, Sedat; Leppanen, Jaakko; Petrie Jr, Lester M; Dunn, Michael E
2010-01-01
The SCALE code system is developed and maintained by Oak Ridge National Laboratory to perform criticality safety, reactor analysis, radiation shielding, and spent fuel characterization analyses for nuclear facilities and transportation/storage package designs. SCALE is a modular code system that includes several codes which use either Monte Carlo or discrete ordinates solution methodologies for solving the relevant neutral-particle transport equations. This paper describes some of the key capabilities of the Monte Carlo criticality safety codes within the SCALE code system.
Spike Inference from Calcium Imaging Using Sequential Monte Carlo Methods
Vogelstein, Joshua T.; Watson, Brendon O.; Packer, Adam M.; Yuste, Rafael; Jedynak, Bruno; Paninski, Liam
2009-01-01
As recent advances in calcium sensing technologies facilitate simultaneously imaging action potentials in neuronal populations, complementary analytical tools must also be developed to maximize the utility of this experimental paradigm. Although the observations here are fluorescence movies, the signals of interest—spike trains and/or time-varying intracellular calcium concentrations—are hidden. Inferring these hidden signals is often problematic due to noise, nonlinearities, slow imaging rate, and unknown biophysical parameters. We overcome these difficulties by developing sequential Monte Carlo methods (particle filters) based on biophysical models of spiking, calcium dynamics, and fluorescence. We show that even in simple cases, the particle filters outperform the optimal linear (i.e., Wiener) filter, both by obtaining better estimates and by providing error bars. We then relax a number of our model assumptions to incorporate nonlinear saturation of the fluorescence signal, as well as external stimulus and spike history dependence (e.g., refractoriness) of the spike trains. Using both simulations and in vitro fluorescence observations, we demonstrate temporal superresolution by inferring when within a frame each spike occurs. Furthermore, the model parameters may be estimated using expectation maximization with only a very limited amount of data (e.g., ∼5–10 s or 5–40 spikes), without the requirement of any simultaneous electrophysiology or imaging experiments. PMID:19619479
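The sequential Monte Carlo machinery behind such methods can be illustrated with a generic bootstrap particle filter on a toy linear-Gaussian state-space model (the dynamics, noise levels and particle count below are illustrative assumptions, not the authors' biophysical model of spiking and fluorescence):

```python
import math
import random

def bootstrap_filter(obs, n_particles=500, a=0.9, q=0.5, r=0.5, seed=2):
    """Bootstrap particle filter for x_t = a*x_{t-1} + N(0,q), y_t = x_t + N(0,r)."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in obs:
        # Propagate each particle through the state dynamics
        particles = [a * x + rng.gauss(0.0, q) for x in particles]
        # Weight particles by the observation likelihood
        w = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
        total = sum(w)
        w = [wi / total for wi in w]
        means.append(sum(wi * x for wi, x in zip(w, particles)))
        # Multinomial resampling to combat weight degeneracy
        particles = rng.choices(particles, weights=w, k=n_particles)
    return means

# Illustrative synthetic data: a hidden trajectory and noisy observations
rng = random.Random(0)
truth, x = [], 0.0
for _ in range(50):
    x = 0.9 * x + rng.gauss(0.0, 0.5)
    truth.append(x)
obs = [xt + rng.gauss(0.0, 0.5) for xt in truth]
est = bootstrap_filter(obs)
```

The weighted particle cloud at each step also yields error bars (e.g., weighted quantiles), which is one of the advantages over the Wiener filter that the abstract highlights.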
Medical Imaging Image Quality Assessment with Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Kalyvas, N. I.; Martini, Niki; Koukou, Vaia; Valais, I. G.; Kandarakis, I. S.
2015-09-01
The aim of the present study was to assess the image quality of PET scanners using a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed with the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction, using cluster computing. The PET scanner simulated in this study was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated as a layer of silica gel on aluminum (Al) foil substrates, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE) OSMAPOSL algorithm. The OSMAPOSL reconstruction was assessed using various numbers of subsets (3 to 21) and iterations (1 to 20), as well as various beta (hyper)parameter values. MTF values were found to increase up to the 12th iteration, whereas they remain almost constant thereafter. The MTF improves when lower beta values are used. The simulated PET evaluation method based on the TLC plane source can also be useful in research for the further development of PET and SPECT scanners through GATE simulations.
Treatment planning aspects and Monte Carlo methods in proton therapy
NASA Astrophysics Data System (ADS)
Fix, Michael K.; Manser, Peter
2015-05-01
Over the last years, interest in proton radiotherapy has been rapidly increasing. Protons provide superior physical properties compared with conventional radiotherapy using photons. These properties result in depth dose curves with a large dose peak at the end of the proton track, and the finite proton range allows sparing of the distally located healthy tissue. These properties offer increased flexibility in proton radiotherapy, but also increase the demand for accurate dose calculations. To carry out accurate dose calculations, an accurate and detailed characterization of the physical proton beam exiting the treatment head is first necessary for both currently available delivery techniques: scattered and scanned proton beams. Since Monte Carlo (MC) methods follow the particle track, simulating the interactions from first principles, this technique is perfectly suited to accurately model the treatment head. Nevertheless, careful validation of these MC models is necessary. While pencil beam algorithms provide the advantage of fast dose computations, they are limited in accuracy. In contrast, MC dose calculation algorithms overcome these limitations, and due to recent improvements in efficiency, these algorithms are expected to improve the accuracy of the calculated dose distributions and to be introduced into clinical routine in the near future.
Generalized directed loop method for quantum Monte Carlo simulations.
Alet, Fabien; Wessel, Stefan; Troyer, Matthias
2005-03-01
Efficient quantum Monte Carlo update schemes called directed loops have recently been proposed, which improve the efficiency of simulations of quantum lattice models. We propose to generalize the detailed balance equations at the local level during the loop construction by accounting for the matrix elements of the operators associated with open world-line segments. Using linear programming techniques to solve the generalized equations, we look for optimal construction schemes for directed loops. This also allows for an extension of the directed loop scheme to general lattice models, such as high-spin or bosonic models. The resulting algorithms are bounce free in larger regions of parameter space than the original directed loop algorithm. The generalized directed loop method is applied to the magnetization process of spin chains in order to compare its efficiency to that of previous directed loop schemes. In contrast to general expectations, we find that minimizing bounces alone does not always lead to more efficient algorithms in terms of autocorrelations of physical observables, because of the nonuniqueness of the bounce-free solutions. We therefore propose different general strategies to further minimize autocorrelations, which can be used as supplementary requirements in any directed loop scheme. We show by calculating autocorrelation times for different observables that such strategies indeed lead to improved efficiency; however, we find that the optimal strategy depends not only on the model parameters but also on the observable of interest. PMID:15903632
Quantum Monte Carlo methods and lithium cluster properties
Owen, R.K.
1990-12-01
Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self-consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspects of electron correlation are introduced into the QMC importance-sampling electron-electron correlation functions through density-dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of the D-QMC time-step bias is made, and the bias is found to be at least linear with respect to the time step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively, in good agreement with the experimental results shown in brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity to the anisotropic harmonic oscillator model shape for the given number of valence electrons.
Latent uncertainties of the precalculated track Monte Carlo method
Renaud, Marc-André; Seuntjens, Jan; Roberge, David
2015-01-15
Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes under the same conditions. A latent uncertainty metric was defined, and the analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a "ground truth" benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of D_max. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of
Backward and Forward Monte Carlo Method in Polarized Radiative Transfer
NASA Astrophysics Data System (ADS)
Yong, Huang; Guo-Dong, Shi; Ke-Yong, Zhu
2016-03-01
In general, the Stokes vector cannot be calculated in reverse in vector radiative transfer. This paper presents a novel backward and forward Monte Carlo simulation strategy to study vector radiative transfer in a participating medium. A backward Monte Carlo process is used to calculate the ray trajectory and the endpoint of the ray, and the Stokes vector is then obtained by a forward Monte Carlo process. A one-dimensional graded-index semi-transparent medium was taken as the physical model, and thermal emission with polarization taken into account was studied in the medium. The solution process for non-scattering, isotropic-scattering, and anisotropic-scattering media is discussed. The influence of the optical thickness and albedo on the Stokes vector is studied. The results show that the U and V components of the apparent Stokes vector are very small, but the Q component of the apparent Stokes vector is relatively large and cannot be ignored.
Approximation of probability density functions by the Multilevel Monte Carlo Maximum Entropy method
NASA Astrophysics Data System (ADS)
Bierig, Claudio; Chernov, Alexey
2016-06-01
We develop a complete convergence theory for the Maximum Entropy method based on moment matching for a sequence of approximate statistical moments estimated by the Multilevel Monte Carlo method. Under appropriate regularity assumptions on the target probability density function, the proposed method is superior to the Maximum Entropy method with moments estimated by the Monte Carlo method. New theoretical results are illustrated in numerical examples.
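The multilevel idea, estimating a telescoping sum of corrections between successive approximation levels with level-dependent sample sizes, can be sketched on a toy expectation. The payoff, level hierarchy and sample allocation below are illustrative assumptions; they are not the moment-matching scheme of the paper:

```python
import math
import random

def midpoint_payoff(z, level):
    """Level-l approximation: midpoint rule with 2**level points for the
    illustrative payoff P(z) = integral_0^1 exp(z*x) dx."""
    n = 2 ** level
    return sum(math.exp(z * (i + 0.5) / n) for i in range(n)) / n

def mlmc_mean(max_level=4, n0=20000, seed=3):
    """Multilevel Monte Carlo estimate of E[P(Z)] for Z ~ N(0,1).

    Level l contributes the correction E[P_l - P_{l-1}], estimated with
    roughly n0 / 4**l samples; the same draw of Z couples the two levels,
    which keeps the variance of the correction small."""
    rng = random.Random(seed)
    total = 0.0
    for level in range(max_level + 1):
        n = max(n0 // 4 ** level, 100)
        acc = 0.0
        for _ in range(n):
            z = rng.gauss(0.0, 1.0)
            fine = midpoint_payoff(z, level)
            coarse = midpoint_payoff(z, level - 1) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n
    return total

est = mlmc_mean()
# Exact value is integral_0^1 exp(x^2/2) dx ≈ 1.1950
```

Because most samples are spent on the cheap coarse level while the expensive fine levels need only a few, the cost for a given accuracy is much lower than single-level Monte Carlo at the finest level.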
Fernandes, Ana C.; Goncalves, Isabel C.; Barradas, Nuno P.; Ramalho, Antonio J.
2003-09-15
The Monte Carlo code MCNP-4C was used to calculate the effective multiplication coefficient of a core configuration of the Portuguese Research Reactor (RPI) and the neutron fluxes in the core and in the reflector region. A comparison of the results obtained with MCNP and with the deterministic codes WIMSD-5 and CITATION was made. Consistent deviations of 2% for the effective multiplication constant and 8 to 28% for the neutron flux, depending on the energy range, were observed. Thermal, epithermal, and fast neutron flux measurements were performed using activation detectors. The calculations agree with experimental values to within 15%; therefore, the Monte Carlo results can be used to predict the neutron field in other locations and irradiation facilities of the RPI.
NASA Astrophysics Data System (ADS)
Montaño, L. M.; Sanchez, D.; Avila, C.; Sanabria, J.; Baldazzi, G.; Bollini, D. D.; Cabal, A.; Ceballos, C.; Dabrowski, W.; Díaz García, A.; Gambaccini, M.; Giubellino, P.; Gombia, M.; Grybos, P.; Idzik, M.; Marzari-Chiesa, A.; Prino, F.; Ramello, L.; Sitta, M.; Swientek, K.; Taibi, A.; Tomassi, E.; Tuffanelli, A.; Wiacek, P.
2004-09-01
Preliminary results of a dual energy angiography simulation using the Monte Carlo package GEANT 3.2113 are presented and compared to Monte Carlo MCNP-4C results reported previously. The simulation is based on an experimental setup consisting of a Plexiglas-aluminium step wedge phantom with 4 cylindrical cavities filled with iodated contrast medium. The silicon 384-microstrip detector was set in edge-on configuration (incoming X-rays parallel to the longitudinal axis of the strips), and the properties of the simulated detector closely resemble those of the real detector. Monochromatic photon beams of 31.5 keV and 35.5 keV are used to take advantage of the discontinuous variation of the iodine photon absorption at the energy of the K-shell, the key to dual energy subtraction imaging.
MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD
A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...
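Generically, a Monte Carlo analysis of this kind amounts to repeatedly sampling the input-parameter database and collecting the model output. A minimal sketch follows, with a hypothetical first-order attenuation model and made-up parameter distributions (neither is from the study above):

```python
import math
import random

def propagate(model, param_dists, n=5000, seed=7):
    """Monte Carlo uncertainty propagation: draw each input parameter from
    its distribution and collect the resulting model outputs."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        params = {name: draw(rng) for name, draw in param_dists.items()}
        out.append(model(**params))
    return out

# Hypothetical first-order model: log10 virus reduction over travel depth L
def log_removal(mu, L):
    return mu * L / math.log(10.0)

dists = {
    "mu": lambda r: r.lognormvariate(math.log(0.5), 0.4),  # attenuation rate [1/m]
    "L": lambda r: r.uniform(1.0, 3.0),                    # travel depth [m]
}
samples = propagate(log_removal, dists)
```

The resulting sample of outputs can then be summarized as a histogram or a kernel density, as done in the study for the predicted attenuation.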
Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Wang, Kan; Lian, Lichao; Yang, Xiaoquan; Meglinski, Igor; Luo, Qingming
2014-12-29
The path-history-based fluorescence Monte Carlo method used for fluorescence tomography image reconstruction has attracted increasing attention. In this paper, we first validate the standard fluorescence Monte Carlo (sfMC) method by experimenting with a cylindrical phantom. Then, we describe a path-history-based decoupled fluorescence Monte Carlo (dfMC) method, analyze different perturbation fluorescence Monte Carlo (pfMC) methods, and compare the calculation accuracy and computational efficiency of the dfMC and pfMC methods using the sfMC method as a reference. The results show that the dfMC method is more accurate and efficient than the pfMC method in heterogeneous media. PMID:25607163
Perfetti, Christopher M; Rearden, Bradley T
2014-01-01
This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.
Franke, B. C.; Prinja, A. K.
2013-07-01
The stochastic Galerkin method (SGM) is an intrusive technique for propagating data uncertainty in physical models. The method reduces the random model to a system of coupled deterministic equations for the moments of stochastic spectral expansions of result quantities. We investigate solving these equations using the Monte Carlo technique. We compare the efficiency with brute-force Monte Carlo evaluation of uncertainty, the non-intrusive stochastic collocation method (SCM), and an intrusive Monte Carlo implementation of the stochastic collocation method. We also describe the stability limitations of our SGM implementation. (authors)
NASA Astrophysics Data System (ADS)
Gallagher, Kerry; Sambridge, Malcolm; Drijkoningen, Guy
In providing a method for solving non-linear optimization problems, Monte Carlo techniques avoid the need for linearization but, in practice, are often prohibitively expensive because of the large number of models that must be considered. A new class of methods known as genetic algorithms has recently been devised in the field of artificial intelligence. We outline the basic concept of genetic algorithms and discuss three examples. We show that, in locating an optimal model, the new technique is far superior in performance to Monte Carlo techniques in all cases considered. However, Monte Carlo integration is still regarded as an effective method for the subsequent model appraisal.
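A minimal real-coded genetic algorithm of the kind outlined above might look like this. The selection scheme, crossover operator and test misfit are illustrative assumptions, not the specific algorithms compared in the paper:

```python
import random

def genetic_minimize(misfit, bounds, pop_size=60, n_gen=80, p_mut=0.2, seed=6):
    """Minimal real-coded genetic algorithm: truncation selection,
    blend crossover, and occasional Gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(n_gen):
        scored = sorted(pop, key=misfit)
        elite = scored[: pop_size // 2]           # keep the fitter half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = w * a + (1 - w) * b           # blend crossover
            if rng.random() < p_mut:
                child += rng.gauss(0.0, 0.1 * (hi - lo))
            children.append(min(max(child, lo), hi))
        pop = elite + children
    return min(pop, key=misfit)

# Illustrative one-parameter misfit with a minimum near x = 2
best = genetic_minimize(lambda x: (x - 2.0) ** 2 + 0.3 * abs(x) ** 0.5,
                        (-10.0, 10.0))
```

Unlike uniform Monte Carlo search, each generation concentrates sampling near the better models found so far, which is the source of the performance advantage the abstract reports.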
Efficient, Automated Monte Carlo Methods for Radiation Transport
Kong, Rong; Ambrose, Martin; Spanier, Jerome
2012-01-01
Monte Carlo simulations provide an indispensible model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k + 1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed. PMID:23226872
An analysis method for evaluating gradient-index fibers based on Monte Carlo method
NASA Astrophysics Data System (ADS)
Yoshida, S.; Horiuchi, S.; Ushiyama, Z.; Yamamoto, M.
2011-05-01
We propose a numerical analysis method for evaluating gradient-index (GRIN) optical fibers based on the Monte Carlo method. GRIN optical fibers are widely used in optical information processing and communication applications, such as image scanners, fax machines, optical sensors, and so on. An important factor that determines the performance of a GRIN optical fiber is the modulation transfer function (MTF). The MTF of a fiber is affected by manufacturing-process conditions such as temperature. Experimentally, the MTF is measured using a square wave chart and is then calculated from the distribution of output intensity on the chart. In contrast, the conventional computational method evaluates the MTF from a spot diagram produced by an incident point light source, but its results differ greatly from those of the experiment. In this paper, we explain the manufacturing process that affects the performance of GRIN optical fibers and present a new evaluation method, modeled on the experimental system and based on the Monte Carlo method. We verified that the proposed method matches actual MTF measurements of a GRIN optical fiber more closely than the conventional method does.
Hariri, Sanaz; Shahriari, Majid
2010-01-01
Due to the intensive use of multileaf collimators (MLCs) in clinics, finding an optimum design for the leaves becomes essential. There are several studies that compare MLC systems, but none with a focus on deriving an optimum design using accurate methods like Monte Carlo. In this study, we describe some characteristics of MLC systems, including the leaf tip transmission, beam hardening, leakage radiation and penumbra width, for the Varian and Elekta 80-leaf MLCs using the MCNP4C code. The complex geometry of the leaves in these two common MLC systems was simulated. It was assumed that all of the MLC systems were mounted on a Varian accelerator, with the same thickness as Varian's and the same distance from the source. Considering the results obtained for the Varian and Elekta leaf designs, an optimum design combining the advantages of three common MLC systems was suggested, and the simulation results for this proposed design were compared with the Varian and the Elekta. The leakage from the suggested design is 29.7% and 31.5% of that of the Varian and Elekta MLCs, respectively. In addition, the other calculated parameters of the proposed MLC leaf design were better than those of the two commercial ones. Although it shows a wider penumbra in comparison with the Varian and Elekta MLCs, taking into account the curved motion path of the leaves, a double-focusing design will solve the problem. The suggested leaf design combines advantages from three common vendors (Varian, Elekta and Siemens) and can show better results than each one. Using the results of this theoretical study may bring about superior practical outcomes. PMID:20717079
Hybrid Monte Carlo-Deterministic Methods for Nuclear Reactor-Related Criticality Calculations
Edward W. Larson
2004-02-17
The overall goal of this project is to develop, implement, and test new Hybrid Monte Carlo-deterministic (or simply Hybrid) methods for the more efficient and more accurate calculation of nuclear engineering criticality problems. These new methods will make use of two (philosophically and practically) very different techniques - the Monte Carlo technique, and the deterministic technique - which have been developed completely independently during the past 50 years. The concept of this proposal is to merge these two approaches and develop fundamentally new computational techniques that enhance the strengths of the individual Monte Carlo and deterministic approaches, while minimizing their weaknesses.
NASA Technical Reports Server (NTRS)
Firstenberg, H.
1971-01-01
The statistics of the Monte Carlo method are considered relative to the interpretation of the NUGAM2 and NUGAM3 computer code results. A numerical experiment using the NUGAM2 code is presented and the results are statistically interpreted.
Methods for coupling radiation, ion, and electron energies in grey Implicit Monte Carlo
Evans, T.M. . E-mail: evanstm@ornl.gov; Densmore, J.D. . E-mail: jdd@lanl.gov
2007-08-10
We present three methods for extending the Implicit Monte Carlo (IMC) method to treat the time-evolution of coupled radiation, electron, and ion energies. The first method splits the ion and electron coupling and conduction from the standard IMC radiation-transport process. The second method recasts the IMC equations such that part of the coupling is treated during the Monte Carlo calculation. The third method treats all of the coupling and conduction in the Monte Carlo simulation. We apply modified equation analysis (MEA) to simplified forms of each method that neglects the errors in the conduction terms. Through MEA we show that the third method is theoretically the most accurate. We demonstrate the effectiveness of each method on a series of 0-dimensional, nonlinear benchmark problems where the accuracy of the third method is shown to be up to ten times greater than the other coupling methods for selected calculations.
Growing lattice animals and Monte-Carlo methods
NASA Astrophysics Data System (ADS)
Reich, G. R.; Leath, P. L.
1980-01-01
We consider the search problems which arise in Monte-Carlo studies involving growing lattice animals. A new periodic hashing scheme (based on a periodic cell) especially suited to these problems is presented, which takes advantage both of the connected geometric structure of the animals and of the traversal-oriented nature of the search. The scheme is motivated by a physical analogy and tested numerically on compact and on ramified animals. In both cases the performance is found to be more efficient than random hashing, to a degree depending on the compactness of the animals.
Analysis of Monte Carlo methods applied to blackbody and lower emissivity cavities.
Pahl, Robert J; Shannon, Mark A
2002-02-01
Monte Carlo methods are often applied to the calculation of the apparent emissivities of blackbody cavities. However, for cavities with complex as well as some commonly encountered geometries, the emission Monte Carlo method experiences problems of convergence. The emission and absorption Monte Carlo methods are compared on the basis of ease of implementation and convergence speed when applied to blackbody sources. A new method to determine solution convergence compatible with both methods is developed, and the convergence speeds of the two methods are compared through the application of both methods to a right-circular cylinder cavity. It is shown that the absorption method converges faster and is easier to implement than the emission method when applied to most blackbody and lower emissivity cavities. PMID:11993915
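The absorption method can be illustrated on the one cavity geometry whose bounce statistics are trivial, a diffuse sphere: from any interior point of a sphere, diffusely reflected radiation is distributed uniformly over the surface, so every bounce escapes through the opening with a fixed probability. The emissivity and opening fraction below are illustrative; the right-circular cylinder studied in the paper requires full ray tracing:

```python
import random

def apparent_emissivity_sphere(eps, f_open, n_rays=200000, seed=8):
    """Absorption Monte Carlo for a diffuse spherical cavity.

    A ray entering through the opening is absorbed with probability eps at
    each wall interaction, and otherwise escapes through the opening with
    probability f_open (the opening's area fraction). By reciprocity, the
    apparent emissivity equals the fraction of entering rays absorbed."""
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(n_rays):
        while True:
            if rng.random() < eps:        # absorbed at the wall
                absorbed += 1
                break
            if rng.random() < f_open:     # escaped through the opening
                break
    return absorbed / n_rays

ea = apparent_emissivity_sphere(0.5, 0.1)
# Analytic result for this geometry: eps / (1 - (1-eps)*(1-f_open)) ≈ 0.909
```

Counting absorption events in this way converges for any wall emissivity, which reflects the convergence advantage of the absorption method over the emission method reported in the abstract.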
Application of Monte Carlo Methods in Molecular Targeted Radionuclide Therapy
Hartmann Siantar, C; Descalle, M-A; DeNardo, G L; Nigg, D W
2002-02-19
Targeted radionuclide therapy promises to expand the role of radiation beyond the treatment of localized tumors. This novel form of therapy targets metastatic cancers by combining radioactive isotopes with tumor-seeking molecules such as monoclonal antibodies and custom-designed synthetic agents. Ultimately, like conventional radiotherapy, the effectiveness of targeted radionuclide therapy is limited by the maximum dose that can be given to a critical, normal tissue, such as bone marrow, kidneys, and lungs. Because radionuclide therapy relies on biological delivery of radiation, its optimization and characterization are necessarily different than for conventional radiation therapy. We have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA treatment planning system. This system calculates patient-specific radiation dose estimates using a set of computed tomography scans to describe the 3D patient anatomy, combined with 2D (planar image) and 3D (SPECT, or single photon emission computed tomography) to describe the time-dependent radiation source. The accuracy of such a dose calculation is limited primarily by the accuracy of the initial radiation source distribution, overlaid on the patient's anatomy. This presentation provides an overview of MINERVA functionality for molecular targeted radiation therapy, and describes early validation and implementation results of Monte Carlo simulations.
Dynamical Monte Carlo methods for plasma-surface reactions
NASA Astrophysics Data System (ADS)
Guerra, Vasco; Marinov, Daniil
2016-08-01
Different dynamical Monte Carlo algorithms to investigate molecule formation on surfaces are developed, evaluated and compared with the deterministic approach based on reaction-rate equations. These include a null-event algorithm, the n-fold way/BKL algorithm and a ‘hybrid’ variant of the latter. NO2 formation by NO oxidation on Pyrex and O recombination on silica with the formation of O2 are taken as case studies. The influence of the grid size on the CPU calculation time and the accuracy of the results is analysed. The role of Langmuir–Hinshelwood recombination involving two physisorbed atoms and the effect of back diffusion and its inclusion in a deterministic formulation are investigated and discussed. It is shown that dynamical Monte Carlo schemes are flexible, simple to implement, easily describe elementary processes that are not straightforward to include in deterministic simulations, can run very efficiently if appropriately chosen and give highly reliable results. Moreover, the present approach provides a relatively simple procedure for describing fully coupled surface and gas-phase chemistries.
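The n-fold way/BKL selection step referenced above can be sketched for the simplest possible kinetics, first-order decay A → B, where there is only one event class; the essential ingredients are the total rate R of all enabled events and the exponential waiting time dt = −ln(u)/R. This is a generic sketch, not the authors' surface-chemistry implementation, and all parameter values are illustrative:

```python
import math
import random

def kmc_half_life(n0=10_000, rate=0.5, seed=3):
    """n-fold way/BKL kinetic Monte Carlo for first-order decay A -> B.
    With a single event class the algorithm reduces to: compute the total
    rate R = rate * n_A, advance time by an exponential waiting time
    dt = -ln(u) / R, and fire one event.  Returns the simulated half-life,
    whose expectation is ln(2) / rate."""
    rng = random.Random(seed)
    n_a, t = n0, 0.0
    while n_a > n0 // 2:
        total_rate = rate * n_a                          # sum of all enabled event rates
        t += -math.log(1.0 - rng.random()) / total_rate  # BKL time advance
        n_a -= 1                                         # the selected event fires
    return t
```

With many event classes the only extra work is picking a class with probability proportional to its rate, which is what makes the scheme rejection-free.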
APR1400 LBLOCA uncertainty quantification by Monte Carlo method and comparison with Wilks' formula
Hwang, M.; Bae, S.; Chung, B. D.
2012-07-01
An analysis of uncertainty quantification for the PWR LBLOCA by Monte Carlo calculation has been performed and compared with the tolerance level determined by Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LBLOCA accident were determined from the PIRT results of the BEMUSE project. The Monte Carlo method shows that the 95th percentile PCT value can be obtained reliably with a 95% confidence level using Wilks' formula. However, the extra margin of the Wilks' formula estimate over the true 95th percentile PCT from the Monte Carlo method was rather large: even using the 3rd-order formula, the value calculated with Wilks' formula is nearly 100 K above the true value. It is shown that, with ever increasing computational capability, the Monte Carlo method has become accessible for nuclear power plant safety analysis within a realistic time frame. (authors)
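The sample sizes behind Wilks' formula follow from order statistics: for a one-sided 95/95 statement at order k, one needs the smallest N such that at least k of N independent runs exceed the 95th percentile with 95% confidence. A minimal sketch of that computation (independent of any thermal-hydraulics code):

```python
from math import comb

def wilks_sample_size(order, gamma=0.95, beta=0.95):
    """Smallest number of runs N such that the `order`-th largest output
    bounds the gamma-quantile with confidence beta (one-sided Wilks)."""
    n = order
    while True:
        # P(at least `order` of n runs exceed the gamma-quantile)
        conf = 1.0 - sum(
            comb(n, j) * (1.0 - gamma) ** j * gamma ** (n - j)
            for j in range(order)
        )
        if conf >= beta:
            return n
        n += 1
```

Calling `wilks_sample_size(k)` for k = 1, 2, 3 reproduces the familiar 59, 93 and 124 runs; higher orders trade more runs for a tighter (less conservative) tolerance bound, which is the margin the abstract quantifies against a full Monte Carlo reference.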
The use of the inverse Monte Carlo method in nuclear engineering
Dunn, W.L.
1988-01-01
The inverse Monte Carlo (IMC) method was introduced in 1981 in an attempt to apply Monte Carlo to the solution of inverse problems. It was argued that if direct Monte Carlo could be used to estimate expected values, which in the continuous case assume the form of definite integrals, then perhaps a variant could be used to solve inverse problems of the type that are posed as integral equations. The IMC method actually converts the inverse problem, through a noniterative simulation technique, into a system of algebraic equations that can be solved by standard analytical or numerical techniques. The principal merits of IMC are that, like direct Monte Carlo, the method can be applied to complex and multivariable problems, and variance reduction procedures can be applied.
NASA Astrophysics Data System (ADS)
Lodwick, Camille J.
This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescent (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV 109Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, and a Modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that Modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in calcium content (by mass) of the anthropometric leg phantom's cortical bone was shown to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001). Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate lead content of a human leg up
Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement
Siswantoro, Joko; Idrus, Bahari
2014-01-01
Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of irregularly shaped food products based on 3D reconstruction. However, 3D reconstruction comes at a high computational cost, and some volume measurement methods based on it have low accuracy. Another approach measures the volume of an object using the Monte Carlo method, which performs volume measurement with random points: it only requires knowing whether each random point falls inside or outside the object and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of each food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provides high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method. PMID:24892069
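The point-counting idea is easy to demonstrate without any imaging: sample uniform random points in a bounding box, test whether each falls inside the object, and scale the hit fraction by the box volume. A sketch using an analytic inside test (a sphere) in place of the paper's binary-image test; the heuristic adjustment itself is not reproduced here:

```python
import random

def mc_volume(inside, bbox, n=200_000, seed=42):
    """Monte Carlo volume estimate from an inside/outside test only:
    (fraction of random points inside) * (bounding-box volume)."""
    rng = random.Random(seed)
    (x0, x1), (y0, y1), (z0, z1) = bbox
    hits = sum(
        inside(rng.uniform(x0, x1), rng.uniform(y0, y1), rng.uniform(z0, z1))
        for _ in range(n)
    )
    return hits / n * (x1 - x0) * (y1 - y0) * (z1 - z0)

# Sanity check against an analytic shape: the unit sphere, volume 4*pi/3
vol = mc_volume(lambda x, y, z: x * x + y * y + z * z <= 1.0,
                ((-1, 1), (-1, 1), (-1, 1)))
```

In the paper's setting, the inside/outside decision is made by back-projecting each candidate point into the five binary camera images, which is exactly why no explicit 3D reconstruction is needed.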
Bold Diagrammatic Monte Carlo Method Applied to Fermionized Frustrated Spins
NASA Astrophysics Data System (ADS)
Kulagin, S. A.; Prokof'ev, N.; Starykh, O. A.; Svistunov, B.; Varney, C. N.
2013-02-01
We demonstrate, by considering the triangular lattice spin-1/2 Heisenberg model, that Monte Carlo sampling of skeleton Feynman diagrams within the fermionization framework offers a universal first-principles tool for strongly correlated lattice quantum systems. We observe the fermionic sign blessing—cancellation of higher order diagrams leading to a finite convergence radius of the series. We calculate the magnetic susceptibility of the triangular-lattice quantum antiferromagnet in the correlated paramagnet regime and reveal a surprisingly accurate microscopic correspondence with its classical counterpart at all accessible temperatures. The extrapolation of the observed relation to zero temperature suggests the absence of the magnetic order in the ground state. We critically examine the implications of this unusual scenario.
Monte Carlo Collision method for low temperature plasma simulation
NASA Astrophysics Data System (ADS)
Taccogna, Francesco
2015-01-01
This work presents the basic foundation of the particle-based representation used in low temperature plasma description. In particular, the Monte Carlo Collision (MCC) recipe is described for the case of electron-atom and ion-atom collisions. The model has been applied to the problem of plasma plume expansion from an electric Hall-effect-type thruster. The effect of low energy secondary electrons from electron-atom ionization on the electron energy distribution function (EEDF) has been identified in the first 3 mm from the exit plane where, due to azimuthal heating, ionization continues to play an important role. In addition, low energy charge-exchange ions from ion-atom electron transfer collisions are evident in the ion energy distribution functions (IEDF) 1 m from the exit plane.
Advanced computational methods for nodal diffusion, Monte Carlo, and S[sub N] problems
Martin, W.R.
1993-01-01
This document describes progress on five efforts for improving effectiveness of computational methods for particle diffusion and transport problems in nuclear engineering: (1) Multigrid methods for obtaining rapidly converging solutions of nodal diffusion problems. An alternative line relaxation scheme is being implemented into a nodal diffusion code. Simplified P2 has been implemented into this code. (2) Local Exponential Transform method for variance reduction in Monte Carlo neutron transport calculations. This work yielded predictions for both 1-D and 2-D x-y geometry better than conventional Monte Carlo with splitting and Russian Roulette. (3) Asymptotic Diffusion Synthetic Acceleration methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems. New transport differencing schemes have been obtained that allow solution by the conjugate gradient method, and the convergence of this approach is rapid. (4) Quasidiffusion (QD) methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems on irregular spatial grids. A symmetrized QD method has been developed in a form that results in a system of two self-adjoint equations that are readily discretized and efficiently solved. (5) Response history method for speeding up the Monte Carlo calculation of electron transport problems. This method was implemented into the MCNP Monte Carlo code. In addition, we have developed and implemented a parallel time-dependent Monte Carlo code on two massively parallel processors.
Densmore, Jeffery D.; Thompson, Kelly G.; Urbatsch, Todd J.
2012-08-15
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations in optically thick media. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many smaller Monte Carlo steps, thus improving the efficiency of the simulation. In this paper, we present an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold, as optical thickness is typically a decreasing function of frequency. Above this threshold we employ standard Monte Carlo, which results in a hybrid transport-diffusion scheme. With a set of frequency-dependent test problems, we confirm the accuracy and increased efficiency of our new DDMC method.
A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT
Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan; Burrows, Adam; Dolence, Joshua C.; Loeffler, Frank; Schnetter, Erik
2012-08-20
Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.
Application of the Theory of Functional Monte Carlo Algorithms to Optimization of the DSMC Method
NASA Astrophysics Data System (ADS)
Plotnikov, M. Yu.; Shkarupa, E. V.
2008-12-01
Some approaches to error analysis and optimization of the Direct Simulation Monte Carlo method are presented. The main idea of this work is the construction of relations between the sample size and the number of cells that guarantee attainment of a given error level, on the basis of the theory of functional Monte Carlo algorithms. The optimal (in the sense of the obtained upper error bound) values of the sample size and the number of cells are constructed.
Time-step limits for a Monte Carlo Compton-scattering method
Densmore, Jeffery D; Warsa, James S; Lowrie, Robert B
2009-01-01
We perform a stability analysis of a Monte Carlo method for simulating the Compton scattering of photons by free electrons in high-energy-density applications and develop time-step limits that avoid unstable and oscillatory solutions. Implementing this Monte Carlo technique in multiphysics problems typically requires evaluating the material temperature at its beginning-of-time-step value, which can lead to this undesirable behavior. With a set of numerical examples, we demonstrate the efficacy of our time-step limits.
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I
2014-06-15
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10{sup 7} x rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the
NASA Astrophysics Data System (ADS)
Jacqmin, Dustin J.
Monte Carlo modeling of radiation transport is considered the gold standard for radiotherapy dose calculations. However, highly accurate Monte Carlo calculations are very time consuming and the use of Monte Carlo dose calculation methods is often not practical in clinical settings. With this in mind, a variation on the Monte Carlo method called macro Monte Carlo (MMC) was developed in the 1990s for electron beam radiotherapy dose calculations. To accelerate the simulation process, the electron MMC method used larger step sizes in regions of the simulation geometry where the size of the region was large relative to the size of a typical Monte Carlo step. These large steps were pre-computed using conventional Monte Carlo simulations and stored in a database featuring many step sizes and materials. The database was loaded into memory by a custom electron MMC code and used to transport electrons quickly through a heterogeneous absorbing geometry. The purpose of this thesis work was to apply the same techniques to proton radiotherapy dose calculation and light propagation Monte Carlo simulations. First, the MMC method was implemented for proton radiotherapy dose calculations. A database composed of pre-computed steps was created using MCNPX for many materials and beam energies. The database was used by a custom proton MMC code called PMMC to transport protons through a heterogeneous absorbing geometry. The PMMC code was tested against MCNPX for a number of different proton beam energies and geometries and proved to be accurate and much more efficient. The MMC method was also implemented for light propagation Monte Carlo simulations. The widely accepted Monte Carlo for multilayered media (MCML) was modified to incorporate the MMC method. The original MCML uses basic scattering and absorption physics to transport optical photons through multilayered geometries. The MMC version of MCML was tested against the original MCML code using a number of different geometries and
A Monte Carlo Synthetic-Acceleration Method for Solving the Thermal Radiation Diffusion Equation
Evans, Thomas M; Mosher, Scott W; Slattery, Stuart
2014-01-01
We present a novel synthetic-acceleration-based Monte Carlo method for solving the equilibrium thermal radiation diffusion equation in three dimensions. The algorithm performance is compared against traditional solution techniques using a Marshak benchmark problem and a more complex multiple-material problem. Our results show that our Monte Carlo method is not only an effective solver for sparse matrix systems but also performs competitively with deterministic methods, including preconditioned conjugate gradient, while producing numerically identical results. We also discuss various aspects of preconditioning the method and its general applicability to broader classes of problems.
A New Method for the Calculation of Diffusion Coefficients with Monte Carlo
NASA Astrophysics Data System (ADS)
Dorval, Eric
2014-06-01
This paper presents a new Monte Carlo-based method for the calculation of diffusion coefficients. One distinctive feature of this method is that it does not resort to the computation of transport cross sections directly, although their functional form is retained. Instead, a special type of tally derived from a deterministic estimate of Fick's Law is used for tallying the total cross section, which is then combined with a set of other standard Monte Carlo tallies. Some properties of this method are presented by means of numerical examples for a multi-group 1-D implementation. Calculated diffusion coefficients are in general good agreement with values obtained by other methods.
A Monte Carlo synthetic-acceleration method for solving the thermal radiation diffusion equation
Evans, Thomas M.; Mosher, Scott W.; Slattery, Stuart R.; Hamilton, Steven P.
2014-02-01
We present a novel synthetic-acceleration-based Monte Carlo method for solving the equilibrium thermal radiation diffusion equation in three spatial dimensions. The algorithm performance is compared against traditional solution techniques using a Marshak benchmark problem and a more complex multiple material problem. Our results show that our Monte Carlo method is an effective solver for sparse matrix systems. For solutions converged to the same tolerance, it performs competitively with deterministic methods including preconditioned conjugate gradient and GMRES. We also discuss various aspects of preconditioning the method and its general applicability to broader classes of problems.
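The idea of Monte Carlo as a sparse-linear-system solver predates synthetic acceleration: the classical approach estimates the Neumann series of a fixed-point iteration with random walks. The sketch below uses the Jacobi split x = Hx + f with uniform transitions and a fixed absorption probability; it is a textbook random-walk estimator under illustrative parameters, not the authors' preconditioned scheme:

```python
import random

def mc_linear_solve(A, b, n_walks=20_000, p_absorb=0.3, seed=1):
    """Random-walk (Neumann series) estimate of the solution of A x = b,
    using the Jacobi split x = H x + f with H = I - D^{-1} A, f = D^{-1} b.
    Requires A diagonally dominant enough for the series to converge."""
    n = len(A)
    H = [[(1.0 if i == j else 0.0) - A[i][j] / A[i][i] for j in range(n)]
         for i in range(n)]
    f = [b[i] / A[i][i] for i in range(n)]
    rng = random.Random(seed)
    x = [0.0] * n
    for i in range(n):
        total = 0.0
        for _ in range(n_walks):
            state, weight, score = i, 1.0, f[i]
            while rng.random() > p_absorb:   # continue the walk
                nxt = rng.randrange(n)       # uniform transition probability 1/n
                weight *= H[state][nxt] * n / (1.0 - p_absorb)  # importance weight
                state = nxt
                score += weight * f[state]   # tally the next Neumann-series term
            total += score
        x[i] = total / n_walks
    return x
```

Each walk is an unbiased estimate of the full series x_i = f_i + (Hf)_i + (H²f)_i + …, which is why the estimator converges to A⁻¹b without ever factoring the matrix.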
Hybrid Monte-Carlo method for simulating neutron and photon radiography
NASA Astrophysics Data System (ADS)
Wang, Han; Tang, Vincent
2013-11-01
We present a Hybrid Monte-Carlo method (HMCM) for simulating neutron and photon radiographs. HMCM utilizes the combination of a Monte-Carlo particle simulation for calculating incident film radiation and a statistical post-processing routine to simulate film noise. Since the method relies on MCNP for transport calculations, it is easily generalized to most non-destructive evaluation (NDE) simulations. We verify the method's accuracy through ASTM International's E592-99 publication, Standard Guide to Obtainable Equivalent Penetrameter Sensitivity for Radiography of Steel Plates [1]. Potential uses for the method include characterizing alternative radiological sources and simulating NDE radiographs.
Spin-orbit interactions in electronic structure quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Melton, Cody A.; Zhu, Minyi; Guo, Shi; Ambrosetti, Alberto; Pederiva, Francesco; Mitas, Lubos
2016-04-01
We develop a generalization of the fixed-phase diffusion Monte Carlo method for Hamiltonians that depend explicitly on particle spins, such as those with spin-orbit interactions. The method is formulated in a zero-variance manner and is similar to the treatment of nonlocal operators in commonly used static-spin calculations. Tests on atomic and molecular systems show that it is very accurate, on par with the fixed-node method. This opens electronic structure quantum Monte Carlo methods to a vast research area of quantum phenomena in which spin-related interactions play an important role.
Chiavassa, S; Lemosquet, A; Aubineau-Lanièce, I; de Carlan, L; Clairand, I; Ferrer, L; Bardiès, M; Franck, D; Zankl, M
2005-01-01
This paper aims at comparing dosimetric assessments performed with three Monte Carlo codes: EGS4, MCNP4c2 and MCNPX2.5e, using a realistic voxel phantom, namely the Zubal phantom, in two configurations of exposure. The first one deals with an external irradiation corresponding to the example of a radiological accident. The results are obtained using the EGS4 and the MCNP4c2 codes and expressed in terms of the mean absorbed dose (in Gy per source particle) for brain, lungs, liver and spleen. The second one deals with an internal exposure corresponding to the treatment of a medullary thyroid cancer by 131I-labelled radiopharmaceutical. The results are obtained by EGS4 and MCNPX2.5e and compared in terms of S-values (expressed in mGy per kBq and per hour) for liver, kidney, whole body and thyroid. The results of these two studies are presented and differences between the codes are analysed and discussed. PMID:16604715
Monte Carlo method for the evaluation of symptom association.
Barriga-Rivera, A; Elena, M; Moya, M J; Lopez-Alonso, M
2014-08-01
Gastroesophageal monitoring is limited to 96 hours by current technology. This work presents a computational model for investigating symptom association in gastroesophageal reflux disease with larger data samples, revealing important deficiencies of the current methodology that must be taken into account in clinical evaluation. A computational model based on Monte Carlo analysis was implemented to simulate patients with known statistical characteristics. Sets of 2000 10-day-long recordings were simulated and analyzed using the symptom index (SI), the symptom sensitivity index (SSI), and the symptom association probability (SAP). Afterwards, linear regression was applied to determine how these indexes depend on the number of reflux episodes, the number of symptoms, the duration of the monitoring, and the probability of association. All the indexes were biased estimators of symptom association and therefore do not account for the effect of chance: when symptoms and reflux were completely uncorrelated, the values of the indexes under study were greater than zero. On the other hand, longer recordings reduced variability in the estimation of the SI and the SSI while increasing the value of the SAP. Furthermore, if the number of symptoms remains below one-tenth of the number of reflux episodes, it is not possible to achieve a positive value of the SSI. A limitation of this computational model is that it does not consider feeding and sleeping periods, differences between reflux episodes, or causation; however, the conclusions are not affected by these limitations. These facts represent important limitations of symptom association analysis, and therefore invasive treatments must not be considered based on the value of these indexes alone until a new methodology provides a more reliable assessment. PMID:23082973
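The chance bias described above, a positive Symptom Index even when reflux and symptoms are statistically independent, is straightforward to reproduce. A sketch under assumed parameters (10-day recording, 500 reflux episodes, 10 symptoms, a 2-minute association window, 500 simulated patients; only the recording length comes from the abstract):

```python
import random

def simulate_si(n_reflux=500, n_symptoms=10, days=10, window_s=120,
                n_trials=500, seed=7):
    """Mean Symptom Index (fraction of symptoms preceded by a reflux
    episode within window_s seconds) over n_trials simulated recordings
    in which reflux and symptom times are drawn independently, i.e. any
    apparent association is pure chance."""
    rng = random.Random(seed)
    horizon = days * 24 * 3600  # recording length in seconds
    si_sum = 0.0
    for _ in range(n_trials):
        refluxes = [rng.uniform(0, horizon) for _ in range(n_reflux)]
        hits = sum(
            any(t - window_s <= r <= t for r in refluxes)
            for t in (rng.uniform(0, horizon) for _ in range(n_symptoms))
        )
        si_sum += hits / n_symptoms
    return si_sum / n_trials
```

With these assumed parameters the expected chance-level SI is roughly 1 − exp(−n_reflux · window/horizon) ≈ 0.07: clearly positive despite zero true association, which is the bias an unbiased index would have to subtract.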
Kolbun, N.; Leveque, Ph.; Abboud, F.; Bol, A.; Vynckier, S.; Gallez, B.
2010-10-15
Purpose: The experimental determination of doses at proximal distances from radioactive sources is difficult because of the steepness of the dose gradient. The goal of this study was to determine the relative radial dose distribution for a low dose rate {sup 192}Ir wire source using electron paramagnetic resonance imaging (EPRI) and to compare the results to those obtained using Gafchromic EBT film dosimetry and Monte Carlo (MC) simulations. Methods: Lithium formate and ammonium formate were chosen as the EPR dosimetric materials and were used to form cylindrical phantoms. The dose distribution of the stable radiation-induced free radicals in the lithium formate and ammonium formate phantoms was assessed by EPRI. EBT films were also inserted in ammonium formate phantoms for comparison. MC simulation was performed using the MCNP4C2 software code. Results: The radical signal in irradiated ammonium formate is contained in a single narrow EPR line, with an EPR peak-to-peak linewidth narrower than that of lithium formate ({approx}0.64 and 1.4 mT, respectively). The spatial resolution of EPR images was enhanced by a factor of 2.3 using ammonium formate compared to lithium formate because its linewidth is about 0.75 mT narrower than that of lithium formate. The EPRI results were consistent to within 1% with those of Gafchromic EBT films and MC simulations at distances from 1.0 to 2.9 mm. The radial dose values obtained by EPRI were about 4% lower at distances from 2.9 to 4.0 mm than those determined by MC simulation and EBT film dosimetry. Conclusions: Ammonium formate is a suitable material under certain conditions for use in brachytherapy dosimetry using EPRI. In this study, the authors demonstrated that the EPRI technique allows the estimation of the relative radial dose distribution at short distances for a {sup 192}Ir wire source.
NASA Astrophysics Data System (ADS)
Lin, Lin; Zhang, Mei
2015-02-01
The scaling Monte Carlo method and a Gaussian model are applied to simulate the transport of light beams with arbitrary waist radius. Monte Carlo simulation is usually performed for pencil or cone beams, in which the initial status of every photon is identical. In practical applications, however, incident light is typically focused on the sample, forming an approximately Gaussian distribution on the surface, and as the focus position in the sample is varied, the initial status of the photons is no longer identical. Using the hyperboloid method, the initial launch angle and coordinates are generated statistically according to the Gaussian waist size and focus depth. Scaling calculations are performed with baseline data from a standard Monte Carlo simulation. The scaling method incorporating the Gaussian model was tested and proved effective over a range of scattering coefficients from 20% to 180% of the value used in the baseline simulation. In most cases, the percentage error was less than 10%. Increasing the focus depth results in larger errors in the scaled radial reflectance in the region close to the optical axis. In addition to evaluating the accuracy of the scaling Monte Carlo method, this study has implications for inverse Monte Carlo with arbitrary optical-system parameters.
Comparison of the Monte Carlo adjoint-weighted and differential operator perturbation methods
Kiedrowski, Brian C; Brown, Forrest B
2010-01-01
Two perturbation theory methodologies are implemented for k-eigenvalue calculations in the continuous-energy Monte Carlo code, MCNP6. A comparison of the accuracy of these techniques, the differential operator and adjoint-weighted methods, is performed numerically and analytically. Typically, the adjoint-weighted method shows better performance over a larger range; however, there are exceptions.
The factorization method for Monte Carlo simulations of systems with a complex action
NASA Astrophysics Data System (ADS)
Ambjørn, J.; Anagnostopoulos, K. N.; Nishimura, J.; Verbaarschot, J. J. M.
2004-03-01
We propose a method for Monte Carlo simulations of systems with a complex action. The method has the advantages of being in principle applicable to any such system and provides a solution to the overlap problem. In some cases, like in the IKKT matrix model, a finite size scaling extrapolation can provide results for systems whose size would make it prohibitive to simulate directly.
Perfetti, Christopher M; Martin, William R; Rearden, Bradley T; Williams, Mark L
2012-01-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.
Energy-Driven Kinetic Monte Carlo Method and Its Application in Fullerene Coalescence.
Ding, Feng; Yakobson, Boris I
2014-09-01
Mimicking the conventional barrier-based kinetic Monte Carlo simulation, an energy-driven kinetic Monte Carlo (EDKMC) method was developed to study the structural transformation of carbon nanomaterials. The new method is many orders of magnitude faster than standard molecular dynamics or Monte Carlo (MC) simulations and thus allows rare events to be explored within a reasonable computational time. As an example, the temperature dependence of fullerene coalescence was studied. The simulation revealed, for the first time, that short capped single-walled carbon nanotubes (SWNTs) appear as low-energy metastable structures during the structural evolution. PMID:26278237
Improved methods of handling massive tallies in reactor Monte Carlo Code RMC
She, D.; Wang, K.; Sun, J.; Qiu, Y.
2013-07-01
Monte Carlo simulations containing a large number of tallies generally suffer severe performance penalties, because a significant amount of run time is spent searching for and scoring individual tally bins. This paper describes improved methods for handling large numbers of tallies, which have been implemented in the RMC Monte Carlo code. The calculation results demonstrate that the proposed methods considerably improve tally performance when massive tallies are treated: in a test case with 6 million tally regions, the run time per active cycle increased by only 10% over that of an inactive cycle. (authors)
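A hedged sketch of the kind of change described here, replacing a linear search over tally bins with a constant-time hash lookup (the class and key layout are illustrative, not RMC's actual data structures):

```python
class TallyBank:
    """Score tally contributions keyed by (cell, energy group).
    A dict gives O(1) lookup per event, instead of scanning a list
    of bins for every scored contribution."""

    def __init__(self):
        self.bins = {}  # (cell, energy_group) -> accumulated score

    def score(self, cell, group, contribution):
        key = (cell, group)
        self.bins[key] = self.bins.get(key, 0.0) + contribution

bank = TallyBank()
for cell, group, w in [(7, 2, 0.5), (7, 2, 0.25), (3, 1, 1.0)]:
    bank.score(cell, group, w)
```

With millions of tally regions, the per-event cost of this lookup is what dominates active-cycle overhead, which is why the reported 10% figure is notable.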
Comparison of uncertainty in fatigue tests obtained by the Monte Carlo method in two software packages
NASA Astrophysics Data System (ADS)
Trevisan, Lisiane; Kapper Fabricio, Daniel Antonio; Reguly, Afonso
2016-07-01
Supplement 1 to the “Guide to the expression of uncertainty in measurement” recommends the Monte Carlo method for calculating the expanded measurement uncertainty. The objective of this work is to compare the measurement uncertainty values obtained via the Monte Carlo method in two commercial software packages (Matlab® and Crystal Ball®) for the parameter ‘adjusted strain’, obtained from fatigue tests. Simulations were carried out using different numbers of iterations and different levels of confidence. The results showed only small differences between the measurement uncertainty values generated by the two packages.
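The GUM Supplement 1 procedure being compared can be sketched in a few lines (the toy strain model and input distributions below are assumptions for illustration; the paper's actual ‘adjusted strain’ model is not specified here):

```python
import random, statistics

def mc_uncertainty(f, dists, n=20000, coverage=0.95, seed=0):
    """GUM Supplement 1 style Monte Carlo propagation: draw inputs,
    push them through the measurement model f, and report the mean,
    the standard uncertainty, and a symmetric coverage interval."""
    rng = random.Random(seed)
    ys = sorted(f(*(d(rng) for d in dists)) for _ in range(n))
    mean = statistics.fmean(ys)
    u = statistics.stdev(ys)                 # standard uncertainty
    lo = ys[int(n * (1 - coverage) / 2)]
    hi = ys[int(n * (1 + coverage) / 2) - 1]
    return mean, u, (lo, hi)

# toy model: strain = load / area, both inputs with Gaussian noise
mean, u, interval = mc_uncertainty(
    lambda load, area: load / area,
    [lambda r: r.gauss(100.0, 1.0), lambda r: r.gauss(10.0, 0.1)])
```

Repeating this with different `n` and `coverage` values mirrors the paper's comparison across numbers of iterations and confidence levels.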
Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport
McKinley, M S; Brooks III, E D; Daffin, F
2004-12-13
Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.
Revised methods for few-group cross sections generation in the Serpent Monte Carlo code
Fridman, E.; Leppaenen, J.
2012-07-01
This paper presents new calculation methods, recently implemented in the Serpent Monte Carlo code, and related to the production of homogenized few-group constants for deterministic 3D core analysis. The new methods fall under three topics: 1) Improved treatment of neutron-multiplying scattering reactions, 2) Group constant generation in reflectors and other non-fissile regions and 3) Homogenization in leakage-corrected criticality spectrum. The methodology is demonstrated by a numerical example, comparing a deterministic nodal diffusion calculation using Serpent-generated cross sections to a reference full-core Monte Carlo simulation. It is concluded that the new methodology improves the results of the deterministic calculation, and paves the way for Monte Carlo based group constant generation. (authors)
Quantum-trajectory Monte Carlo method for study of electron-crystal interaction in STEM.
Ruan, Z; Zeng, R G; Ming, Y; Zhang, M; Da, B; Mao, S F; Ding, Z J
2015-07-21
In this paper, a novel quantum-trajectory Monte Carlo simulation method is developed to study electron-beam interaction with a crystalline solid, for application to electron microscopy and spectroscopy. The method combines the Bohmian quantum trajectory method, which treats electron elastic scattering and diffraction in a crystal, with Monte Carlo sampling of electron inelastic scattering events along the quantum trajectory paths. We study the electron scattering and secondary electron generation process in crystals for a focused incident electron beam, leading to an understanding of the imaging mechanism behind the atomic-resolution secondary electron images recently achieved experimentally with a scanning transmission electron microscope. In this method, the Bohmian quantum trajectories are first calculated from a wave function obtained via numerical solution of the time-dependent Schrödinger equation with a multislice method. The impact-parameter-dependent inner-shell excitation cross section then enables Monte Carlo sampling of the ionization events produced by incident electron trajectories traveling along atom columns, for the excitation of high-energy knock-on secondary electrons. Following cascade production, the transport and emission of true secondary electrons of very low energies are traced by a conventional Monte Carlo simulation method to produce image signals. Comparison of the simulated image for a Si(110) crystal with the experimental image indicates that the dominant mechanism behind the atomic resolution of the secondary electron image is the inner-shell ionization events generated by the high-energy electron beam. PMID:26082190
Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.
2008-10-31
Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.
Markov chain Monte Carlo solution of the BK equation through the Newton-Kantorovich method
NASA Astrophysics Data System (ADS)
Bożek, Krzysztof; Kutak, Krzysztof; Placzek, Wieslaw
2013-07-01
We propose a new method for the Monte Carlo solution of non-linear integral equations by combining the Newton-Kantorovich method for solving non-linear equations with the Markov chain Monte Carlo (MCMC) method for solving linear equations. The Newton-Kantorovich method allows the non-linear equation to be expressed as a system of linear equations, which can then be treated by the MCMC (random-walk) algorithm. We apply this method to the Balitsky-Kovchegov (BK) equation describing the evolution of the gluon density at low x. Results of numerical computations show that the MCMC method is both precise and efficient. The presented algorithm may be particularly suited to solving more complicated and higher-dimensional non-linear integral equations, for which traditional methods become unfeasible.
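The linear-solver half of this approach, a random-walk (von Neumann-Ulam) Monte Carlo estimate of one component of a linear system x = b + Kx of the kind each Newton-Kantorovich iteration produces, can be sketched as follows (the absorption probability and uniform transitions are illustrative choices, not the paper's scheme):

```python
import random

def neumann_ulam_solve(K, b, i, n_walks=200000, seed=0):
    """Estimate component i of the solution of x = b + K x by random
    walks: each walk accumulates products of K entries divided by the
    transition probabilities, giving an unbiased estimate of the
    Neumann series sum_k (K^k b)_i."""
    m = len(b)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        state, weight, estimate = i, 1.0, b[i]
        while True:
            if rng.random() < 0.5:        # absorb with probability 1/2
                break
            nxt = rng.randrange(m)        # else jump uniformly
            weight *= K[state][nxt] / (0.5 / m)
            state = nxt
            estimate += weight * b[state]
        total += estimate
    return total / n_walks

K = [[0.1, 0.2], [0.0, 0.3]]
b = [1.0, 1.0]
x0 = neumann_ulam_solve(K, b, 0)   # exact value: ((I-K)^-1 b)[0] = 10/7
```

The estimator converges whenever the Neumann series does, which is the regime in which the linearized BK iterations operate.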
Power Analysis for Complex Mediational Designs Using Monte Carlo Methods
ERIC Educational Resources Information Center
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2010-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex…
Neutron streaming through shield ducts using a discrete ordinates/Monte Carlo method
Urban, W.T.; Baker, R.S.
1993-08-18
A common problem in shield design is determining the neutron flux that streams through ducts in shields, as well as the flux that penetrates the shield after having traveled partway down a duct. The neutrons that stream down the duct can be computed in a straightforward manner using Monte Carlo techniques, whereas those that must penetrate a significant portion of the shield are more easily handled using discrete ordinates methods. A hybrid discrete ordinates/Monte Carlo code, TWODANT/MC, which is an extension of the existing discrete ordinates code TWODANT, has been developed at Los Alamos to allow the efficient, accurate treatment of both streaming and deep-penetration problems in a single calculation. In this paper we provide examples of the application of TWODANT/MC to typical geometries encountered in shield design and compare the results with those obtained using the Los Alamos Monte Carlo code MCNP.
A Monte Carlo method for 3D thermal infrared radiative transfer
NASA Astrophysics Data System (ADS)
Chen, Y.; Liou, K. N.
2006-09-01
A 3D Monte Carlo model for specific application to the broadband thermal radiative transfer has been developed in which the emissivities for gases and cloud particles are parameterized by using a single cubic element as the building block in 3D space. For spectral integration in the thermal infrared, the correlated k-distribution method has been used for the sorting of gaseous absorption lines in multiple-scattering atmospheres involving 3D clouds. To check the Monte-Carlo simulation, we compare a variety of 1D broadband atmospheric fluxes and heating rates to those computed from the conventional plane-parallel (PP) model and demonstrate excellent agreement between the two. Comparisons of the Monte Carlo results for broadband thermal cooling rates in 3D clouds to those computed from the delta-diffusion approximation for 3D radiative transfer and the independent pixel-by-pixel approximation are subsequently carried out to understand the relative merits of these approaches.
An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho
2001-01-01
Examined the accuracy of the Gibbs sampling Markov chain Monte Carlo procedure for estimating item and person (theta) parameters in the one-parameter logistic model. Analyzed four empirical datasets using the Gibbs sampling, conditional maximum likelihood, marginal maximum likelihood, and joint maximum likelihood methods. Discusses the conditions…
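The structure of such a sampler can be sketched with a Metropolis-within-Gibbs update of person abilities in the one-parameter logistic (Rasch) model (a toy illustration with item difficulties held fixed at zero; not the exact scheme evaluated in the study):

```python
import math, random

def rasch_gibbs(data, n_iter=500, seed=0):
    """Metropolis-within-Gibbs sketch for the Rasch model: each person
    ability theta_i gets a random-walk Metropolis update against its
    item responses under a N(0,1) prior. A full sampler would update
    the item difficulties beta analogously."""
    rng = random.Random(seed)
    n_persons, n_items = len(data), len(data[0])
    theta = [0.0] * n_persons
    beta = [0.0] * n_items            # held fixed in this sketch

    def log_post(i, th):
        ll = -0.5 * th * th           # N(0,1) prior on theta
        for j in range(n_items):
            p = 1.0 / (1.0 + math.exp(-(th - beta[j])))
            ll += math.log(p if data[i][j] else 1.0 - p)
        return ll

    draws = []
    for _ in range(n_iter):
        for i in range(n_persons):
            prop = theta[i] + rng.gauss(0.0, 0.5)
            if math.log(rng.random()) < log_post(i, prop) - log_post(i, theta[i]):
                theta[i] = prop
        draws.append(list(theta))
    return draws

# four persons, three items: person 2 answers all correctly, person 3 none
data = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]]
draws = rasch_gibbs(data)
```

Posterior means of theta taken over the post-burn-in draws play the role of the Gibbs estimates compared against the maximum likelihood methods in the abstract.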
ERIC Educational Resources Information Center
Kim, Jee-Seon; Bolt, Daniel M.
2007-01-01
The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…
Quantum Monte Carlo Methods for First Principles Simulation of Liquid Water
ERIC Educational Resources Information Center
Gergely, John Robert
2009-01-01
Obtaining an accurate microscopic description of water structure and dynamics is of great interest to molecular biology researchers and in the physics and quantum chemistry simulation communities. This dissertation describes efforts to apply quantum Monte Carlo methods to this problem with the goal of making progress toward a fully "ab initio"…
Novel approach to modeling spectral-domain optical coherence tomography with Monte Carlo method
NASA Astrophysics Data System (ADS)
Kraszewski, Maciej; Trojanowski, Michal; Strakowski, Marcin; Pluciński, Jerzy; Kosmowski, Bogdan B.
2014-05-01
Numerical modeling of Optical Coherence Tomography (OCT) systems is needed for optical setup optimization, development of new signal processing methods, and assessment of the impact of different physical phenomena inside the sample on the OCT signal. The Monte Carlo method has often been used for modeling OCT, as it is a well-established tool for simulating light propagation in scattering media. However, in this method light is modeled as a set of energy packets traveling along straight lines, which reduces the accuracy of Monte Carlo calculations when simulating the propagation of focused beams. Since such beams are commonly used in OCT systems, the classical Monte Carlo algorithm needs to be modified. In the presented research, we have developed a model of SD-OCT systems using a combination of Monte Carlo and analytical methods. Our model includes the properties of the optical setup of the OCT system, which are often omitted in other research. We present the applied algorithms and a comparison of simulation results with SD-OCT scans of optical phantoms. We have found that our model can be used to determine the level of the OCT signal coming from scattering particles inside turbid media placed at different positions relative to the focal point of the incident light beam, and it may improve the accuracy of simulating OCT systems.
Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method
NASA Astrophysics Data System (ADS)
Gilbreth, C. N.; Alhassid, Y.
2015-03-01
Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
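The kind of numerical stabilization at issue can be sketched by accumulating a long matrix product in the factored form Q·diag(d)·T, so that widely separated scales live in d instead of being lost to round-off (a generic QR-based sketch; AFMC-specific details such as the canonical-ensemble particle-number projection are omitted):

```python
import numpy as np

def stabilized_product(mats):
    """Accumulate the product mats[-1] @ ... @ mats[0] in the factored
    form Q @ diag(d) @ T. After each multiplication, a QR decomposition
    pulls the large and small scales into d, keeping Q orthogonal and
    T well conditioned (the idea behind stabilized AFMC products)."""
    n = mats[0].shape[0]
    Q, d, T = np.eye(n), np.ones(n), np.eye(n)
    for M in mats:
        A = (M @ Q) * d                # column j of M @ Q scaled by d[j]
        Q, R = np.linalg.qr(A)
        d = np.diag(R).copy()          # scales (with signs) live in d
        T = (R / d[:, None]) @ T       # well-conditioned remainder
    return Q, d, T

rng = np.random.default_rng(0)
mats = [np.exp(rng.standard_normal((4, 4))) for _ in range(50)]
Q, d, T = stabilized_product(mats)
full = Q @ np.diag(d) @ T
direct = np.eye(4)
for M in mats:
    direct = M @ direct
```

The factorization is algebraically exact at every step; its benefit over the direct product appears once the scale separation in d exceeds what a single float64 matrix can represent.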
Monte-Carlo methods make Dempster-Shafer formalism feasible
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik YA.; Bernat, Andrew; Borrett, Walter; Mariscal, Yvonne; Villa, Elsa
1991-01-01
One of the main obstacles to applications of the Dempster-Shafer formalism is its computational complexity. If we combine m different pieces of knowledge, then in the general case we have to perform up to 2^m computational steps, which for large m is infeasible. For several important cases, algorithms with smaller running time have been proposed. We prove, however, that if we want to compute the belief bel(Q) in any given query Q, then exponential time is inevitable. It is still inevitable if we want to compute bel(Q) with a given precision epsilon; this restriction corresponds to the natural idea that since the initial masses are known only approximately, there is no sense in trying to compute bel(Q) precisely. A further idea is that there is always some doubt about the whole knowledge, so there is always a probability p_0 that the expert's knowledge is wrong. In view of that, it is sufficient to have an algorithm that gives a correct answer with probability greater than 1 - p_0. If we use the original Dempster combination rule, this possibility diminishes the running time but still leaves the problem infeasible in the general case. We show that for the alternative combination rules proposed by Smets and Yager, feasible methods exist. We also show how these methods can be parallelized, and which parallelization model fits this problem best.
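The Monte Carlo idea can be sketched as follows: instead of enumerating all 2^m focal-set combinations, sample one focal element from each piece of evidence, intersect them, and count how often the result is a nonempty subset of the query (this sketch uses Smets' unnormalized conjunctive rule, under which no renormalization of the estimate is needed; it is an illustration, not the paper's exact algorithm):

```python
import random

def mc_belief(mass_functions, query, n_samples=20000, seed=0):
    """Monte Carlo estimate of bel(query) under the unnormalized
    conjunctive combination: each sample draws one focal set per mass
    function (with probability equal to its mass), intersects them,
    and scores a hit if the intersection is a nonempty subset of query."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        combined = None
        for mf in mass_functions:
            focals, masses = zip(*mf)
            focal = rng.choices(focals, weights=masses)[0]
            combined = focal if combined is None else combined & focal
        if combined and combined <= query:
            hits += 1
    return hits / n_samples

A, B = frozenset({"a"}), frozenset({"b"})
m1 = [(A, 0.6), (A | B, 0.4)]      # two pieces of evidence over {a, b}
m2 = [(A, 0.5), (A | B, 0.5)]
belief_a = mc_belief([m1, m2], query=A)   # exact value here is 0.8
```

The sample count needed for a target precision depends only on epsilon and the confidence level, not on m, which is what makes the approach feasible where exact combination is not.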
A hybrid transport-diffusion method for Monte Carlo radiative-transfer simulations
Densmore, Jeffery D. (jdd@lanl.gov); Urbatsch, Todd J. (tmonster@lanl.gov); Evans, Thomas M. (tme@lanl.gov); Buksas, Michael W. (mwbuksas@lanl.gov)
2007-03-20
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo particle-transport simulations in diffusive media. If standard Monte Carlo is used in such media, particle histories will consist of many small steps, resulting in a computationally expensive calculation. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many small Monte Carlo steps, thus increasing the efficiency of the simulation. In addition, given that DDMC is based on a diffusion equation, it should produce accurate solutions if used judiciously. In practice, DDMC is combined with standard Monte Carlo to form a hybrid transport-diffusion method that can accurately simulate problems with both diffusive and non-diffusive regions. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for nonlinear, time-dependent, radiative-transfer calculations. The use of DDMC in these types of problems is advantageous since, due to the underlying linearizations, optically thick regions appear to be diffusive. First, we employ a diffusion equation that is discretized in space but is continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. Also, we treat the interface between optically thick and optically thin regions with an improved method, based on the asymptotic diffusion-limit boundary condition, that can produce accurate results regardless of the angular distribution of the incident Monte Carlo particles. Finally, we develop a technique for estimating radiation momentum deposition during the
NASA Astrophysics Data System (ADS)
Kim, Minho; Lee, Hyounggun; Kim, Hyosim; Park, Hongmin; Lee, Wonho; Park, Sungho
2014-03-01
This study evaluated the Monte Carlo method for determining the dose calculation in fluoroscopy by using a realistic human phantom. The dose was calculated by using Monte Carlo N-particle extended (MCNPX) in simulations and was measured by using Korean Typical Man-2 (KTMAN-2) phantom in the experiments. MCNPX is a widely-used simulation tool based on the Monte-Carlo method and uses random sampling. KTMAN-2 is a virtual phantom written in MCNPX language and is based on the typical Korean man. This study was divided into two parts: simulations and experiments. In the former, the spectrum generation program (SRS-78) was used to obtain the output energy spectrum for fluoroscopy; then, each dose to the target organ was calculated using KTMAN-2 with MCNPX. In the latter part, the output of the fluoroscope was calibrated first and TLDs (Thermoluminescent dosimeter) were inserted in the ART (Alderson Radiation Therapy) phantom at the same places as in the simulation. Thus, the phantom was exposed to radiation, and the simulated and the experimental doses were compared. In order to change the simulation unit to the dose unit, we set the normalization factor (NF) for unit conversion. Comparing the simulated with the experimental results, we found most of the values to be similar, which proved the effectiveness of the Monte Carlo method in fluoroscopic dose evaluation. The equipment used in this study included a TLD, a TLD reader, an ART phantom, an ionization chamber and a fluoroscope.
A new Monte Carlo power method for the eigenvalue problem of transfer matrices
Koma, Tohru
1993-04-01
The author proposes a new Monte Carlo method for calculating eigenvalues of transfer matrices leading to free energies and to correlation lengths of classical and quantum many-body systems. Generally, this method can be applied to the calculation of the maximum eigenvalue of a nonnegative matrix A such that all the matrix elements of A^k are strictly positive for an integer k. This method is based on a new representation of the maximum eigenvalue of the matrix A as the thermal average of a certain observable of a many-body system. Therefore one can easily calculate the maximum eigenvalue of a transfer matrix leading to the free energy in the standard Monte Carlo simulations, such as the Metropolis algorithm. As test cases, the author calculates the free energies of the square-lattice Ising model and of the spin-1/2 XY Heisenberg chain. He also proves two useful theorems on the ergodicity in quantum Monte Carlo algorithms, or more generally, on the ergodicity of Monte Carlo algorithms using the new representation of the maximum eigenvalue of the matrix A. 39 refs., 5 figs., 2 tabs.
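The idea of recovering the maximum eigenvalue of a nonnegative matrix from Monte Carlo averages can be sketched with a simple random-walk estimator of the ratio 1^T A^(k+1) 1 / 1^T A^k 1 (an illustrative stand-in for the thermal-average representation in the abstract, not the paper's construction):

```python
import random

def mc_max_eigenvalue(A, k=10, n_walks=50000, seed=0):
    """Estimate lambda_max of a nonnegative matrix A by random walks:
    a walk of length k with uniform transitions and importance weights
    A[s][s'] * m gives an unbiased estimate of 1^T A^k 1; the ratio of
    the (k+1)- and k-step averages converges to the maximum eigenvalue."""
    m = len(A)
    rng = random.Random(seed)
    sum_k = sum_k1 = 0.0
    for _ in range(n_walks):
        state = rng.randrange(m)
        weight = float(m)                 # accounts for the uniform start
        for _ in range(k):
            nxt = rng.randrange(m)
            weight *= A[state][nxt] * m   # importance weight per step
            state = nxt
        sum_k += weight
        nxt = rng.randrange(m)            # one extra step for the numerator
        sum_k1 += weight * A[state][nxt] * m
    return sum_k1 / sum_k

A = [[2.0, 1.0], [1.0, 2.0]]              # eigenvalues are 3 and 1
lam = mc_max_eigenvalue(A)
```

For transfer matrices, the same ratio structure is what connects the maximum eigenvalue to a thermal average that a Metropolis simulation can estimate.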
A Hamiltonian Monte-Carlo method for Bayesian inference of supermassive black hole binaries
NASA Astrophysics Data System (ADS)
Porter, Edward K.; Carré, Jérôme
2014-07-01
We investigate the use of a Hamiltonian Monte-Carlo to map out the posterior density function for supermassive black hole binaries. While previous Markov chain Monte-Carlo (MCMC) methods, such as Metropolis-Hastings MCMC, have been successfully employed for a number of different gravitational wave sources, these methods are essentially random walk algorithms. The Hamiltonian Monte-Carlo treats the inverse likelihood surface as a ‘gravitational potential’ and, by introducing canonical positions and momenta, dynamically evolves the Markov chain by solving Hamilton's equations of motion. This method is not as widely used as other MCMC algorithms because of the necessity of calculating gradients of the log-likelihood, which for most applications results in a bottleneck that makes the algorithm computationally prohibitive. We circumvent this problem by using accepted initial phase-space trajectory points to analytically fit each of the individual gradients. Eliminating the waveform generation needed for the numerical derivatives reduces the total number of required templates for a 10^6-iteration chain from ~10^9 to ~10^6. The result is an implementation of the Hamiltonian Monte-Carlo that is faster, and more efficient by a factor of approximately the dimension of the parameter space, than a Hessian MCMC.
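The core mechanics described here, Gaussian momenta, leapfrog integration of Hamilton's equations, and a Metropolis accept step, can be sketched on a toy target (the analytic gradient below stands in for the fitted gradients of the abstract; names and tuning values are illustrative):

```python
import math, random

def hmc_sample(log_prob_grad, x0, n_samples=2000, eps=0.2, n_leap=10, seed=0):
    """Hamiltonian Monte Carlo sketch: log_prob_grad(x) returns
    (log p(x), grad log p(x)). Each iteration draws momenta, runs
    n_leap leapfrog steps of size eps, and accepts with a Metropolis
    test on the change in the Hamiltonian."""
    rng = random.Random(seed)
    x = list(x0)
    lp, grad = log_prob_grad(x)
    samples = []
    for _ in range(n_samples):
        p = [rng.gauss(0.0, 1.0) for _ in x]
        h0 = lp - 0.5 * sum(pi * pi for pi in p)
        xn, pn, g = list(x), list(p), list(grad)
        for _ in range(n_leap):                          # leapfrog
            pn = [pi + 0.5 * eps * gi for pi, gi in zip(pn, g)]
            xn = [xi + eps * pi for xi, pi in zip(xn, pn)]
            lpn, g = log_prob_grad(xn)
            pn = [pi + 0.5 * eps * gi for pi, gi in zip(pn, g)]
        h1 = lpn - 0.5 * sum(pi * pi for pi in pn)
        if math.log(rng.random()) < h1 - h0:             # Metropolis accept
            x, lp, grad = xn, lpn, g
        samples.append(list(x))
    return samples

# toy target: standard bivariate normal, log p = -|x|^2 / 2
target = lambda x: (-0.5 * sum(v * v for v in x), [-v for v in x])
samples = hmc_sample(target, [3.0, -3.0])
```

The two gradient evaluations folded into each leapfrog step are exactly the cost that the paper's analytic gradient fit removes for waveform-based likelihoods.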
Linear multistep methods, particle filtering and sequential Monte Carlo
NASA Astrophysics Data System (ADS)
Arnold, Andrea; Calvetti, Daniela; Somersalo, Erkki
2013-08-01
Numerical integration is the main bottleneck in particle filter methodologies for dynamic inverse problems, in which model parameters, initial values, and non-observable components of an ordinary differential equation (ODE) system are estimated from partial, noisy observations: proposals may result in stiff systems that first slow down or paralyze the time integration process and then end up being discarded. The immediate advantage of formulating the problem sequentially is that the integration is carried out on shorter intervals, reducing the risk of long integration processes followed by rejections. We propose to solve the ODE systems within a particle filter framework with higher-order numerical integrators that can handle stiffness, and to base the choice of the variance of the innovation on estimates of the discretization errors. The application of linear multistep methods to particle filters gives a handle on the stability and accuracy of the propagation, and linking the innovation variance to the accuracy estimate helps keep the variance of the estimate as low as possible. The effectiveness of the methodology is demonstrated with a simple ODE system similar to those arising in biochemical applications.
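The particle filter skeleton into which such integrators plug (propagate, weight, resample) can be sketched on a toy model (the one-line linear dynamic below is an illustrative stand-in for the ODE propagation and multistep integration discussed in the abstract):

```python
import math, random

def bootstrap_filter(observations, n_particles=1000, seed=0):
    """Bootstrap particle filter for the toy state-space model
    x_t = 0.9 x_{t-1} + N(0, 0.5^2),  y_t = x_t + N(0, 0.5^2).
    Each step: propagate particles, weight by the observation
    likelihood, record the weighted mean, then resample."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # propagate (where an ODE integrator would be called)
        particles = [0.9 * x + rng.gauss(0.0, 0.5) for x in particles]
        # weight by the Gaussian observation likelihood (sigma = 0.5)
        weights = [math.exp(-0.5 * ((y - x) / 0.5) ** 2) for x in particles]
        total = sum(weights)
        estimates.append(sum(w * x for w, x in zip(weights, particles)) / total)
        # multinomial resampling
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

obs = [1.0, 0.8, 1.1, 0.9, 1.0]
est = bootstrap_filter(obs)
```

In the paper's setting, the propagation line is replaced by a stiff-capable linear multistep solve, and the innovation variance (here the fixed 0.5) is tied to the integrator's local error estimate.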
Umezawa, Naoto; Tsuneyuki, Shinji
2004-10-15
We have implemented excited electronic state calculations for a helium atom by the transcorrelated variational Monte Carlo (TC-VMC) method. In this method, a Jastrow-Slater-type wave function is efficiently optimized not only for the Jastrow factor but also for the Slater determinant. Since the formalism of the TC-VMC method is based on variance minimization, excited-state as well as ground-state calculations are feasible. It is found that both the first and the second excitation energies given by TC-VMC are much closer to the experimental data than those given by the variational Monte Carlo method using Hartree-Fock orbitals. The successful results of the TC-VMC method are attributed to the nodal optimization of the wave functions. PMID:15473772
A high-order photon Monte Carlo method for radiative transfer in direct numerical simulation
Wu, Y.; Modest, M.F.; Haworth, D.C. . E-mail: dch12@psu.edu
2007-05-01
A high-order photon Monte Carlo method is developed to solve the radiative transfer equation. The statistical and discretization errors of the computed radiative heat flux and radiation source term are isolated and quantified. Up to sixth-order spatial accuracy is demonstrated for the radiative heat flux, and up to fourth-order accuracy for the radiation source term. This demonstrates the compatibility of the method with high-fidelity direct numerical simulation (DNS) for chemically reacting flows. The method is applied to address radiative heat transfer in a one-dimensional laminar premixed flame and a statistically one-dimensional turbulent premixed flame. Modifications of the flame structure with radiation are noted in both cases, and the effects of turbulence/radiation interactions on the local reaction zone structure are revealed for the turbulent flame. Computational issues in using a photon Monte Carlo method for DNS of turbulent reacting flows are discussed.
A Monte Carlo implementation of the predictor-corrector Quasi-Static method
Hackemack, M. W.; Ragusa, J. C.; Griesheimer, D. P.; Pounders, J. M.
2013-07-01
The Quasi-Static method (QS) is a useful tool for solving reactor transients since it allows for larger time steps when updating neutron distributions. Because of the beneficial attributes of Monte Carlo (MC) methods (exact geometries and continuous energy treatment), it is desirable to develop a MC implementation for the QS method. In this work, the latest version of the QS method known as the Predictor-Corrector Quasi-Static method is implemented. Experiments utilizing two energy-groups provide results that show good agreement with analytical and reference solutions. The method as presented can easily be implemented in any continuous energy, arbitrary geometry, MC code. (authors)
Searching therapeutic agents for treatment of Alzheimer disease using the Monte Carlo method.
Toropova, Mariya A; Toropov, Andrey A; Raška, Ivan; Rašková, Mária
2015-09-01
Quantitative structure-activity relationships (QSARs) for the pIC50 (binding affinity) of gamma-secretase inhibitors can be constructed with the Monte Carlo method using the CORAL software (http://www.insilico.eu/coral). A considerable influence of the presence of rings of various types on the above endpoint has been detected. The mechanistic interpretation and the domain of applicability of the QSARs are discussed. Methods to select new potential gamma-secretase inhibitors are suggested. PMID:26164035
NASA Astrophysics Data System (ADS)
Zhang, Xiaofeng
2012-03-01
Image formation in fluorescence diffuse optical tomography critically depends on the construction of the Jacobian matrix. For clinical and preclinical applications, because of the highly heterogeneous characteristics of the medium, Monte Carlo methods are frequently adopted to construct the Jacobian. Conventional adjoint Monte Carlo methods typically compute the Jacobian by multiplying the photon density fields radiated from the source at the excitation wavelength and from the detector at the emission wavelength. Nonetheless, this approach assumes that the source and the detector in the Green's function are reciprocal, which is not valid in general. This assumption is particularly questionable in small animal imaging, where the mean free path of photons is typically only one order of magnitude smaller than the representative dimension of the medium. We propose a new method that does not rely on the reciprocity of the source and the detector, by tracing photon propagation entirely from the source to the detector. This method relies on perturbation Monte Carlo theory to account for the differences in the optical properties of the medium at the excitation and emission wavelengths. Compared to adjoint methods, the proposed method more faithfully reflects the physical process of photon transport in diffusive media and is more efficient in constructing the Jacobian matrix for densely sampled configurations.
Frequency-domain Monte Carlo method for linear oscillatory gas flows
NASA Astrophysics Data System (ADS)
Ladiges, Daniel R.; Sader, John E.
2015-03-01
Gas flows generated by resonating nanoscale devices inherently occur in the non-continuum, low Mach number regime. Numerical simulations of such flows using the standard direct simulation Monte Carlo (DSMC) method are hindered by high statistical noise, which has motivated the development of several alternate Monte Carlo methods for low Mach number flows. Here, we present a frequency-domain low Mach number Monte Carlo method based on the Boltzmann-BGK equation, for the simulation of oscillatory gas flows. This circumvents the need for temporal simulations, as is currently required, and provides direct access to both amplitude and phase information using a pseudo-steady algorithm. The proposed method is validated for oscillatory Couette flow and the flow generated by an oscillating sphere. Good agreement is found with an existing time-domain method and accurate numerical solutions of the Boltzmann-BGK equation. Analysis of these simulations using a rigorous statistical approach shows that the frequency-domain method provides a significant improvement in computational speed.
Nguyen, Jennifer; Hayakawa, Carole K.; Mourant, Judith R.; Venugopalan, Vasan; Spanier, Jerome
2016-01-01
We present a polarization-sensitive, transport-rigorous perturbation Monte Carlo (pMC) method to model the impact of optical property changes on reflectance measurements within a discrete particle scattering model. The model consists of three log-normally distributed populations of Mie scatterers that approximate biologically relevant cervical tissue properties. Our method provides reflectance estimates for perturbations across wavelength and/or scattering model parameters. We test our pMC model performance by perturbing across number densities and mean particle radii, and compare pMC reflectance estimates with those obtained from conventional Monte Carlo simulations. These tests allow us to explore different factors that control pMC performance and to evaluate the gains in computational efficiency that our pMC method provides. PMID:27231642
Estimation of magnetocaloric properties by using Monte Carlo method for AMRR cycle
NASA Astrophysics Data System (ADS)
Arai, R.; Tamura, R.; Fukuda, H.; Li, J.; Saito, A. T.; Kaji, S.; Nakagome, H.; Numazawa, T.
2015-12-01
In order to achieve a wide refrigerating temperature range in magnetic refrigeration, it is effective to layer multiple materials with different Curie temperatures, and a detailed understanding of the physical properties of the materials is crucial for optimizing the material selection and the layered structure. In the present study, we discuss methods for estimating changes in physical properties, particularly the Curie temperature, when some of the Gd atoms are replaced by non-magnetic elements, taking Gd, a typical magnetocaloric ferromagnet, as the base material. For this purpose, alongside calculations using the S=7/2 Ising model and the Monte Carlo method, we measured the specific heat and magnetization of Gd-R alloys (R = Y, Zr) to compare experimental and calculated values. The results showed that the magnetic entropy change, specific heat, and Curie temperature can be estimated with good accuracy using the Monte Carlo method.
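The Monte Carlo ingredient of such a study can be sketched with a site-diluted 2D Ising model, using spin-1/2 as a stand-in for the paper's S=7/2 model: substituting a fraction of magnetic sites with non-magnetic ones lowers the magnetization at fixed temperature, the qualitative effect behind the Curie-temperature shift. Lattice size, temperature, and dilution fraction are illustrative.

```python
import math, random

random.seed(2)

def metropolis_ising(L=16, T=0.8, vacancy=0.0, sweeps=200):
    """Site-diluted 2D Ising model, Metropolis sampling from an ordered
    start. Returns |magnetization| per lattice site (vacancies counted)."""
    spin = [[0 if random.random() < vacancy else 1 for _ in range(L)]
            for _ in range(L)]
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = random.randrange(L), random.randrange(L)
            s = spin[i][j]
            if s == 0:
                continue                      # non-magnetic (substituted) site
            nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2.0 * s * nb                 # energy cost of a flip, J = 1
            if dE <= 0.0 or random.random() < math.exp(-dE / T):
                spin[i][j] = -s
    return abs(sum(sum(row) for row in spin)) / (L * L)

m_pure = metropolis_ising(vacancy=0.0)
m_diluted = metropolis_ising(vacancy=0.2)   # 20% of magnetic sites substituted
```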
Time-step limits for a Monte Carlo Compton-scattering method
Densmore, Jeffery D; Warsa, James S; Lowrie, Robert B
2008-01-01
Compton scattering is an important aspect of radiative transfer in high energy density applications. In this process, the frequency and direction of a photon are altered by colliding with a free electron. The change in frequency of a scattered photon results in an energy exchange between the photon and target electron and energy coupling between radiation and matter. Canfield, Howard, and Liang have presented a Monte Carlo method for simulating Compton scattering that models the photon-electron collision kinematics exactly. However, implementing their technique in multiphysics problems that include the effects of radiation-matter energy coupling typically requires evaluating the material temperature at its beginning-of-time-step value. This explicit evaluation can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and present time-step limits that avoid instabilities and nonphysical oscillations by considering a spatially independent, purely scattering radiative-transfer problem. Examining a simplified problem is justified because it isolates the effects of Compton scattering, and existing Monte Carlo techniques can robustly model other physics (such as absorption, emission, sources, and photon streaming). Our analysis begins by simplifying the equations that are solved via Monte Carlo within each time step using the Fokker-Planck approximation. Next, we linearize these approximate equations about an equilibrium solution such that the resulting linearized equations describe perturbations about this equilibrium. We then solve these linearized equations over a time step and determine the corresponding eigenvalues, quantities that can predict the behavior of solutions generated by a Monte Carlo simulation as a function of time-step size and other physical parameters. With these results, we develop our time-step limits. This approach is similar to our recent investigation of time discretizations for the
Perfetti, C.; Martin, W.; Rearden, B.; Williams, M.
2012-07-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the Shift Monte Carlo code within the SCALE code package. The methods were used for two small-scale test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods. (authors)
A comparison of generalized hybrid Monte Carlo methods with and without momentum flip
Akhmatskaya, Elena; Bou-Rabee, Nawaf; Reich, Sebastian
2009-04-01
The generalized hybrid Monte Carlo (GHMC) method combines Metropolis-corrected constant-energy simulations with a partial random refreshment step in the particle momenta. The standard detailed balance condition requires that momenta be negated upon rejection of a molecular dynamics proposal step. The implication is a trajectory reversal upon rejection, which is undesirable when interpreting GHMC as thermostated molecular dynamics. We show that a modified detailed balance condition can be used to implement GHMC without momentum flips. The same modification can be applied to the generalized shadow hybrid Monte Carlo (GSHMC) method. Numerical results indicate that GHMC/GSHMC implementations with momentum flip display favorable behavior in terms of sampling efficiency, i.e., the traditional implementations with momentum flip retain the advantage of a higher acceptance rate and faster decorrelation of Monte Carlo samples. The difference is more pronounced for GHMC. We also numerically investigate the behavior of the GHMC method as a Langevin-type thermostat. We find that the GHMC method without momentum flip interferes less with the underlying stochastic molecular dynamics in terms of autocorrelation functions, and we find it to be preferred over the GHMC method with momentum flip. The same finding applies to GSHMC.
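The flip variant of GHMC can be sketched on a 1D standard-Gaussian target: partial momentum refreshment, a leapfrog proposal, Metropolis correction, and momentum negation on rejection, as the standard detailed balance condition requires. The paper's modified condition, which removes the flip, is not implemented here; all parameters are illustrative.

```python
import math, random

random.seed(3)

def ghmc_chain(n=20_000, dt=0.5, steps=5, phi=0.7):
    """GHMC sampling of N(0,1): H(x,p) = (x^2 + p^2)/2."""
    x, p = 0.0, random.gauss(0.0, 1.0)
    samples = []
    for _ in range(n):
        # partial momentum refreshment with mixing angle phi
        p = math.cos(phi) * p + math.sin(phi) * random.gauss(0.0, 1.0)
        x0, p0 = x, p
        h0 = 0.5 * (x * x + p * p)
        p -= 0.5 * dt * x                    # leapfrog: half kick ...
        for _ in range(steps):
            x += dt * p                      # ... drift ...
            p -= dt * x                      # ... full kick
        p += 0.5 * dt * x                    # undo half of the last kick
        h1 = 0.5 * (x * x + p * p)
        if random.random() >= math.exp(min(0.0, h0 - h1)):
            x, p = x0, -p0                   # reject: restore x, flip momentum
        samples.append(x)
    return samples

xs = ghmc_chain()
mean = sum(xs) / len(xs)
var = sum((v - mean) ** 2 for v in xs) / (len(xs) - 1)
```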
A comparison of the Monte Carlo and the flux gradient method for atmospheric diffusion
Lange, R.
1990-05-01
In order to model the dispersal of atmospheric pollutants in the planetary boundary layer, various methods of parameterizing turbulent diffusion have been employed. The purpose of this paper is to use a three-dimensional particle-in-cell transport and diffusion model to compare the Markov chain (Monte Carlo) method of statistical particle diffusion with the deterministic flux gradient (K-theory) method. The two methods are heavily used in the study of atmospheric diffusion under complex conditions, with the Monte Carlo method gaining in popularity partly because of its more direct application of turbulence parameters. The basis of comparison is a data set from night-time drainage flow tracer experiments performed by the US Department of Energy Atmospheric Studies in Complex Terrain (ASCOT) program at the Geysers geothermal region in northern California. The Atmospheric Diffusion Particle-In-Cell (ADPIC) model used is the main model in the Lawrence Livermore National Laboratory emergency response program: Atmospheric Release Advisory Capability (ARAC). As a particle model, it can simulate diffusion in both the flux gradient and Monte Carlo modes. 9 refs., 6 figs.
Application of the subgroup method to multigroup Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Martin, Nicolas
This thesis is dedicated to the development of a Monte Carlo neutron transport solver based on the subgroup (or multiband) method. In this formalism, cross sections for resonant isotopes are represented in the form of probability tables over the whole energy spectrum. This study tests and validates the approach in lattice physics and criticality-safety applications. The probability table method seems promising since it offers an alternative between the legacy continuous-energy representation and the multigroup method. In the first case, the amount of data involved in continuous-energy Monte Carlo calculations can be very large and tends to slow down the overall computation, although this representation preserves the quality of the physical laws present in the ENDF format. Owing to its low computational cost, the multigroup Monte Carlo approach is usually at the basis of production codes in criticality-safety studies. However, the use of a multigroup representation of the cross sections implies a preliminary calculation to take into account self-shielding effects for resonant isotopes, generally performed by deterministic lattice codes relying on the collision probability method. Using cross-section probability tables over the whole energy range makes it possible to account for self-shielding effects directly, and the method can be employed in both lattice physics and criticality-safety calculations. Several aspects have been thoroughly studied: (1) The consistent computation of probability tables with an energy grid comprising only 295 or 361 groups; the CALENDF moment approach led to probability tables suitable for a Monte Carlo code. (2) The combination of probability table sampling for the energy variable with the delta-tracking rejection technique for the space variable, and its impact on the overall efficiency of the proposed Monte Carlo algorithm. (3) The derivation of a model for taking into account anisotropic
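The delta-tracking rejection technique mentioned above can be sketched in one dimension: flight distances are sampled against a majorant cross section, and tentative collisions are rejected as "virtual" with probability 1 - sigma(x)/sigma_maj, so no integration of the spatially varying cross section is ever needed. The cross-section profile and slab length below are illustrative.

```python
import math, random

random.seed(4)

def delta_tracking_transmission(sigma, sigma_maj, length, n=200_000):
    """Woodcock (delta-tracking) transport through a purely absorbing slab
    with a spatially varying total cross section sigma(x) <= sigma_maj."""
    transmitted = 0
    for _ in range(n):
        x = 0.0
        while True:
            x += random.expovariate(sigma_maj)   # flight to tentative collision
            if x >= length:
                transmitted += 1                 # escaped the slab
                break
            if random.random() < sigma(x) / sigma_maj:
                break                            # real collision (absorbed)
    return transmitted / n

sigma = lambda x: 0.5 + 0.5 * x        # linearly increasing cross section
trans = delta_tracking_transmission(sigma, 1.5, 2.0)
```

The exact answer is exp of minus the integral of sigma over [0, 2], i.e. exp(-2), reproduced without ever evaluating that integral inside the tracking loop.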
Monte Carlo method for photon heating using temperature-dependent optical properties.
Slade, Adam Broadbent; Aguilar, Guillermo
2015-02-01
The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to error in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems where the temperature will greatly vary, such as in the case of laser-thawing of frozen tissues. A numerical simulation was developed that utilizes the Monte Carlo method for photon transport to simulate the thermal response of a system that allows temperature-dependent optical and thermal properties. This was done by combining traditional Monte Carlo photon transport with a heat transfer simulation to provide a feedback loop that selects local properties based on current temperatures, for each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogeneous (but not isothermal) material. Validation of the simulation was done using comparisons to established Monte Carlo simulations using constant properties, and a comparison to the Beer-Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties can vary with temperature. The difference in results between variable-property and constant-property methods for the representative system of laser-heated silicon can become larger than 100 K. This simulation will return more accurate results of optical irradiation absorption in a material which undergoes a large change in temperature. This increased accuracy in simulated results leads to better thermal predictions in living tissues and can provide enhanced planning and improved experimental and procedural outcomes. PMID
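The feedback loop described above can be caricatured in one dimension: each time step, Monte Carlo photons deposit energy voxel by voxel with a local, temperature-dependent absorption coefficient, and the deposited energy then raises the local temperature before the next batch is launched. The linear mu(T) law and every constant below are illustrative, not the paper's silicon or tissue properties.

```python
import math, random

random.seed(13)

def heat_with_variable_mu(n_steps=20, n_photons=10_000, nz=20, dz=0.1,
                          mu0=1.0, c_mu=0.02, c_heat=1e-3):
    """Toy MC photon deposition coupled to a temperature field:
    mu(T) = mu0 * (1 + c_mu*T) is re-evaluated per voxel each step."""
    T = [0.0] * nz
    for _ in range(n_steps):
        dep = [0.0] * nz
        for _ in range(n_photons):
            z = 0
            while z < nz:
                mu = mu0 * (1.0 + c_mu * T[z])       # local property lookup
                if random.random() < 1.0 - math.exp(-mu * dz):
                    dep[z] += 1.0                    # absorbed in this voxel
                    break
                z += 1
        for z in range(nz):
            T[z] += c_heat * dep[z]                  # local heating
    return T

temps = heat_with_variable_mu()
```

As the surface heats up it absorbs more strongly, shielding deeper voxels: the constant-property simulation would miss exactly this self-reinforcing effect.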
Multilevel Monte Carlo methods for computing failure probability of porous media flow systems
NASA Astrophysics Data System (ADS)
Fagerlund, F.; Hellman, F.; Målqvist, A.; Niemi, A.
2016-08-01
We study improvements of the standard and multilevel Monte Carlo method for point evaluation of the cumulative distribution function (failure probability) applied to porous media two-phase flow simulations with uncertain permeability. To illustrate the methods, we study an injection scenario in which we consider the sweep efficiency of the injected phase as the quantity of interest and seek the probability that this quantity is smaller than a critical value. In the sampling procedure, we use computable error bounds on the sweep efficiency functional to identify small subsets of realizations to solve to the highest accuracy by means of what we call selective refinement. We quantify the performance gains possible by using selective refinement in combination with both the standard and multilevel Monte Carlo methods. We also identify issues in the practical implementation of the methods. We conclude that significant savings in computational cost are possible for failure probability estimation in a realistic setting using the selective refinement technique, in combination with both standard and multilevel Monte Carlo.
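The multilevel Monte Carlo telescoping idea can be sketched on a toy problem far simpler than two-phase flow: a coarse-level estimate plus coupled fine-minus-coarse corrections, where each correction pair shares the same Brownian increments so its variance shrinks with refinement. The geometric Brownian motion and all parameters are illustrative; the exact answer is E[S_T] = exp(r*T).

```python
import math, random

random.seed(5)

def euler_gbm(r, sig, T, dW):
    """Euler path of dS = r*S dt + sig*S dW from S(0) = 1."""
    s, h = 1.0, T / len(dW)
    for dw in dW:
        s *= 1.0 + r * h + sig * dw
    return s

def mlmc_estimate(r=0.05, sig=0.2, T=1.0, levels=4, n_per_level=40_000):
    """MLMC for E[S_T]: level 0 plus corrections E[fine - coarse]."""
    total = 0.0
    for l in range(levels + 1):
        nf = 2 ** l                              # fine steps on level l
        acc = 0.0
        for _ in range(n_per_level):
            dW = [random.gauss(0.0, math.sqrt(T / nf)) for _ in range(nf)]
            fine = euler_gbm(r, sig, T, dW)
            if l == 0:
                acc += fine
            else:
                # coarse path reuses pairwise-summed increments (the coupling)
                dWc = [dW[2 * i] + dW[2 * i + 1] for i in range(nf // 2)]
                acc += fine - euler_gbm(r, sig, T, dWc)
        total += acc / n_per_level
    return total

est = mlmc_estimate()
```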
Molecular simulation of shocked materials using the reactive Monte Carlo method.
Brennan, John K; Rice, Betsy M
2002-08-01
We demonstrate the applicability of the reactive Monte Carlo (RxMC) simulation method [J. K. Johnson, A. Z. Panagiotopoulos, and K. E. Gubbins, Mol. Phys. 81, 717 (1994); W. R. Smith and B. Tríska, J. Chem. Phys. 100, 3019 (1994)] for calculating the shock Hugoniot properties of a material. The method does not require interaction potentials that simulate bond breaking or bond formation; it requires only the intermolecular potentials and the ideal-gas partition functions for the reactive species that are present. By performing Monte Carlo sampling of forward and reverse reaction steps, the RxMC method provides information on the chemical equilibria states of the shocked material, including the density of the reactive mixture and the mole fractions of the reactive species. We illustrate the methodology for two simple systems (shocked liquid NO and shocked liquid N2), where we find excellent agreement with experimental measurements. The results show that the RxMC methodology provides an important simulation tool capable of testing models used in current detonation theory predictions. Further applications and extensions of the reactive Monte Carlo method are discussed. PMID:12241148
Numerical simulations of acoustics problems using the direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Hanford, Amanda Danforth
In the current study, real gas effects in the propagation of sound waves are simulated using the direct simulation Monte Carlo method for a wide range of systems. This particle method allows for treatment of acoustic phenomena over a wide range of Knudsen numbers, defined as the ratio of molecular mean free path to wavelength. Continuum models such as the Euler and Navier-Stokes equations break down for Knudsen numbers greater than approximately 0.05. Continuum models also suffer from the inability to simultaneously model nonequilibrium conditions, diatomic or polyatomic molecules, nonlinearity and relaxation effects, and are limited in their range of validity. Therefore, direct simulation Monte Carlo is capable of directly simulating acoustic waves with a level of detail not possible with continuum approaches. The basis of direct simulation Monte Carlo lies within kinetic theory, where representative particles are followed as they move and collide with other particles. A parallel, object-oriented DSMC solver was developed for this problem. Despite excellent parallel efficiency, computation time is considerable. Monatomic gases, gases with internal energy, planetary environments, and amplitude effects spanning a large range of Knudsen number have all been modeled with the same method and compared to existing theory. With the direct simulation method, significant deviations from continuum predictions are observed for high Knudsen number flows.
NASA Astrophysics Data System (ADS)
Nakano, Shinya; Suzuki, Kazue; Kawamura, Kenji; Parrenin, Frederic; Higuchi, Tomoyuki
2015-04-01
A technique for estimating the age-depth relationship and its uncertainty in ice cores has been developed. The age-depth relationship is mainly determined by the accumulation of snow at the site of the ice core and the thinning process due to the horizontal stretching and vertical compression of an ice layer. However, neither the accumulation process nor the thinning process is fully known. In order to appropriately estimate the age as a function of depth, it is crucial to incorporate observational information into a model describing the accumulation and thinning processes. In the proposed technique, the age as a function of depth is estimated from age markers and time series of δ18O data. The estimation is achieved using a method combining a sequential Monte Carlo method and the Markov chain Monte Carlo method, as proposed by Andrieu et al. (2010). In this hybrid method, the posterior distributions for the parameters in the models for the accumulation and thinning processes are computed using the standard Metropolis-Hastings method, while sampling from the posterior distribution for the age-depth relationship is achieved by running a sequential Monte Carlo pass at each iteration of the Metropolis-Hastings method. A sequential Monte Carlo method normally suffers from the degeneracy problem, especially when the number of steps is large. However, when it is combined with the Metropolis-Hastings method, the degeneracy problem can be overcome by collecting a large number of samples over many iterations of the Metropolis-Hastings method. We demonstrate the result obtained by applying the proposed technique to the ice core data from Dome Fuji in Antarctica.
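The sequential Monte Carlo building block of such a hybrid scheme can be sketched with a bootstrap particle filter on a linear-Gaussian toy model, where the exact answer is available from a Kalman filter; resampling at every step is what counteracts the degeneracy problem mentioned above. The state-space model and all parameters below are illustrative, not the ice-core model.

```python
import math, random

random.seed(12)

def simulate_data(a, q, r, T):
    """Observations from x_t = a*x_{t-1} + N(0,q), y_t = x_t + N(0,r)."""
    x, ys = 0.0, []
    for _ in range(T):
        x = a * x + random.gauss(0.0, math.sqrt(q))
        ys.append(x + random.gauss(0.0, math.sqrt(r)))
    return ys

def kalman_means(ys, a, q, r):
    """Exact filtered means for the linear-Gaussian model (reference)."""
    m, P, out = 0.0, 1.0, []
    for y in ys:
        m, P = a * m, a * a * P + q              # predict
        K = P / (P + r)                          # Kalman gain
        m, P = m + K * (y - m), (1.0 - K) * P    # update
        out.append(m)
    return out

def particle_means(ys, a, q, r, n=2000):
    """Bootstrap particle filter: propagate, weight by the likelihood,
    resample every step to counter weight degeneracy."""
    parts = [random.gauss(0.0, 1.0) for _ in range(n)]
    out = []
    for y in ys:
        parts = [a * p + random.gauss(0.0, math.sqrt(q)) for p in parts]
        w = [math.exp(-0.5 * (y - p) ** 2 / r) for p in parts]
        tot = sum(w)
        out.append(sum(p * wi for p, wi in zip(parts, w)) / tot)
        parts = random.choices(parts, weights=w, k=n)   # resampling step
    return out

ys = simulate_data(0.9, 0.25, 0.25, 50)
kf = kalman_means(ys, 0.9, 0.25, 0.25)
pf = particle_means(ys, 0.9, 0.25, 0.25)
max_gap = max(abs(u - v) for u, v in zip(kf, pf))
```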
Stability analysis and time-step limits for a Monte Carlo Compton-scattering method
Densmore, Jeffery D. Warsa, James S. Lowrie, Robert B.
2010-05-20
A Monte Carlo method for simulating Compton scattering in high energy density applications has been presented that models the photon-electron collision kinematics exactly [E. Canfield, W.M. Howard, E.P. Liang, Inverse Comptonization by one-dimensional relativistic electrons, Astrophys. J. 323 (1987) 565]. However, implementing this technique typically requires an explicit evaluation of the material temperature, which can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and develop two time-step limits that avoid undesirable behavior. The first time-step limit prevents instabilities, while the second, more restrictive time-step limit avoids both instabilities and nonphysical oscillations. With a set of numerical examples, we demonstrate the efficacy of these time-step limits.
A step beyond the Monte Carlo method in economics: Application of multivariate normal distribution
NASA Astrophysics Data System (ADS)
Kabaivanov, S.; Malechkova, A.; Marchev, A.; Milev, M.; Markovska, V.; Nikolova, K.
2015-11-01
In this paper we discuss the numerical algorithm of Milev-Tagliani [25] used for pricing discrete double barrier options. The problem can be reduced to the accurate valuation of an n-dimensional path integral with the probability density function of a multivariate normal distribution. The efficient solution of this problem with the Milev-Tagliani algorithm is a step beyond the classical application of Monte Carlo to option pricing. We explore continuous and discrete monitoring of the asset path, compare the error of frequently applied quantitative methods such as the Monte Carlo method, and finally analyze the accuracy of the Milev-Tagliani algorithm in light of the results of S. Hong, S. Lee and T. Li [16].
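The classical Monte Carlo baseline that the Milev-Tagliani algorithm improves upon can be sketched directly: simulate the asset at the discrete monitoring dates and discard any path that leaves the barrier corridor. This is a generic toy pricer, not the authors' algorithm; strike, barriers, and market parameters are illustrative.

```python
import math, random

random.seed(6)

def mc_double_barrier_call(s0=100.0, k=100.0, lo=80.0, hi=130.0,
                           r=0.05, sig=0.2, T=0.5, n_mon=25, n_paths=80_000):
    """Discretely monitored double knock-out call under GBM: the payoff
    survives only if every monitored price stays inside (lo, hi)."""
    h = T / n_mon
    drift = (r - 0.5 * sig * sig) * h
    vol = sig * math.sqrt(h)
    acc = 0.0
    for _ in range(n_paths):
        s, alive = s0, True
        for _ in range(n_mon):
            s *= math.exp(drift + vol * random.gauss(0.0, 1.0))
            if not (lo < s < hi):
                alive = False                 # knocked out at a monitoring date
                break
        if alive:
            acc += max(s - k, 0.0)
    return math.exp(-r * T) * acc / n_paths

price = mc_double_barrier_call()
vanilla = mc_double_barrier_call(lo=0.0, hi=float("inf"))  # barriers removed
```

With the barriers removed the estimate should match the Black-Scholes value, and the knock-out price must lie strictly below it.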
Locally activated Monte Carlo method for long-time-scale simulations
NASA Astrophysics Data System (ADS)
Kaukonen, M.; Peräjoki, J.; Nieminen, R. M.; Jungnickel, G.; Frauenheim, Th.
2000-01-01
We present a technique for the structural optimization of atom models to study long time relaxation processes involving different time scales. The method takes advantage of the benefits of both the kinetic Monte Carlo (KMC) and the molecular dynamics simulation techniques. In contrast to ordinary KMC, our method allows for an estimation of a true lower limit for the time scale of a relaxation process. The scheme is fairly general in that neither the typical pathways nor the typical metastable states need to be known prior to the simulation. It is independent of the lattice type and the potential which describes the atomic interactions. It is adopted to study systems with structural and/or chemical inhomogeneity which makes it particularly useful for studying growth and diffusion processes in a variety of physical systems, including crystalline bulk, amorphous systems, surfaces with adsorbates, fluids, and interfaces. As a simple illustration we apply the locally activated Monte Carlo to study hydrogen diffusion in diamond.
Mesh-based Monte Carlo method in time-domain widefield fluorescence molecular tomography
Chen, Jin; Fang, Qianqian
2012-01-01
We evaluated the potential of mesh-based Monte Carlo (MC) method for widefield time-gated fluorescence molecular tomography, aiming to improve accuracy in both shape discretization and photon transport modeling in preclinical settings. An optimized software platform was developed utilizing multithreading and distributed parallel computing to achieve efficient calculation. We validated the proposed algorithm and software by both simulations and in vivo studies. The results establish that the optimized mesh-based Monte Carlo (mMC) method is a computationally efficient solution for optical tomography studies in terms of both calculation time and memory utilization. The open source code, as part of a new release of mMC, is publicly available at http://mcx.sourceforge.net/mmc/. PMID:23224008
Computing the principal eigenelements of some linear operators using a branching Monte Carlo method
Lejay, Antoine; Maire, Sylvain
2008-12-01
In earlier work, we developed a Monte Carlo method to compute the principal eigenvalue of linear operators, based on the simulation of exit times. In this paper, we generalize this approach by showing how to use a branching method to improve the efficiency of simulating large exit times for the purpose of computing eigenvalues. Furthermore, we show that this new method provides a natural estimate of the first eigenfunction of the adjoint operator. Numerical examples of this method are given for the Laplace operator and a homogeneous neutron transport operator.
NASA Astrophysics Data System (ADS)
Liu, Quan; Ramanujam, Nirmala
2007-04-01
A scaling Monte Carlo method has been developed to calculate diffuse reflectance from multilayered media with a wide range of optical properties in the ultraviolet-visible wavelength range. This multilayered scaling method employs the photon trajectory information generated from a single baseline Monte Carlo simulation of a homogeneous medium to scale the exit distance and exit weight of photons for a new set of optical properties in the multilayered medium. The scaling method is particularly suited to simulating diffuse reflectance spectra or creating a Monte Carlo database to extract optical properties of layered media, both of which are demonstrated in this paper. Particularly, it was found that the root-mean-square error (RMSE) between scaled diffuse reflectance, for which the anisotropy factor and refractive index in the baseline simulation were, respectively, 0.9 and 1.338, and independently simulated diffuse reflectance was less than or equal to 5% for source-detector separations from 200 to 1500 μm when the anisotropy factor of the top layer in a two-layered epithelial tissue model was varied from 0.8 to 0.99; in contrast, the RMSE was always less than 5% for all separations (from 0 to 1500 μm) when the anisotropy factor of the bottom layer was varied from 0.7 to 0.99. When the refractive index of either layer in the two-layered tissue model was varied from 1.3 to 1.4, the RMSE was less than 10%. The scaling method can reduce computation time by more than 2 orders of magnitude compared with independent Monte Carlo simulations.
Monte Carlo Methods in Materials Science Based on FLUKA and ROOT
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor
2003-01-01
A comprehensive understanding of mitigation measures for space radiation protection necessarily involves the relevant fields of nuclear physics and particle transport modeling. One method of modeling the interaction of radiation traversing matter is Monte Carlo analysis, a subject that has been evolving since the very advent of nuclear reactors and particle accelerators in experimental physics. Countermeasures for radiation protection from neutrons near nuclear reactors, for example, were an early application, and Monte Carlo methods were quickly adapted to this general field of investigation. The project discussed here is concerned with taking the latest tools and technology in Monte Carlo analysis and adapting them to space applications such as radiation shielding design for spacecraft, as well as investigating how next-generation Monte Carlos can complement the existing analytical methods currently used by NASA. We have chosen to employ the Monte Carlo program known as FLUKA (a legacy acronym based on the German for FLUctuating KAscade) to simulate all of the particle transport, and the CERN-developed graphical-interface object-oriented analysis software called ROOT. One aspect of space radiation analysis for which Monte Carlos are particularly suited is the study of secondary radiation produced as albedoes in the vicinity of the structural geometry involved. This broad goal of simulating space radiation transport through the relevant materials employing the FLUKA code necessarily requires the addition of the capability to simulate all heavy-ion interactions from 10 MeV/A up to the highest conceivable energies. For all energies above 3 GeV/A the Dual Parton Model (DPM) is currently used, although the possible improvement of the DPMJET event generator for energies 3-30 GeV/A is being considered. One of the major tasks still facing us is the provision for heavy-ion interactions below 3 GeV/A. The ROOT interface is being developed in conjunction with the
A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems
Keady, K P; Brantley, P
2010-03-04
Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model
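The implicit capture (survival biasing) technique mentioned above can be sketched in a 1D rod model with forward scattering, where the exact transmission is exp(-sigma_a * length): instead of killing a particle on absorption, its weight is multiplied by the scattering probability, and low-weight particles are terminated by Russian roulette. All cross sections are illustrative.

```python
import math, random

random.seed(7)

def rod_transmission(sigma_a, sigma_s, length, n=100_000, implicit=False):
    """Estimate transmission either analogically (kill on absorption) or
    with implicit capture (absorption as weight loss) plus Russian roulette."""
    sigma_t = sigma_a + sigma_s
    scores = []
    for _ in range(n):
        x, w = 0.0, 1.0
        while True:
            x += random.expovariate(sigma_t)
            if x >= length:
                scores.append(w)                 # transmitted with weight w
                break
            if implicit:
                w *= sigma_s / sigma_t           # absorb a fraction of weight
                if w < 0.01:                     # Russian roulette cutoff
                    if random.random() < 0.5:
                        w *= 2.0                 # survivor: weight doubled
                    else:
                        scores.append(0.0)       # killed
                        break
            elif random.random() < sigma_a / sigma_t:
                scores.append(0.0)               # absorbed (analog)
                break
    m = sum(scores) / n
    var = sum((s - m) ** 2 for s in scores) / (n - 1)
    return m, var

t_analog, v_analog = rod_transmission(0.5, 1.5, 2.0)
t_impl, v_impl = rod_transmission(0.5, 1.5, 2.0, implicit=True)
```

Both estimators are unbiased for exp(-1), but the implicit-capture variance is markedly smaller, which is the efficiency gain the abstract refers to.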
Inverse Monte Carlo method in a multilayered tissue model for diffuse reflectance spectroscopy.
Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas
2012-04-01
Model based data analysis of diffuse reflectance spectroscopy data enables the estimation of optical and structural tissue parameters. The aim of this study was to present an inverse Monte Carlo method based on spectra from two source-detector distances (0.4 and 1.2 mm), using a multilayered tissue model. The tissue model variables include geometrical properties, light scattering properties, tissue chromophores such as melanin and hemoglobin, oxygen saturation and average vessel diameter. The method utilizes a small set of presimulated Monte Carlo data for combinations of different levels of epidermal thickness and tissue scattering. The path length distributions in the different layers are stored and the effect of the other parameters is added in the post-processing. The accuracy of the method was evaluated using Monte Carlo simulations of tissue-like models containing discrete blood vessels, evaluating blood tissue fraction and oxygenation. It was also compared to a homogeneous model. The multilayer model performed better than the homogeneous model and all tissue parameters significantly improved spectral fitting. Recorded in vivo spectra were fitted well at both distances, which we previously found was not possible with a homogeneous model. No absolute intensity calibration is needed and the algorithm is fast enough for real-time processing. PMID:22559695
Quasi-Monte Carlo methods for lattice systems: A first look
NASA Astrophysics Data System (ADS)
Jansen, K.; Leovey, H.; Ammon, A.; Griewank, A.; Müller-Preussker, M.
2014-03-01
We investigate the applicability of quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this behavior for certain problems to N^(-1), or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling. Catalogue identifier: AERJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERJ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence version 3 No. of lines in distributed program, including test data, etc.: 67759 No. of bytes in distributed program, including test data, etc.: 2165365 Distribution format: tar.gz Programming language: C and C++. Computer: PC. Operating system: Tested on GNU/Linux, should be portable to other operating systems with minimal efforts. Has the code been vectorized or parallelized?: No RAM: The memory usage directly scales with the number of samples and dimensions: Bytes used = “number of samples” × “number of dimensions” × 8 Bytes (double precision). Classification: 4.13, 11.5, 23. External routines: FFTW 3 library (http://www.fftw.org) Nature of problem: Certain physical models formulated as a quantum field theory through the Feynman path integral, such as quantum chromodynamics, require a non-perturbative treatment of the path integral. The only known approach that achieves this is the lattice regularization. In this formulation the path integral is discretized to a finite, but very high dimensional integral. So far only Monte
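The contrast between the N^(-1/2) Monte Carlo error and the roughly N^(-1) quasi-Monte Carlo error can be sketched with a van der Corput low-discrepancy sequence on a trivial 1D integral (this is an illustration of the scaling, not the lattice path integral of the paper):

```python
import math, random

random.seed(8)

def radical_inverse(i, base=2):
    """Van der Corput radical inverse: a 1D low-discrepancy sequence."""
    q, bk = 0.0, 1.0 / base
    while i > 0:
        q += (i % base) * bk
        i //= base
        bk /= base
    return q

def integrate(f, n, points):
    return sum(f(x) for x in points) / n

n = 4096
f = lambda x: x * x                      # exact integral on [0,1] is 1/3
mc = integrate(f, n, (random.random() for _ in range(n)))
qmc = integrate(f, n, (radical_inverse(i) for i in range(n)))
mc_err, qmc_err = abs(mc - 1 / 3), abs(qmc - 1 / 3)
```

At n = 4096 the plain MC error is typically of order 5e-3, while the deterministic van der Corput points land within about 1/(2n) of the exact value.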
NASA Astrophysics Data System (ADS)
Ebru Ermis, Elif; Celiktas, Cuneyt
2015-07-01
Calculations of gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of the FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen for the calculations. The calculated coefficients were also compared with the National Institute of Standards and Technology (NIST) values. The results obtained with this method were in close accordance with the NIST values. It was concluded from the study that the FLUKA MC method can be an alternative way to calculate the gamma-ray mass attenuation coefficients of detector materials.
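Mass attenuation coefficients such as those computed above enter detector calculations through the Beer-Lambert law, I/I0 = exp(-(μ/ρ)·ρ·t). A minimal sketch follows; the μ/ρ value for NaI near 662 keV is approximate and used purely for illustration (tabulated values are available from NIST's XCOM database).

```python
import math

def transmitted_fraction(mu_rho_cm2_g, density_g_cm3, thickness_cm):
    """Beer-Lambert attenuation: I/I0 = exp(-(mu/rho) * rho * t)."""
    return math.exp(-mu_rho_cm2_g * density_g_cm3 * thickness_cm)

# Illustrative values only: mu/rho for NaI near 662 keV is roughly
# 0.08 cm^2/g (see NIST XCOM for tabulated data); density is 3.67 g/cm^3.
frac = transmitted_fraction(0.08, 3.67, 2.54)  # 1-inch crystal
print(f"transmitted fraction: {frac:.3f}")
```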
New approach to dynamical Monte Carlo methods: application to an epidemic model
NASA Astrophysics Data System (ADS)
Aiello, O. E.; da Silva, M. A. A.
2003-09-01
In this work we introduce a new approach to dynamical Monte Carlo methods to simulate Markovian processes. We apply this approach to formulate and study a generalized SIRS epidemic model. The results are in excellent agreement with the fourth-order Runge-Kutta method in a region of deterministic solution. We also show that purely local interactions reproduce a Poissonian-like process at the mesoscopic level. The simulations for this case are checked self-consistently using a stochastic version of the Euler method.
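A standard dynamical Monte Carlo scheme for a Markovian epidemic is Gillespie's direct method; the sketch below simulates a plain SIR process (not the authors' generalized SIRS model), with illustrative rates and population size.

```python
import random

def sir_gillespie(s, i, r, beta, gamma, t_max, rng):
    """Stochastic SIR via Gillespie's direct method.
    Events: infection (rate beta*S*I/N) and recovery (rate gamma*I)."""
    n = s + i + r
    t = 0.0
    while i > 0 and t < t_max:
        rate_inf = beta * s * i / n
        rate_rec = gamma * i
        total = rate_inf + rate_rec
        t += rng.expovariate(total)          # waiting time to next event
        if rng.random() < rate_inf / total:  # choose which event fires
            s, i = s - 1, i + 1
        else:
            i, r = i - 1, r + 1
    return s, i, r

rng = random.Random(42)
s, i, r = sir_gillespie(990, 10, 0, beta=0.3, gamma=0.1, t_max=200.0, rng=rng)
print(s, i, r)
```

Each event changes exactly one pair of compartments, so the total population is conserved throughout the simulation.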
NASA Astrophysics Data System (ADS)
Plotnikov, M. Yu.; Shkarupa, E. V.
2015-11-01
Presently, the direct simulation Monte Carlo (DSMC) method is widely used for solving rarefied gas dynamics problems. As applied to steady-state problems, a feature of this method is the use of dependent sample values of random variables for the calculation of the macroparameters of gas flows. A new combined approach to estimating the statistical error of the method is proposed that requires practically no additional computations and is applicable for any degree of probabilistic dependence of the sample values. Features of the proposed approach are analyzed theoretically and numerically. The approach is tested using the classical Fourier problem and the problem of supersonic flow of a rarefied gas through a permeable obstacle.
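A widely used baseline for error estimation with dependent samples (not the combined estimator proposed in the abstract) is the batch-means method. The sketch below applies it to a synthetic AR(1) series, where the naive i.i.d. error estimate is badly optimistic.

```python
import numpy as np

def batch_means_se(x, n_batches=50):
    """Standard error of the mean via batch means: split the series into
    contiguous batches; the batch averages are nearly independent when the
    batches are longer than the correlation time."""
    x = np.asarray(x)
    m = len(x) // n_batches
    means = x[: m * n_batches].reshape(n_batches, m).mean(axis=1)
    return means.std(ddof=1) / np.sqrt(n_batches)

# Strongly correlated AR(1) series (phi = 0.9) as a stand-in for
# dependent DSMC samples.
rng = np.random.default_rng(1)
n, phi = 20000, 0.9
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

naive_se = x.std(ddof=1) / np.sqrt(n)   # ignores correlation: too small
corr_se = batch_means_se(x)
print(naive_se, corr_se)
```

For this correlation strength the batch-means estimate is several times larger than the naive one, which is the bias the combined approach in the abstract is designed to avoid without extra computation.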
Electron density of states of Fe-based superconductors: Quantum trajectory Monte Carlo method
NASA Astrophysics Data System (ADS)
Kashurnikov, V. A.; Krasavin, A. V.; Zhumagulov, Ya. V.
2016-03-01
The spectral and total electron densities of states in two-dimensional FeAs clusters, which simulate iron-based superconductors, have been calculated using the generalized quantum Monte Carlo algorithm within the full two-orbital model. Spectra have been reconstructed by solving the integral equation relating the Matsubara Green's function and spectral density by the method combining the gradient descent and Monte Carlo algorithms. The calculations have been performed for clusters with dimensions up to 10 × 10 FeAs cells. The profiles of the Fermi surface for the entire Brillouin zone have been presented in the quasiparticle approximation. Data for the total density of states near the Fermi level have been obtained. The effect of the interaction parameter, size of the cluster, and temperature on the spectrum of excitations has been studied.
Takahashi, F; Endo, A
2007-01-01
A system utilising radiation transport codes has been developed to derive accurate dose distributions in a human body for radiological accidents. A suitable model is quite essential for a numerical analysis. Therefore, two tools were developed to set up a 'problem-dependent' input file, defining a radiation source and an exposed person, to simulate the radiation transport in an accident with the Monte Carlo calculation codes MCNP and MCNPX. For both tools, the necessary resources are defined interactively on an ordinary personal computer. The tools prepare human body and source models described in the input file format of the employed Monte Carlo codes. The tools were validated for dose assessment in comparison with a past criticality accident and a hypothesized exposure. PMID:17510203
Visual improvement for bad handwriting based on Monte-Carlo method
NASA Astrophysics Data System (ADS)
Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua
2014-03-01
A visual improvement algorithm based on Monte Carlo simulation is proposed in this paper to enhance the visual effect of bad handwriting. The improvement process uses a well-designed typeface to optimize the bad handwriting image. In this process, a series of linear operators for image transformation are defined to transform the typeface image so that it approaches the handwriting image, and the specific parameters of the linear operators are estimated by the Monte Carlo method. Visual improvement experiments illustrate that the proposed algorithm can effectively enhance the visual effect of a handwriting image while maintaining the original handwriting features, such as tilt, stroke order and drawing direction. The proposed visual improvement algorithm has considerable potential for application on tablet computers and the mobile Internet to improve the user experience of handwriting input.
Analysis of single Monte Carlo methods for prediction of reflectance from turbid media
Martinelli, Michele; Gardner, Adam; Cuccia, David; Hayakawa, Carole; Spanier, Jerome; Venugopalan, Vasan
2011-01-01
Starting from the radiative transport equation we derive the scaling relationships that enable a single Monte Carlo (MC) simulation to predict the spatially- and temporally-resolved reflectance from homogeneous semi-infinite media with arbitrary scattering and absorption coefficients. This derivation shows that a rigorous application of this single Monte Carlo (sMC) approach requires the rescaling to be done individually for each photon biography. We examine the accuracy of the sMC method when processing simulations on an individual photon basis and also demonstrate the use of adaptive binning and interpolation using non-uniform rational B-splines (NURBS) to achieve order of magnitude reductions in the relative error as compared to the use of uniform binning and linear interpolation. This improved implementation for sMC simulation serves as a fast and accurate solver to address both forward and inverse problems and is available for use at http://www.virtualphotonics.org/. PMID:21996904
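The key mechanic of the single Monte Carlo approach, rescaling per photon biography, can be sketched in toy form: run one conservative (μa = 0) simulation, store each exiting photon's total path length L, then obtain the reflectance for any absorption coefficient by reweighting with exp(-μa·L). The geometry and scattering model below are deliberately crude and purely illustrative; the authors' implementation is spatially and temporally resolved.

```python
import math
import random

def simulate_pathlengths(n_photons, mu_s, rng=None):
    """Toy conservative (mu_a = 0) random walk in a semi-infinite medium:
    record the total path length of each photon that exits the surface.
    The direction sampling is deliberately simplistic; it only
    illustrates the bookkeeping needed for single-MC rescaling."""
    rng = rng or random.Random(0)
    paths = []
    for _ in range(n_photons):
        z, length = 0.0, 0.0
        while True:
            step = -math.log(1.0 - rng.random()) / mu_s  # free path
            length += step
            z += step * (rng.random() * 2 - 1)           # crude cosine
            if z <= 0.0:                                 # exits surface
                paths.append(length)
                break
            if length > 100.0:                           # kill deep walkers
                break
    return paths

n = 5000
paths = simulate_pathlengths(n, mu_s=10.0)

def reflectance(paths, mu_a, n_photons):
    # Rescale the single mu_a = 0 simulation to any absorption coefficient.
    return sum(math.exp(-mu_a * L) for L in paths) / n_photons

r_low, r_high = reflectance(paths, 0.01, n), reflectance(paths, 1.0, n)
print(r_low, r_high)
```

One stored set of biographies yields reflectance at every absorption level, which is exactly the economy the sMC method exploits.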
Hey, Jody; Nielsen, Rasmus
2007-01-01
In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231
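The Markov chain Monte Carlo machinery the abstract relies on reduces, in its simplest form, to random-walk Metropolis sampling of a posterior density. The sketch below targets a toy posterior for a single mean parameter with illustrative data; the integration over genealogies in the actual method is far more involved.

```python
import math
import random

def metropolis(log_post, x0, n_steps, step, rng):
    """Random-walk Metropolis: propose x' = x + N(0, step); accept with
    probability min(1, post(x') / post(x))."""
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if math.log(1.0 - rng.random()) < lpp - lp:
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy target: posterior of a mean with a flat prior and unit-variance data.
data = [1.2, 0.8, 1.5, 0.9, 1.1]
log_post = lambda mu: -0.5 * sum((d - mu) ** 2 for d in data)

rng = random.Random(7)
chain = metropolis(log_post, 0.0, 20000, 0.5, rng)
post_mean = sum(chain[5000:]) / len(chain[5000:])
print(post_mean)  # near the sample mean of the data
```

After discarding burn-in, chain averages approximate posterior expectations, which is what allows the resulting density to be treated as a likelihood for nested-model tests.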
Hey, Jody; Nielsen, Rasmus
2007-02-20
In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231
Analysis of single Monte Carlo methods for prediction of reflectance from turbid media.
Martinelli, Michele; Gardner, Adam; Cuccia, David; Hayakawa, Carole; Spanier, Jerome; Venugopalan, Vasan
2011-09-26
Starting from the radiative transport equation we derive the scaling relationships that enable a single Monte Carlo (MC) simulation to predict the spatially- and temporally-resolved reflectance from homogeneous semi-infinite media with arbitrary scattering and absorption coefficients. This derivation shows that a rigorous application of this single Monte Carlo (sMC) approach requires the rescaling to be done individually for each photon biography. We examine the accuracy of the sMC method when processing simulations on an individual photon basis and also demonstrate the use of adaptive binning and interpolation using non-uniform rational B-splines (NURBS) to achieve order of magnitude reductions in the relative error as compared to the use of uniform binning and linear interpolation. This improved implementation for sMC simulation serves as a fast and accurate solver to address both forward and inverse problems and is available for use at http://www.virtualphotonics.org/. PMID:21996904
Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method
NASA Astrophysics Data System (ADS)
Chen, Chaobin; Huang, Qunying; Wu, Yican
2005-04-01
A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of x-ray beam and electron beam to the proportions of elements and the mass densities of the materials used to express the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone to calculate the dose distribution with Monte Carlo method. The effects of the calibration curves established by using various CT scanners are not clinically significant based on our investigation. The deviation from the values of cumulative dose volume histogram derived from CT-based voxel phantoms is less than 1% for the given target.
A general method for spatially coarse-graining Metropolis Monte Carlo simulations onto a lattice
NASA Astrophysics Data System (ADS)
Liu, Xiao; Seider, Warren D.; Sinno, Talid
2013-03-01
A recently introduced method for coarse-graining standard continuous Metropolis Monte Carlo simulations of atomic or molecular fluids onto a rigid lattice of variable scale [X. Liu, W. D. Seider, and T. Sinno, Phys. Rev. E 86, 026708 (2012)], 10.1103/PhysRevE.86.026708 is further analyzed and extended. The coarse-grained Metropolis Monte Carlo technique is demonstrated to be highly consistent with the underlying full-resolution problem using a series of detailed comparisons, including vapor-liquid equilibrium phase envelopes and spatial density distributions for the Lennard-Jones argon and simple point charge water models. In addition, the principal computational bottleneck associated with computing a coarse-grained interaction function for evolving particle positions on the discretized domain is addressed by the introduction of new closure approximations. In particular, it is shown that the coarse-grained potential, which is generally a function of temperature and coarse-graining level, can be computed at multiple temperatures and scales using a single set of free energy calculations. The computational performance of the method relative to standard Monte Carlo simulation is also discussed.
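The full-resolution sampler underlying such coarse-graining is the standard Metropolis algorithm. As a generic stand-in (the paper treats continuous Lennard-Jones and SPC water models, not a spin model), the sketch below runs Metropolis sweeps on a small 2D Ising lattice at a low temperature.

```python
import math
import random

def metropolis_sweep(spins, L, beta, rng):
    """One Metropolis sweep over an L x L Ising lattice with periodic
    boundaries: flip a spin if it lowers the energy, otherwise flip
    with probability exp(-beta * dE)."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nb
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] *= -1

def energy(spins, L):
    # H = -sum over nearest-neighbor pairs (count right and down once).
    return -sum(spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
                for i in range(L) for j in range(L))

rng = random.Random(3)
L = 16
spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
e0 = energy(spins, L)
for _ in range(300):              # beta = 1.0: well below T_c, ordered phase
    metropolis_sweep(spins, L, 1.0, rng)
print(e0, energy(spins, L))
```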
Markov chain Monte Carlo methods for statistical analysis of RF photonic devices.
Piels, Molly; Zibar, Darko
2016-02-01
The microwave reflection coefficient is commonly used to characterize the impedance of high-speed optoelectronic devices. Error and uncertainty in equivalent circuit parameters measured using this data are systematically evaluated. The commonly used nonlinear least-squares method for estimating uncertainty is shown to give unsatisfactory and incorrect results due to the nonlinear relationship between the circuit parameters and the measured data. Markov chain Monte Carlo methods are shown to provide superior results, both for individual devices and for assessing within-die variation. PMID:26906783
Temperature-extrapolation method for Implicit Monte Carlo - Radiation hydrodynamics calculations
McClarren, R. G.; Urbatsch, T. J.
2013-07-01
We present a method for implementing temperature extrapolation in Implicit Monte Carlo solutions to radiation hydrodynamics problems. The method is based on a BDF-2 type integration to estimate a change in material temperature over a time step. We present results for radiation only problems in an infinite medium and for a 2-D Cartesian hohlraum problem. Additionally, radiation hydrodynamics simulations are presented for an RZ hohlraum problem and a related 3D problem. Our results indicate that improvements in noise and general behavior are possible. We present considerations for future investigations and implementations. (authors)
Quantum Monte Carlo method using phase-free random walks with Slater determinants.
Zhang, Shiwei; Krakauer, Henry
2003-04-01
We develop a quantum Monte Carlo method for many fermions using random walks in the space of Slater determinants. An approximate approach is formulated with a trial wave function |Psi(T)> to control the phase problem. Using a plane-wave basis and nonlocal pseudopotentials, we apply the method to Be, Si, and P atoms and dimers, and to bulk Si supercells. Single-determinant wave functions from density functional theory calculations were used as |Psi(T)> with no additional optimization. The calculated binding energies of dimers and cohesive energy of bulk Si are in excellent agreement with experiments and are comparable to the best existing theoretical results. PMID:12689312
McClarren, Ryan G. Urbatsch, Todd J.
2009-09-01
In this paper we develop a robust implicit Monte Carlo (IMC) algorithm based on more accurately updating the linearized equilibrium radiation energy density. The method does not introduce oscillations in the solution and has the same limit as Δt → ∞ as the standard Fleck and Cummings IMC method. Moreover, the approach we introduce can be trivially added to current implementations of IMC by changing the definition of the Fleck factor. Using this new method we develop an adaptive scheme that uses either standard IMC or the modified method, basing the adaptation on a zero-dimensional problem solved in each cell. Numerical results demonstrate that the new method can avoid the nonphysical overheating that occurs in standard IMC when the time step is large. The method also leads to decreased noise in the material temperature at the cost of a potential increase in the radiation temperature noise.
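The Fleck factor mentioned above has the standard Fleck-Cummings form f = 1/(1 + αβcσΔt); a minimal sketch with illustrative (not paper-specific) numbers shows its limiting behavior in Δt.

```python
def fleck_factor(alpha, beta, c, sigma_a, dt):
    """Standard Fleck-Cummings factor: f = 1 / (1 + alpha*beta*c*sigma_a*dt).
    It interpolates between explicit (f -> 1 as dt -> 0) and implicit
    (f -> 0 as dt -> infinity) treatment of absorption/re-emission.
    alpha is the time-centering parameter; beta = 4 a T^3 / (rho c_v)."""
    return 1.0 / (1.0 + alpha * beta * c * sigma_a * dt)

# Illustrative numbers only (not taken from the paper).
c = 3.0e10          # speed of light, cm/s
f_small = fleck_factor(1.0, 2.0, c, 0.5, 1.0e-12)  # tiny time step
f_large = fleck_factor(1.0, 2.0, c, 0.5, 1.0e-6)   # large time step
print(f_small, f_large)
```

Modifying how this single scalar is defined is precisely why the authors' scheme can be dropped into existing IMC implementations.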
An implicit Monte Carlo method for simulation of impurity transport in divertor plasma
Suzuki, Akiko; Hayashi, Nobuhiko; Hatayama, Akiyoshi
1997-02-01
A new 'implicit' Monte Carlo (IMC) method has been developed to simulate ionization and recombination processes of impurity ions in divertor plasmas. The IMC method takes into account many ionization and recombination processes during a time step Δt. The time step is not limited by the condition Δt ≪ τ_min (τ_min: the minimum characteristic time of the atomic processes), which is forced to be adopted in conventional Monte Carlo methods. We incorporate this method into a one-dimensional impurity transport model. In this transport calculation, impurity ions are followed with a time step about 10 times larger than that used in conventional methods. The average charge state of impurities, ⟨Z⟩, and the radiative cooling rate, L(T_e), are calculated at the electron temperature T_e in divertor plasmas. These results are compared with those obtained from the simple noncoronal model. 10 refs., 7 figs.
Mcclarren, Ryan G; Urbatsch, Todd J
2008-01-01
In this note we develop a robust implicit Monte Carlo (IMC) algorithm based on more accurately updating the linearized equilibrium radiation energy density. The method does not introduce oscillations in the solution and has the same limit as Δt → ∞ as the standard Fleck and Cummings IMC method. Moreover, the approach we introduce can be trivially added to current implementations of IMC by changing the definition of the Fleck factor. Using this new method we develop an adaptive scheme that uses either standard IMC or the modified method, basing the adaptation on a zero-dimensional problem solved in each cell. Numerical results demonstrate that the new method both alleviates the nonphysical overheating that occurs in standard IMC when the time step is large and significantly diminishes the statistical noise in the solution.
An Automated, Multi-Step Monte Carlo Burnup Code System.
Energy Science and Technology Software Center (ESTSC)
2003-07-14
Version 02 MONTEBURNS Version 2 calculates coupled neutronic/isotopic results for nuclear systems and produces a large number of criticality and burnup results based on various material feed/removal specifications, power(s), and time intervals. MONTEBURNS is a fully automated tool that links the LANL MCNP Monte Carlo transport code with a radioactive decay and burnup code. Highlights on changes to Version 2 are listed in the transmittal letter. Along with other minor improvements in MONTEBURNS Version 2, the option was added to use CINDER90 instead of ORIGEN2 as the depletion/decay part of the system. CINDER90 is a multi-group depletion code developed at LANL and is not currently available from RSICC. This MONTEBURNS release was tested with various combinations of CCC-715/MCNPX 2.4.0, CCC-710/MCNP5, CCC-700/MCNP4C, CCC-371/ORIGEN2.2, ORIGEN2.1 and CINDER90. Perl is required software and is not included in this distribution. MCNP, ORIGEN2, and CINDER90 are not included.
An Automated, Multi-Step Monte Carlo Burnup Code System.
TRELLUE, HOLLY R.
2003-07-14
Version 02 MONTEBURNS Version 2 calculates coupled neutronic/isotopic results for nuclear systems and produces a large number of criticality and burnup results based on various material feed/removal specifications, power(s), and time intervals. MONTEBURNS is a fully automated tool that links the LANL MCNP Monte Carlo transport code with a radioactive decay and burnup code. Highlights on changes to Version 2 are listed in the transmittal letter. Along with other minor improvements in MONTEBURNS Version 2, the option was added to use CINDER90 instead of ORIGEN2 as the depletion/decay part of the system. CINDER90 is a multi-group depletion code developed at LANL and is not currently available from RSICC. This MONTEBURNS release was tested with various combinations of CCC-715/MCNPX 2.4.0, CCC-710/MCNP5, CCC-700/MCNP4C, CCC-371/ORIGEN2.2, ORIGEN2.1 and CINDER90. Perl is required software and is not included in this distribution. MCNP, ORIGEN2, and CINDER90 are not included.
NASA Astrophysics Data System (ADS)
Sanattalab, Ehsan; SalmanOgli, Ahmad; Piskin, Erhan
2016-04-01
We investigated tumor-targeted nanoparticles that influence heat generation. We suppose that all nanoparticles are fully functionalized and can find the target using active targeting methods. Unlike commonly used methods, such as chemotherapy and radiotherapy, the treatment procedure proposed in this study is purely noninvasive, which is considered to be a significant merit. It is found that the localized heat generation due to targeted nanoparticles is significantly higher than in other areas. By engineering the optical properties of the nanoparticles, including the scattering and absorption coefficients and the asymmetry factor (mean cosine of the scattering angle), the heat generated in the tumor area reaches a critical level that can burn the targeted tumor. The amount of heat generated by inserting smart agents, due to surface plasmon resonance, will be remarkably high. The light-matter interactions and the trajectories of incident photons upon targeted tissues are simulated by Mie theory and the Monte Carlo method, respectively. The Monte Carlo method is a statistical approach by which the photon trajectories in the simulation area can be accurately probed.
Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian Inverse Problems
NASA Astrophysics Data System (ADS)
Lan, Shiwei; Bui-Thanh, Tan; Christie, Mike; Girolami, Mark
2016-03-01
The Bayesian approach to Inverse Problems relies predominantly on Markov Chain Monte Carlo methods for posterior inference. The typical nonlinear concentration of posterior measure observed in many such Inverse Problems presents severe challenges to existing simulation based inference methods. Motivated by these challenges, the exploitation of local geometric information in the form of covariant gradients, metric tensors, Levi-Civita connections, and local geodesic flows has been introduced to more effectively locally explore the configuration space of the posterior measure. However, obtaining such geometric quantities usually requires extensive computational effort, and despite their effectiveness this cost limits the applicability of these geometrically based Monte Carlo methods. In this paper we explore one way to address this issue by constructing an emulator of the model from which all geometric objects can be obtained in a much more computationally feasible manner. The main concept is to approximate the geometric quantities using a Gaussian Process emulator which is conditioned on a carefully chosen design set of configuration points, which also determines the quality of the emulator. To this end we propose the use of statistical experiment design methods to refine a potentially arbitrarily initialized design online without destroying the convergence of the resulting Markov chain to the desired invariant measure. The practical examples considered in this paper demonstrate the significant improvement possible in terms of computational loading, suggesting this is a promising avenue of further development.
Parsons, T.
2008-01-01
Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques (e.g., Ellsworth et al., 1999). In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means (e.g., NIST/SEMATECH, 2006). For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
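The repeated-draws strategy described above can be sketched generically: draw candidate recurrence-PDF parameters from wide ranges, score each draw against the observed intervals by likelihood, and rank. The sketch below uses a lognormal recurrence PDF and synthetic intervals; the parameter ranges and data are illustrative, not the paper's.

```python
import math
import random

def log_likelihood(intervals, mu, sigma):
    """Log-likelihood of the intervals under a lognormal recurrence PDF."""
    ll = 0.0
    for t in intervals:
        z = (math.log(t) - mu) / sigma
        ll += -math.log(t * sigma * math.sqrt(2 * math.pi)) - 0.5 * z * z
    return ll

# Short synthetic paleoseismic record (years between events).
intervals = [140.0, 210.0, 95.0, 180.0, 160.0]

rng = random.Random(11)
ranked = []
for _ in range(20000):  # repeated draws over wide parameter ranges
    mu = rng.uniform(math.log(50), math.log(500))
    sigma = rng.uniform(0.05, 1.5)
    ranked.append((log_likelihood(intervals, mu, sigma), mu, sigma))
ranked.sort(reverse=True)

best_ll, best_mu, best_sigma = ranked[0]
print(math.exp(best_mu), best_sigma)  # most likely median recurrence, spread
```

Keeping the whole ranked list, rather than only the top entry, is what allows the parameter uncertainty to be carried forward into hazard calculations.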
Parsons, Tom
2008-01-01
Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques [e.g., Ellsworth et al., 1999]. In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means [e.g., NIST/SEMATECH, 2006]. For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
NASA Astrophysics Data System (ADS)
Parsons, Tom
2008-03-01
Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques (e.g., Ellsworth et al., 1999). In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means (e.g., NIST/SEMATECH, 2006). For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
NASA Astrophysics Data System (ADS)
Bianco, F. B.; Modjaz, M.; Oh, S. M.; Fierroz, D.; Liu, Y. Q.; Kewley, L.; Graur, O.
2016-07-01
We present the open-source Python code pyMCZ that determines oxygen abundance and its distribution from strong emission lines in the standard metallicity calibrators, based on the original IDL code of Kewley and Dopita (2002) with updates from Kewley and Ellison (2008), and expanded to include more recently developed calibrators. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios (referred to as indicators) in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo sampling, better characterizes the statistical oxygen abundance confidence region including the effect due to the propagation of observational uncertainties. These uncertainties are likely to dominate the error budget in the case of distant galaxies, hosts of cosmic explosions. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 15 metallicity calibrators simultaneously, as well as for E(B-V), and estimates their median values and their 68% confidence regions. We provide the option of outputting the full Monte Carlo distributions, and their kernel density estimates. We test our code on emission line measurements from a sample of nearby supernova host galaxies (z < 0.15) and compare our metallicity results with those from previous methods. We show that our metallicity estimates are consistent with previous methods but yield smaller statistical uncertainties. It should be noted that systematic uncertainties are not taken into account. We also offer visualization tools to assess the spread of the oxygen abundance in the different calibrators, as well as the shape of the estimated oxygen abundance distribution in each calibrator, and develop robust metrics for determining the appropriate Monte Carlo sample size. The code
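The uncertainty-propagation pattern described above can be sketched on a generic line ratio: perturb the fluxes by their Gaussian errors, recompute the diagnostic for each draw, and report the median and 68% region. The diagnostic below is a stand-in for illustration, not one of pyMCZ's actual calibrators, and the flux values are invented.

```python
import numpy as np

def mc_confidence(flux_a, err_a, flux_b, err_b, n=100000, seed=0):
    """Propagate Gaussian flux errors through a diagnostic by Monte Carlo
    sampling and summarize the result with a median and 68% region."""
    rng = np.random.default_rng(seed)
    a = rng.normal(flux_a, err_a, n)
    b = rng.normal(flux_b, err_b, n)
    good = (a > 0) & (b > 0)                  # drop unphysical negative fluxes
    diag = 12.0 + np.log10(a[good] / b[good])  # stand-in abundance-like metric
    lo, med, hi = np.percentile(diag, [16, 50, 84])
    return lo, med, hi

lo, med, hi = mc_confidence(5.0, 0.5, 2.0, 0.3)
print(f"{med:.3f} (+{hi - med:.3f} / -{med - lo:.3f})")
```

Because the diagnostic is nonlinear in the fluxes, the resulting confidence region is generally asymmetric, which is why reporting the full sampled distribution is preferable to simple linear error propagation.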
Bahreyni Toossi, Mohammad Taghi; Ghorbani, Mahdi; Mowlavi, Ali Asghar; Meigooni, Ali Soleimani
2012-01-01
Background: Dosimetric characteristics of a high dose rate (HDR) GZP6 Co-60 brachytherapy source have been evaluated following the American Association of Physicists in Medicine Task Group 43U1 (AAPM TG-43U1) recommendations for their clinical applications. Materials and methods: The MCNP-4C and MCNPX Monte Carlo codes were utilized to calculate the dose rate constant, two-dimensional (2D) dose distribution, radial dose function and 2D anisotropy function of the source. These parameters of this source are compared with the available data for the Ralstron 60Co and microSelectron 192Ir sources. In addition, a superimposition method was developed to extend the obtained results for GZP6 source No. 3 to the other GZP6 sources. Results: The simulated value of the dose rate constant for the GZP6 source was 1.104 ± 0.03 cGy h-1 U-1. The graphical and tabulated radial dose function and 2D anisotropy function of this source are presented here. The results of these investigations show that the dosimetric parameters of the GZP6 source are comparable to those for the Ralstron source. While the dose rate constants for the two 60Co sources are similar to that for the microSelectron 192Ir source, there are differences between the radial dose functions and anisotropy functions. The radial dose function of the 192Ir source is less steep than that of both 60Co source models. In addition, the 60Co sources show a more isotropic dose distribution than the 192Ir source. Conclusions: The superimposition method is applicable to produce dose distributions for other source arrangements from the dose distribution of a single source. The calculated dosimetric quantities of this new source can be introduced as input data to the GZP6 treatment planning system (TPS) and to validate the performance of the TPS. PMID:23077455
NASA Astrophysics Data System (ADS)
Kienle, Alwin; Hibst, Raimund
1996-05-01
Treatment of leg telangiectasia with a pulsed laser is investigated theoretically. The Monte Carlo method is used to calculate light propagation and absorption in the epidermis, the dermis and the ectatic blood vessel. Calculations are made for different diameters and depths of the vessel in the dermis. In addition, the scattering and absorption coefficients of the dermis are varied. On the basis of the considered damage model it is found that, for vessels with diameters between 0.3 mm and 0.5 mm, wavelengths of about 600 nm are optimal to achieve selective photothermolysis.
Refinement of overlapping local/global iteration method based on Monte Carlo/p-CMFD calculations
Jo, Y.; Yun, S.; Cho, N. Z.
2013-07-01
In this paper, the overlapping local/global (OLG) iteration method based on Monte Carlo/p-CMFD calculations is refined in two aspects. One is the consistent use of estimators to generate homogenized scattering cross sections. The other is that the incident or exiting angular interval is divided into multiple angular bins to modulate albedo boundary conditions for local problems. Numerical tests show that, compared to the one-angle-bin case of a previous study, the four-angle-bin case gives significantly improved results. (authors)
Electronic structure of solid FeO at high pressures by quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Kolorenč, Jindřich; Mitas, Lubos
2010-02-01
We determine the equation of state of stoichiometric FeO by employing the diffusion Monte Carlo method. The fermionic nodes of the many-body wave function are fixed by a single Slater determinant of one-particle orbitals extracted from spin-unrestricted Kohn-Sham equations utilizing a hybrid exchange-correlation functional. The calculated ambient-pressure properties agree very well with available experimental data. At approximately 65 GPa, the atomic lattice is found to change from the rocksalt B1 to the NiAs-type inverse B8 structure.
Green, P. L.; Worden, K.
2015-01-01
In this paper, the authors outline the general principles behind an approach to Bayesian system identification and highlight the benefits of adopting a Bayesian framework when attempting to identify models of nonlinear dynamical systems in the presence of uncertainty. It is then described how, through a summary of some key algorithms, many of the potential difficulties associated with a Bayesian approach can be overcome using Markov chain Monte Carlo (MCMC) methods. The paper concludes with a case study, where an MCMC algorithm is used to facilitate the Bayesian system identification of a nonlinear dynamical system from experimentally observed acceleration time histories. PMID:26303916
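As a concrete illustration of the MCMC machinery the paper summarizes, the sketch below runs a random-walk Metropolis sampler on a deliberately simple identification problem: a one-parameter linear model with Gaussian noise, invented here for illustration (the paper's case study uses a nonlinear dynamical system and measured accelerations):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data: noisy observations y = a*x + noise, with true a = 2.0
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(0.0, 0.1, x.size)

def log_posterior(a, sigma=0.1):
    """Gaussian likelihood with a flat prior (illustrative model)."""
    r = y - a * x
    return -0.5 * np.sum(r * r) / sigma**2

# Random-walk Metropolis: sample the posterior of the single parameter a
chain, a = [], 0.0
lp = log_posterior(a)
for _ in range(20_000):
    a_new = a + rng.normal(0.0, 0.1)        # symmetric proposal
    lp_new = log_posterior(a_new)
    if np.log(rng.random()) < lp_new - lp:  # Metropolis acceptance
        a, lp = a_new, lp_new
    chain.append(a)
post = np.array(chain[5000:])               # discard burn-in
print(f"a = {post.mean():.3f} +/- {post.std():.3f}")
```

The posterior mean recovers the true parameter, and the posterior spread quantifies the identification uncertainty, which is the central benefit of the Bayesian framework discussed above.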
Percolation conductivity of Penrose tiling by the transfer-matrix Monte Carlo method
NASA Astrophysics Data System (ADS)
Babalievski, Filip V.
1992-03-01
A generalization of the Derrida and Vannimenus transfer-matrix Monte Carlo method has been applied to calculations of the percolation conductivity of a Penrose tiling. Strips with a length of ~10^4 and widths from 3 to 19 have been used. Disregarding the differences for smaller strip widths (up to 7), the results show that the percolative conductivity of a Penrose tiling has a value very close to that of a square lattice. The estimate for the percolation transport exponent once more confirms the universality conjecture for the 0-1 distribution of resistors.
Microlens assembly error analysis for light field camera based on Monte Carlo method
NASA Astrophysics Data System (ADS)
Li, Sai; Yuan, Yuan; Zhang, Hao-Wei; Liu, Bin; Tan, He-Ping
2016-08-01
This paper describes a numerical analysis of microlens assembly errors in light field cameras using the Monte Carlo method. Assuming no manufacturing errors, a home-built program was used to simulate the coupling-distance, movement and rotation errors that can arise during microlens installation. By examining the resulting raw, sub-aperture and refocused images, we found that different microlens assembly errors produce different degrees of blurring and deformation, while the sub-aperture images exhibit aliasing, obscuration and other distortions that make the refocused images unclear.
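A Monte Carlo tolerance analysis of the kind described can be sketched as follows; the error distributions, their standard deviations, and the scalar blur metric are all illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def blur_metric(dz, dx, tilt, f=2.0e-3, pitch=1.4e-5):
    """Toy spot-blur estimate (m) for one microlens: defocus from the
    coupling-distance error dz, lateral decenter dx, and tilt (rad).
    Purely illustrative, not a real light-field imaging model."""
    defocus = np.abs(dz) * pitch / (2.0 * f)   # geometric defocus blur
    shift = np.abs(dx) + f * np.abs(tilt)      # image shift treated as blur
    return np.hypot(defocus, shift)

# Monte Carlo: draw assembly errors from assumed tolerance distributions
n = 100_000
dz   = rng.normal(0.0, 5e-6, n)    # coupling-distance error, sigma = 5 um
dx   = rng.normal(0.0, 1e-6, n)    # lateral decenter, sigma = 1 um
tilt = rng.normal(0.0, 1e-3, n)    # rotation error, sigma = 1 mrad
blur = blur_metric(dz, dx, tilt)
print(f"95th-percentile blur: {np.percentile(blur, 95) * 1e6:.2f} um")
```

The same sampling loop, with the toy metric replaced by the ray-traced image simulation of the paper, yields the statistical distribution of image degradation over the assembly tolerances.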
NASA Astrophysics Data System (ADS)
Grolemund, Daniel Lee
The dissertation concerns the extraction, via signal processing, of structural information from the scattering of low-megahertz, low-power ultrasonic waves in two specific media of great practical interest--fiber reinforced composites and soft biological tissue. In fiber reinforced composites, this work represents the first measurement of second-order statistics in porous laminates, and the first application of Monte Carlo methods to acoustical scattering in composites. A numerical model of porous-composite backscatter, suitable for direct numerical implementation, was derived. The model treats the total backscattered field as the result of a two-mode scattering process. In the first mode, the void-free composite is treated as a continuously varying medium in which the density and compressibility are functions of position. The second mode is the distribution of gas voids that failed to escape the material before gelation, which are treated as discrete Rayleigh scatterers. Convolution techniques were developed that allowed the numerical model to reproduce the long range order seen in the void-free composite. The results of the Monte Carlo derivation were coded, and simulations run with data sets that duplicate the properties of the composite samples used in the study. In the area of tissue characterization, two leading methods have been proposed to extract structural data from the raw backscattered waveforms. Both techniques were developed from an understanding of the periodicities created by semi-regularly spaced, coherent scatterers. In the second half of the dissertation, a complete analytical and numerical treatment of these two techniques was done from first principles. Computer simulations were performed to determine the general behavior of the algorithms. The main focus is on the envelope correlation spectrum, or ECS. Monte Carlo methods were employed to examine the signal-to-noise ratio of the ECS in terms of the variances of the backscattered
Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy
NASA Astrophysics Data System (ADS)
Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui
2014-06-01
The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation times. In MC dose calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to increase the calculation speed of the MC method for electron-photon transport with high precision, and ultimately to reduce the dose calculation time for accurate radiotherapy on an ordinary computer to a few hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for electron-photon coupled transport was presented, with focus on two aspects: first, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed was increased with only a slight reduction in accuracy; second, a variety of MC acceleration methods were applied, for example, reusing information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying appropriate variance reduction techniques to accelerate the convergence rate of the MC method. The fast MC method was tested on a number of simple physical models and on clinical cases including nasopharyngeal carcinoma, peripheral lung tumor and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. The method will later be applied to the Accurate/Advanced Radiation Therapy System ARTS as an MC dose verification module.
Differential Monte Carlo method for computing seismogram envelopes and their partial derivatives
NASA Astrophysics Data System (ADS)
Takeuchi, Nozomu
2016-05-01
We present an efficient method that is applicable to waveform inversions of seismogram envelopes for structural parameters describing scattering properties in the Earth. We developed a differential Monte Carlo method that can simultaneously compute synthetic envelopes and their partial derivatives with respect to structural parameters, which greatly reduces the required CPU time. The method has no theoretical limitations and applies to problems with anisotropic scattering in a heterogeneous background medium. The effects of S-wave polarity directions and phase differences between SH and SV components are taken into account. Several numerical examples are presented to show that intrinsic and scattering attenuation at the depth range of the asthenosphere have different impacts on the observed seismogram envelopes, suggesting that our method can potentially be applied to inversions for scattering properties in the deep Earth.
Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method
NASA Astrophysics Data System (ADS)
Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin
2015-12-01
The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used, and a linear approximation of the fission source distribution during a macro-time step is introduced to provide the delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Results on a continuous-energy problem are then presented.
Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method
Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin
2015-12-31
The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used, and a linear approximation of the fission source distribution during a macro-time step is introduced to provide the delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Results on a continuous-energy problem are then presented.
Hunter, J. L.; Sutton, T. M.
2013-07-01
In Monte Carlo iterated-fission-source calculations, relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)
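The biasing idea, equal sampling statistics in every region compensated by statistical weights so the expected source stays unbiased, can be sketched as follows (the region/power bookkeeping is a simplified stand-in for the fission-bank manipulation in a real MC code):

```python
import random

def uniform_fission_sites(sites, power, n_total, rng=random):
    """Sample an equal number of fission source sites from every region and
    attach a weight so the weighted source still matches the true power shape.
    `sites` maps region -> candidate site list, `power` maps region -> power."""
    total_power = float(sum(power.values()))
    per_region = n_total // len(power)
    source = []
    for region, p in power.items():
        # weight restores the true region probability despite uniform sampling
        weight = (p / total_power) * n_total / per_region
        for site in rng.choices(sites[region], k=per_region):
            source.append((site, weight))
    return source
```

Low-power regions then contribute as many histories as high-power ones, flattening the tally uncertainties, while the weighted source distribution remains an unbiased estimate of the fission source.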
NASA Astrophysics Data System (ADS)
Iakovidis, S.; Apostolidis, C.; Samaras, T.
2015-04-01
The objective of the present work is the application of the Monte Carlo method of GUM Supplement 1 (GUM S1) to evaluating uncertainty in electromagnetic field measurements, and the comparison of the results with those obtained using the 'standard' method (GUM). In particular, the two methods are applied to evaluate the field measurement uncertainty of a frequency-selective radiation meter and the Total Exposure Quotient (TEQ) uncertainty. Comparative results are presented to highlight cases where GUM S1 results deviate significantly from those obtained using the GUM, such as the presence of a non-linear mathematical model connecting the inputs with the output quantity (the TEQ model) or the presence of a dominant non-normal distribution of an input quantity (the U-shaped mismatch uncertainty). The deviation between the results of the two methods can even lead to different decisions regarding conformance with the exposure reference levels.
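A minimal sketch of the GUM-versus-GUM S1 comparison for a toy field-measurement model (the model, the numerical values, and the arcsine mismatch half-width are illustrative assumptions, not the paper's meter model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Toy model: E = E0 * (1 + m), where m is a mismatch error following a
# U-shaped (arcsine) distribution of half-width a.
E0, u_E0 = 10.0, 0.5      # indicated field (V/m) and its standard uncertainty
a = 0.2                   # mismatch half-width (relative, assumed)

# GUM S1: propagate distributions by Monte Carlo sampling
E0_s = rng.normal(E0, u_E0, n)
m_s = a * np.sin(rng.uniform(0.0, 2.0 * np.pi, n))  # U-shaped samples
E_s = E0_s * (1.0 + m_s)
lo, hi = np.percentile(E_s, [2.5, 97.5])            # 95% coverage interval

# GUM: linear combination of standard uncertainties; u(m) = a/sqrt(2)
u_gum = np.sqrt(u_E0**2 + (E0 * a / np.sqrt(2.0))**2)
print(f"GUM:    E = {E0:.2f} +/- {1.96 * u_gum:.2f} V/m (k = 1.96)")
print(f"GUM S1: 95% interval = [{lo:.2f}, {hi:.2f}] V/m")
```

With a dominant U-shaped input, the Monte Carlo coverage interval is visibly non-Gaussian (flat-topped and narrower in the tails), which is exactly the kind of case where the two methods can disagree on conformance decisions.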
Ghulghazaryan, Ruben G; Hayryan, Shura; Hu, Chin-Kun
2007-02-01
An efficient combination of the Wang-Landau and transition matrix Monte Carlo methods for protein and peptide simulations is described. At the initial stage of the simulation the algorithm behaves like the Wang-Landau algorithm, allowing the entire energy interval to be sampled, and at later stages it behaves like the transition matrix Monte Carlo method, which has significantly lower statistical errors. This combination allows fast convergence to the correct density of states. We propose that the violation of the TTT identities may serve as a qualitative criterion for checking the convergence of the density of states. The simulation process can be parallelized by dividing the entire simulation interval into subintervals. The violation of ergodicity in this case is discussed. We test the algorithm on a set of peptides of different lengths and observe good statistical convergence of the density of states. We believe that the method is of general nature and can be used for simulations of other systems with either discrete or continuous energy spectra. PMID:17195159
A new Monte Carlo method for dynamical evolution of non-spherical stellar systems
NASA Astrophysics Data System (ADS)
Vasiliev, Eugene
2015-01-01
We have developed a novel Monte Carlo method for simulating the dynamical evolution of stellar systems in arbitrary geometry. The orbits of stars are followed in a smooth potential represented by a basis-set expansion and perturbed after each timestep using local velocity diffusion coefficients from standard two-body relaxation theory. The potential and diffusion coefficients are updated after an interval of time that is a small fraction of the relaxation time, but may be longer than the dynamical time. Our approach is thus a bridge between Spitzer's formulation of the Monte Carlo method and the temporally smoothed self-consistent field method. The primary advantages are the ability to follow the secular evolution of the shape of the stellar system, and the possibility of scaling the amount of two-body relaxation to the necessary value, unrelated to the actual number of particles in the simulation. Possible future applications of this approach in galaxy dynamics include the problem of consumption of stars by a massive black hole in a non-spherical galactic nucleus, the evolution of binary supermassive black holes, and the influence of chaos on the shape of galaxies, while for globular clusters it may be used to study the influence of rotation.
Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method
NASA Astrophysics Data System (ADS)
Sudheer, C. D.; Krishnan, S.; Srinivasan, A.; Kent, P. R. C.
2013-02-01
Diffusion Monte Carlo is a highly accurate Quantum Monte Carlo method for electronic structure calculations of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency on parallel machines. This step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm: a simple renumbering of the MPI ranks based on proximity and a space filling curve significantly improves the MPI Allgather performance. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near identical computational tasks that require load balancing.
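The classical alias method on which such O(1) redistribution schemes build can be sketched as follows (Walker's table construction for discrete sampling; the paper's parallel load-balancing variant is not reproduced here):

```python
import random

def build_alias(weights):
    """Walker alias-table construction: O(n) setup, then O(1) sampling."""
    n = len(weights)
    total = float(sum(weights))
    prob = [w * n / total for w in weights]
    alias = list(range(n))
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                      # s takes its deficit from l
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    for i in small + large:               # mop up floating-point leftovers
        prob[i] = 1.0
    return prob, alias

def draw(prob, alias, rng=random):
    """Sample an index in O(1): pick a column, then its alias with prob 1-p."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```

Each column holds at most two outcomes, which mirrors the load-balancing property highlighted above: every under-loaded process can be topped up from at most one over-loaded partner.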
Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method
Sudheer, C. D.; Krishnan, S.; Srinivasan, A.; Kent, P. R. C.
2013-02-01
Diffusion Monte Carlo is the most accurate widely used Quantum Monte Carlo method for the electronic structure of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency and avoid accumulation of systematic errors on parallel machines. The load balancing step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near-identical computational tasks that require load balancing.
NASA Astrophysics Data System (ADS)
Umezawa, Naoto; Tsuneyuki, Shinji; Ohno, Takahisa; Shiraishi, Kenji; Chikyow, Toyohiro
2005-03-01
The transcorrelated (TC) method is a useful approach to optimizing a Jastrow-Slater-type many-body wave function Ψ = FD. The basic idea of the TC method [1] is a similarity transformation of the many-body Hamiltonian H with respect to the Jastrow factor F, H_TC = (1/F)HF, in order to incorporate the correlation effect into H_TC. Both F and D are optimized by minimizing the variance σ² = ∫ |H_TC D − E D|² d^(3N)x. The optimization of F is implemented by a variational Monte Carlo calculation, and D is determined by the TC self-consistent-field equation for the one-body wave functions φ_μ(x), which is derived from the functional derivative of σ² with respect to φ_μ(x). In this talk, we will present results given by the transcorrelated variational Monte Carlo (TC-VMC) method for the ground state [2] and the excited states of atoms [3]. [1] S. F. Boys and N. C. Handy, Proc. Roy. Soc. A 309, 209; 310, 43; 310, 63; 311, 309 (1969). [2] N. Umezawa and S. Tsuneyuki, J. Chem. Phys. 119, 10015 (2003). [3] N. Umezawa and S. Tsuneyuki, J. Chem. Phys. 121, 7070 (2004).
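The variance-minimization step described above can be illustrated on a toy problem: a 1D harmonic oscillator with trial function ψ = exp(−a x²), for which the local-energy variance vanishes at the exact parameter a = 1/2 (a textbook variational Monte Carlo exercise, not the TC-VMC scheme itself):

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_psi2(a, n=20_000, step=1.0):
    """Sample x ~ psi^2 with psi = exp(-a x^2) via the Metropolis algorithm."""
    x, xs = 0.0, np.empty(n)
    for i in range(n):
        xp = x + rng.uniform(-step, step)
        if rng.random() < np.exp(-2.0 * a * (xp**2 - x**2)):
            x = xp
        xs[i] = x
    return xs

def local_energy(a, x):
    # E_L = -(1/2) psi''/psi + x^2/2 for psi = exp(-a x^2)
    return a + x**2 * (0.5 - 2.0 * a**2)

# Variance minimization: the variance of E_L vanishes at the exact a = 1/2
for a in (0.3, 0.5, 0.7):
    x = metropolis_psi2(a)
    EL = local_energy(a, x)
    print(f"a = {a:.1f}  <E> = {EL.mean():.4f}  var = {EL.var():.5f}")
```

Scanning the parameter and minimizing the sampled variance of the local energy is the same principle as minimizing σ² over F and D in the TC method, just in one dimension with a single parameter.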
NASA Astrophysics Data System (ADS)
Zou, Jinming; Saven, Jeffery G.
2003-02-01
For complex multidimensional systems, Monte Carlo methods are useful for sampling probable regions of a configuration space and, in the context of annealing, for determining "low energy" or "high scoring" configurations. Such methods have been used in protein design as a means of identifying amino acid sequences that are energetically compatible with a particular backbone structure. As with many other applications of Monte Carlo methods, such searches can be inefficient if trial configurations (protein sequences) in the Markov chain are chosen randomly. Here a mean-field biased Monte Carlo method (MFBMC) is presented and applied to designing and sampling protein sequences. The MFBMC method uses predetermined sequence identity probabilities w_i(α) to bias the sequence selection. The w_i(α) are calculated using a self-consistent, mean-field theory that can estimate the number and composition of sequences having predetermined values of energetically related foldability criteria. The MFBMC method is applied to both a simple protein model, the 27-mer lattice model, and an all-atom protein model. Compared to conventional Monte Carlo (MC) and configurational bias Monte Carlo (BMC), the MFBMC method converges faster to low energy sequences and samples such sequences more efficiently. The MFBMC method also tolerates faster cooling rates than the MC and BMC methods. The MFBMC method can be applied not only to protein sequence search, but also to a wide variety of polymeric and condensed phase systems.
Puibasset, Joël
2005-04-01
The effect of confinement on the phase behavior of simple fluids is still an area of intensive research. In between experiment and theory, molecular simulation is a powerful tool to study the effect of confinement in realistic porous materials containing some disorder. Previous simulation works aiming at establishing the phase diagram of a confined Lennard-Jones-type fluid concentrated on simple pore geometries (slits or cylinders). The development of the Gibbs ensemble Monte Carlo technique by Panagiotopoulos [Mol. Phys. 61, 813 (1987)] greatly favored the study of such simple geometries for two reasons. First, the technique is very efficient for calculating the phase diagram, since each run (at a given temperature) converges directly to an equilibrium between a gaslike and a liquidlike phase. Second, due to the volume exchange procedure between the two phases, at least one invariant direction of space is required for applicability of this method, which is the case for slits or cylinders. Generally, the introduction of some disorder in such simple pores breaks the initial invariance in one of the space directions and prevents working in the Gibbs ensemble. The simulation techniques for such disordered systems are numerous (grand canonical Monte Carlo, molecular dynamics, histogram reweighting, N-P-T+test method, Gibbs-Duhem integration procedure, etc.). However, the Gibbs ensemble technique, which directly yields the coexistence between phases, was never generalized to such systems. In this work, we focus on two weakly disordered pores to which a modified Gibbs ensemble Monte Carlo technique can be applied. One of the pores is geometrically undulated, whereas the second is cylindrical but presents a chemical variation which gives rise to a modulation of the wall potential. In the first case almost no change in the phase diagram is observed, whereas in the second strong modifications are reported. PMID:15847492
The applicability of certain Monte Carlo methods to the analysis of interacting polymers
Krapp, D.M. Jr.
1998-05-01
The authors consider polymers, modeled as self-avoiding walks with interactions on a hexagonal lattice, and examine the applicability of certain Monte Carlo methods for estimating their mean properties at equilibrium. Specifically, the authors use the pivot algorithm of Madras and Sokal with Metropolis rejection to locate the phase transition, which is known to occur at β_crit ≈ 0.99, and to recalculate the known value of the critical exponent ν ≈ 0.58 of the system at β = β_crit. Although the pivot-Metropolis algorithm works well for short walks (N < 300), for larger N the Metropolis criterion combined with the self-avoidance constraint leads to an unacceptably small acceptance fraction. In addition, the algorithm becomes effectively non-ergodic, getting trapped in valleys whose centers are local energy minima in phase space, leading to convergence towards different values of ν. The authors use a variety of tools, e.g. entropy estimation and histograms, to improve the results for large N, but they are only of limited effectiveness. Their estimate of β_crit using smaller values of N is 1.01 ± 0.01, and the estimate for ν at this value of β is 0.59 ± 0.005. They conclude that even a seemingly simple system and a Monte Carlo algorithm which satisfies, in principle, ergodicity and detailed balance conditions can in practice fail to sample phase space accurately and thus not allow accurate estimation of thermal averages. This should serve as a warning to people who use Monte Carlo methods in complicated polymer folding calculations. The structure of the phase space combined with the algorithm itself can lead to surprising behavior, and simply increasing the number of samples in the calculation does not necessarily lead to more accurate results.
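A square-lattice version of the pivot-plus-Metropolis sampler can be sketched as follows (the study uses a hexagonal lattice; the contact energy of −β per non-bonded nearest-neighbour pair is an illustrative interaction model, with energies in units of kT):

```python
import math
import random

# Non-identity point-group symmetries of the square lattice (a simplification
# of the hexagonal lattice used in the study): rotations and reflections.
SYMS = [lambda v: (-v[1], v[0]), lambda v: (-v[0], -v[1]),
        lambda v: (v[1], -v[0]), lambda v: (v[0], -v[1]),
        lambda v: (-v[0], v[1]), lambda v: (v[1], v[0]),
        lambda v: (-v[1], -v[0])]

def energy(walk, beta):
    """-beta per non-bonded nearest-neighbour contact."""
    index = {site: i for i, site in enumerate(walk)}
    contacts = 0
    for i, (x, y) in enumerate(walk):
        for nb in ((x + 1, y), (x, y + 1)):
            j = index.get(nb)
            if j is not None and abs(j - i) > 1:
                contacts += 1
    return -beta * contacts

def pivot_step(walk, beta, rng=random):
    """One pivot move with Metropolis acceptance; returns the (new) walk."""
    k = rng.randrange(1, len(walk) - 1)       # pivot site
    g = rng.choice(SYMS)                      # random lattice symmetry
    px, py = walk[k]
    tail = []
    for x, y in walk[k + 1:]:
        dx, dy = g((x - px, y - py))
        tail.append((px + dx, py + dy))
    new = walk[:k + 1] + tail
    if len(set(new)) < len(new):              # reject: self-avoidance broken
        return walk
    dE = energy(new, beta) - energy(walk, beta)
    if dE <= 0.0 or rng.random() < math.exp(-dE):
        return new
    return walk
```

For large N at strong coupling, exactly as the abstract reports, the compact low-energy configurations cause most pivot proposals to violate self-avoidance or be rejected by the Metropolis criterion, so the acceptance fraction collapses.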
Self-optimizing Monte Carlo method for nuclear well logging simulation
NASA Astrophysics Data System (ADS)
Liu, Lianyan
1997-09-01
In order to increase the efficiency of Monte Carlo simulation of nuclear well logging problems, a new method has been developed for variance reduction. With this method, an importance map is generated as a by-product of the regular Monte Carlo calculation, and the importance map is later used to conduct splitting and Russian roulette for particle population control. By adopting a spatial mesh system that is independent of the physical geometrical configuration, the method allows superior user-friendliness. The new method is incorporated into the general-purpose Monte Carlo code MCNP4A through a patch file. Two nuclear well logging problems, a neutron porosity tool and a gamma-ray lithology density tool, are used to test its performance. The calculations are sped up over analog simulation by factors of 120 and 2600 for the neutron porosity tool and the gamma-ray lithology density log, respectively. The new method outperforms MCNP's cell-based weight window by a factor of 4-6, as measured by the converged figures of merit. An indirect comparison indicates that the new method also outperforms the AVATAR process for gamma-ray density tool problems. Even though it takes quite some time to generate a reasonable importance map from an analog run, a good initial map can create significant CPU time savings. This makes the method especially suitable for nuclear well logging problems, since one or several reference importance maps are usually available for a given tool. The study shows that the spatial mesh sizes should be chosen according to the mean free path. The overhead of the importance map generator is 6% and 14% for the neutron and gamma-ray cases, respectively. The learning ability towards a correct importance map is also demonstrated. Although false learning may happen, physical judgment aided by contributon maps can help diagnose it. Calibration and analysis are performed for the neutron tool and the gamma-ray tool. Due to the fact that a very
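The splitting and Russian-roulette population control driven by an importance map can be sketched as follows (the cell-indexed importances and the 2.0/0.5 window bounds are illustrative choices):

```python
import random

def population_control(particles, importance, target=1.0, rng=random):
    """Weight-window-style control driven by an importance map: split
    particles whose weight*importance is high, play Russian roulette on the
    low ones. Expected total weight is preserved in both branches."""
    out = []
    for cell, w in particles:
        ratio = w * importance[cell] / target
        if ratio > 2.0:                        # splitting
            n = int(ratio)
            out.extend((cell, w / n) for _ in range(n))
        elif ratio < 0.5:                      # Russian roulette
            if rng.random() < ratio:           # survive with probability ratio
                out.append((cell, w / ratio))  # ...at increased weight
            # else: particle killed
        else:
            out.append((cell, w))
    return out
```

Splitting leaves n copies of weight w/n (total weight unchanged), and roulette survivors carry weight w/ratio with survival probability ratio (expected weight unchanged), so the population is steered toward important cells without biasing the tallies.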
Stochastic modeling of polarized light scattering using a Monte Carlo based stencil method.
Sormaz, Milos; Stamm, Tobias; Jenny, Patrick
2010-05-01
This paper deals with an efficient and accurate simulation algorithm to solve the vector Boltzmann equation for polarized light transport in scattering media. The approach is based on a stencil method, which was previously developed for unpolarized light scattering and proved to be much more efficient (speedup factors of up to 10 were reported) than classical Monte Carlo while being equally accurate. To validate the new stencil method, a substrate composed of spherical non-absorbing particles embedded in a non-absorbing medium was considered. The corresponding single-scattering Mueller matrix, which is required to model scattering of polarized light, was determined based on the Lorenz-Mie theory. From simulations of a reflected polarized laser beam, the Mueller matrix of the substrate was computed and compared with an established reference. The agreement is excellent, and it could be demonstrated that a significant speedup of the simulations is achieved by the stencil approach compared with classical Monte Carlo. PMID:20448777
Efficient continuous-time quantum Monte Carlo method for the ground state of correlated fermions
NASA Astrophysics Data System (ADS)
Wang, Lei; Iazzi, Mauro; Corboz, Philippe; Troyer, Matthias
2015-06-01
We present the ground state extension of the efficient continuous-time quantum Monte Carlo algorithm for lattice fermions of M. Iazzi and M. Troyer, Phys. Rev. B 91, 241118 (2015), 10.1103/PhysRevB.91.241118. Based on continuous-time expansion of an imaginary-time projection operator, the algorithm is free of systematic error and scales linearly with projection time and interaction strength. Compared to the conventional quantum Monte Carlo methods for lattice fermions, this approach has greater flexibility and is easier to combine with powerful machinery such as histogram reweighting and extended ensemble simulation techniques. We discuss the implementation of the continuous-time projection in detail using the spinless t -V model as an example and compare the numerical results with exact diagonalization, density matrix renormalization group, and infinite projected entangled-pair states calculations. Finally we use the method to study the fermionic quantum critical point of spinless fermions on a honeycomb lattice and confirm previous results concerning its critical exponents.
Adapting phase-switch Monte Carlo method for flexible organic molecules
NASA Astrophysics Data System (ADS)
Bridgwater, Sally; Quigley, David
2014-03-01
The role of cholesterol in lipid bilayers has been widely studied via molecular simulation; however, there has been relatively little work on crystalline cholesterol in biological environments. Recent work has linked the crystallisation of cholesterol in the body with heart attacks and strokes. Any attempt to model this process will require new models and advanced sampling methods to capture and quantify the subtle polymorphism of solid cholesterol, in which two crystalline phases are separated by a phase transition close to body temperature. To this end, we have adapted phase-switch Monte Carlo for use with flexible molecules, to calculate the free energy difference between crystal polymorphs to a high degree of accuracy. The method samples an order parameter which divides the displacement space of the N molecules into regions energetically favourable for each polymorph, and which is traversed using biased Monte Carlo. Results for a simple model of butane will be presented, demonstrating that conformational flexibility can be correctly incorporated within a phase-switching scheme. Extension to a coarse-grained model of cholesterol and the resulting free energies will be discussed.
Investigation of a New Monte Carlo Method for the Transitional Gas Flow
Luo, X.; Day, Chr.
2011-05-20
The Direct Simulation Monte Carlo method (DSMC) is well developed for rarefied gas flow in the transition flow regime when 0.01
A Deterministic-Monte Carlo Hybrid Method for Time-Dependent Neutron Transport Problems
Justin Pounders; Farzad Rahnema
2001-10-01
A new deterministic-Monte Carlo hybrid solution technique is derived for the time-dependent transport equation. This new approach is based on dividing the time domain into a number of coarse intervals and expanding the transport solution in a series of polynomials within each interval. The solution within each interval can be represented in terms of arbitrary source terms by using precomputed response functions. In the current work, the time-dependent response function computations are performed using the Monte Carlo method, while the global time-step march is performed deterministically. This work extends previous work by coupling the time-dependent expansions to space- and angle-dependent expansions to fully characterize the 1D transport response/solution. More generally, this approach represents an incremental extension of the steady-state coarse-mesh transport method that is based on global-local decompositions of large neutron transport problems. A homogeneous slab is discussed as an example of the new developments.
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM in combination with Monte Carlo simulation (MCS) reduces the computational cost and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of the mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function through a hybrid of particle-swarm and Nelder-Mead simplex optimization, thus achieving better correlation between simulation and test. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and of the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.
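The surrogate-plus-sampling idea above can be sketched in a few lines: fit an inexpensive polynomial response surface to a handful of full-model runs, then push the Monte Carlo samples through the surrogate instead of the expensive model. A minimal sketch, assuming a hypothetical two-parameter toy model and a plain quadratic basis (the paper uses an incomplete fourth-order polynomial with F-test screening):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "expensive" model: response depends quadratically on two parameters.
def simulate(k1, k2):
    return 2.0 * k1 + 0.5 * k2 ** 2 + 0.1 * k1 * k2

# Design of experiments: sample the parameter space and run the full model.
X = rng.uniform(-1.0, 1.0, size=(50, 2))
y = np.array([simulate(k1, k2) for k1, k2 in X])

# Quadratic basis: 1, k1, k2, k1^2, k2^2, k1*k2.
def basis(P):
    k1, k2 = P[:, 0], P[:, 1]
    return np.column_stack([np.ones_like(k1), k1, k2, k1**2, k2**2, k1 * k2])

# Least-squares fit of the response surface coefficients.
coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

# Monte Carlo propagation through the cheap RSM instead of the full model.
samples = rng.normal(loc=[0.3, -0.2], scale=[0.05, 0.05], size=(10_000, 2))
responses = basis(samples) @ coef

mean_rsm = responses.mean()
mean_true = np.array([simulate(a, b) for a, b in samples]).mean()
```

Because the toy model is itself exactly quadratic, the surrogate reproduces the full-model Monte Carlo mean essentially exactly; for a real model, the surrogate error would have to be checked against held-out runs.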
Applications of a Monte Carlo whole-core microscopic depletion method
Hutton, J.L.; Butement, A.W.; Watt, S.; Shadbolt, R.D.
1995-12-31
In the WIMS-6 (Ref. 1) reactor physics program scheme a three-dimensional microscopic depletion method has been developed using Monte Carlo fluxes. Together with microscopic cross sections, these give nuclide reaction rates, which are used to solve nuclide depletion equations for each region. An extension of the method, enabling rapid whole-core calculations, has been implemented in the long-established companion code MONK5W. Predictions at successive depletion time steps are based on a calculational route where both geometry and cross sections are accurately represented, providing a reliable and independent approach for benchmarking other methods. Newly developed tracking and storage procedures in MONK5W enable whole core burnup modeling on a desktop computer. Theory and applications are presented in this paper.
Uncertainty analysis using Monte Carlo method in the measurement of phase by ESPI
Anguiano Morales, Marcelino; Martinez, Amalia; Rayas, J. A.; Cordero, Raul R.
2008-04-15
A method for simultaneously measuring whole-field in-plane displacements using optical fibers, based on the dual-beam illumination principle of electronic speckle pattern interferometry (ESPI), is presented in this paper. A set of single-mode optical fibers and a beamsplitter are employed to split the laser beam into four beams of equal intensity. One pair of fibers illuminates the sample in the horizontal plane, so it is sensitive only to horizontal in-plane displacement. Another pair of optical fibers is set to be sensitive only to vertical in-plane displacement. The fibers in each pair differ in length to avoid unwanted interference. By means of a Fourier-transform method of fringe-pattern analysis (the Takeda method), we obtain quantitative whole-field displacement data. We found the uncertainty associated with the phases by means of a Monte Carlo-based technique.
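The final step, finding the phase uncertainty by a Monte Carlo technique, follows the standard recipe: draw the input quantities from their assumed distributions, evaluate the measurement model for each draw, and summarize the spread of the output. A minimal sketch with hypothetical quadrature-signal values and uncertainties (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical measured quadrature signals with their standard uncertainties.
I_sin, u_sin = 0.42, 0.01
I_cos, u_cos = 0.65, 0.01

# Monte Carlo propagation: draw inputs, evaluate the phase, summarize the output.
N = 100_000
phase = np.arctan2(rng.normal(I_sin, u_sin, N), rng.normal(I_cos, u_cos, N))

phase_mean = phase.mean()
phase_std = phase.std(ddof=1)  # standard uncertainty of the phase
```

The sample standard deviation of the simulated phases plays the role of the standard uncertainty that linearized (GUM-style) propagation would otherwise approximate.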
An asymptotic preserving Monte Carlo method for the multispecies Boltzmann equation
NASA Astrophysics Data System (ADS)
Zhang, Bin; Liu, Hong; Jin, Shi
2016-01-01
An asymptotic preserving (AP) scheme is efficient in solving multiscale kinetic equations with a wide range of Knudsen numbers. In this paper, we generalize the asymptotic preserving Monte Carlo method (AP-DSMC) developed in [25] to the multispecies Boltzmann equation. This method is based on the successive penalty method [26], which originated from the BGK-penalization-based AP scheme developed in [7]. For the multispecies Boltzmann equation, the penalizing Maxwellian should use the unified Maxwellian as suggested in [12]. We give the details of AP-DSMC for the multispecies Boltzmann equation, show its AP property, and verify through several numerical examples that the scheme allows a time step much larger than the mean free time, thus making it much more efficient than classical DSMC for flows with possibly small Knudsen numbers.
Simulating rotationally inelastic collisions using a direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Schullian, O.; Loreau, J.; Vaeck, N.; van der Avoird, A.; Heazlewood, B. R.; Rennick, C. J.; Softley, T. P.
2015-12-01
A new approach to simulating rotational cooling using a direct simulation Monte Carlo (DSMC) method is described and applied to the rotational cooling of ammonia seeded into a helium supersonic jet. The method makes use of ab initio rotational state changing cross sections calculated as a function of collision energy. Each particle in the DSMC simulations is labelled with a vector of rotational populations that evolves with time. Transfer of energy into translation is calculated from the mean energy transfer for this population at the specified collision energy. The simulations are compared with a continuum model for the on-axis density, temperature and velocity; rotational temperature as a function of distance from the nozzle is in accord with expectations from experimental measurements. The method could be applied to other types of gas mixture dynamics under non-uniform conditions, such as buffer gas cooling of NH3 by He.
Direct simulation Monte Carlo calculation of rarefied gas drag using an immersed boundary method
NASA Astrophysics Data System (ADS)
Jin, W.; Kleijn, C. R.; van Ommen, J. R.
2016-06-01
For simulating rarefied gas flows around a moving body, an immersed boundary method is presented here in conjunction with the Direct Simulation Monte Carlo (DSMC) method in order to allow the movement of a three-dimensional immersed body on top of a fixed background grid. The simulated DSMC particles are reflected exactly at the landing points on the surface of the moving immersed body, while the effective cell volumes are taken into account for calculating the collisions between molecules. The effective cell volumes are computed by utilizing the Lagrangian intersection points between the immersed boundary and the fixed background grid with a simple polyhedra regeneration algorithm. This method has been implemented in OpenFOAM and validated by computing the drag forces exerted on steady and moving spheres and comparing the results to those from conventional body-fitted-mesh DSMC simulations and to analytical approximations.
NASA Astrophysics Data System (ADS)
Hull, Anthony B.; Ambruster, C.; Jewell, E.
2012-01-01
Simple Monte Carlo simulations can assist both the cultural astronomy researcher while the research design is developed and the eventual evaluators of research products. Following the method we describe allows assessment of the probability of false positives associated with a site. Even seemingly evocative alignments may be meaningless, depending on the site characteristics and the number of degrees of freedom the researcher allows. In many cases, an observer may have to limit comments to "it is nice and it might be culturally meaningful", rather than saying "it is impressive so it must mean something". We describe a basic language with an associated set of attributes to be cataloged. These can be used to set up simple Monte Carlo simulations for a site. Without corroborating cultural evidence, or trends with similar attributes (for example, a number of sites showing the same anticipatory date), the Monte Carlo simulation can be used as a filter to establish the likelihood that the observed alignment phenomena are the result of random factors. Such analysis may temper any eagerness to prematurely attribute cultural meaning to an observation. For the most complete description of an archaeological site, we urge researchers to capture the site attributes in a manner that permits statistical analysis. We also encourage cultural astronomers to record that which does not work, and that which may seem to align but has no discernible meaning. Properly reporting situational information as tenets of the research design will reduce the subjective nature of archaeoastronomical interpretation. Examples from field work will be discussed.
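The filtering idea can be made concrete with a small simulation: generate many fake "sites" with randomly oriented sight-lines and count how often at least one line falls within tolerance of a culturally interesting azimuth. All numbers below (target azimuths, tolerance, number of sight-lines) are illustrative assumptions, not values from the study:

```python
import random

random.seed(42)

# Hypothetical target azimuths (degrees), e.g. solstice rise/set directions,
# and an alignment tolerance in degrees.
targets = [60.0, 120.0, 240.0, 300.0]
tolerance = 2.0

def hits(azimuths):
    """Count sight-lines that fall within tolerance of any target azimuth."""
    return sum(
        any(abs((az - t + 180) % 360 - 180) <= tolerance for t in targets)
        for az in azimuths
    )

# Monte Carlo: many random sites, each with 6 randomly oriented sight-lines.
n_lines, trials = 6, 20_000
count = sum(
    hits([random.uniform(0, 360) for _ in range(n_lines)]) >= 1
    for _ in range(trials)
)
p_false_positive = count / trials
```

With these assumptions, roughly one random site in four shows at least one chance "alignment", illustrating how easily a single evocative alignment can arise from random factors alone.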
Perfetti, Christopher M.; Rearden, Bradley T.
2016-03-01
The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization and reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.
Monte Carlo method for predicting of cardiac toxicity: hERG blocker compounds.
Gobbi, Marco; Beeg, Marten; Toropova, Mariya A; Toropov, Andrey A; Salmona, Mario
2016-05-27
The estimation of the cardiotoxicity of compounds is an important task for drug discovery as well as for risk assessment in an ecological context. Experimental estimation of the above endpoint is complex and expensive; hence, theoretical computational methods are an attractive alternative to direct experiment. A model for the cardiac toxicity (pIC50) of 400 hERG blocker compounds is built up using the Monte Carlo method. Three different splits into a visible training set (in fact, the training set plus the calibration set) and an invisible validation set were examined. The predictive potential is very good for all examined splits. The statistical characteristics for the external validation set are (i) coefficient of determination r² = 0.90-0.93 and (ii) root-mean-squared error s = 0.30-0.40. PMID:27067105
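The external-validation statistics quoted above are straightforward to compute. A minimal sketch with made-up observed and predicted pIC50 values (not the paper's data), showing how r² and the root-mean-squared error s are obtained:

```python
import math

# Hypothetical observed vs. predicted pIC50 values for an external validation set.
observed  = [5.1, 6.3, 7.0, 5.8, 6.9, 7.4, 6.1, 5.5]
predicted = [5.3, 6.0, 7.2, 5.6, 6.7, 7.1, 6.4, 5.4]

n = len(observed)
mean_obs = sum(observed) / n

# Coefficient of determination r^2 and root-mean-squared error s.
ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
ss_tot = sum((o - mean_obs) ** 2 for o in observed)
r2 = 1.0 - ss_res / ss_tot
rmse = math.sqrt(ss_res / n)
```

For these made-up numbers the statistics happen to fall in the same ranges the abstract reports, but that is a property of the example data, not a reproduction of the study.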
Monte Carlo methods for localization of cones given multielectrode retinal ganglion cell recordings.
Sadeghi, K; Gauthier, J L; Field, G D; Greschner, M; Agne, M; Chichilnisky, E J; Paninski, L
2013-01-01
It has recently become possible to identify cone photoreceptors in primate retina from multi-electrode recordings of ganglion cell spiking driven by visual stimuli of sufficiently high spatial resolution. In this paper we present a statistical approach to the problem of identifying the number, locations, and color types of the cones observed in this type of experiment. We develop an adaptive Markov Chain Monte Carlo (MCMC) method that explores the space of cone configurations, using a Linear-Nonlinear-Poisson (LNP) encoding model of ganglion cell spiking output, while analytically integrating out the functional weights between cones and ganglion cells. This method provides information about our posterior certainty about the inferred cone properties, and additionally leads to improvements in both the speed and quality of the inferred cone maps, compared to earlier "greedy" computational approaches. PMID:23194406
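The core of such an approach is a Markov Chain Monte Carlo sampler that proposes changes to the configuration and accepts them with the Metropolis probability. A deliberately stripped-down sketch on a one-dimensional toy posterior (the actual method is adaptive and explores cone numbers, locations, and color types jointly, with the cone-to-cell weights integrated out analytically):

```python
import math
import random
import statistics

random.seed(7)

# Toy posterior over a single cone's 1-D location: a Gaussian centred at 2.0
# with standard deviation 0.5.
def log_post(x):
    return -((x - 2.0) ** 2) / (2 * 0.5 ** 2)

x, chain = 0.0, []
for step in range(50_000):
    prop = x + random.gauss(0.0, 0.5)          # symmetric random-walk proposal
    delta = log_post(prop) - log_post(x)
    if delta >= 0 or random.random() < math.exp(delta):
        x = prop                                # Metropolis accept
    if step >= 5_000:                           # discard burn-in
        chain.append(x)

post_mean = statistics.fmean(chain)
post_sd = statistics.stdev(chain)
```

The retained chain approximates the posterior, so its mean and spread directly quantify the "posterior certainty about the inferred cone properties" that the abstract refers to.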
Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo
2016-04-01
Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values, while keeping the arterial resistance constant. This last value was obtained for each subject using the arterial flow, and was a necessary consideration in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
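The estimation strategy, randomly generating parameter values and keeping those that minimize the pressure error, can be sketched as a pure random search. The Windkessel model is reduced here to its diastolic pressure decay P(t) = P0·exp(-t/(Rp·C)); all numerical values are illustrative assumptions, not data from the study:

```python
import math
import random

random.seed(3)

# Synthetic "physiological" diastolic pressure decay with true Rp*C = 1.2 s.
P0, tau_true = 100.0, 1.2
ts = [i * 0.05 for i in range(40)]
P_meas = [P0 * math.exp(-t / tau_true) for t in ts]

def sse(Rp, C):
    """Sum of squared errors between modelled and 'measured' pressure."""
    return sum((P0 * math.exp(-t / (Rp * C)) - p) ** 2 for t, p in zip(ts, P_meas))

# Monte Carlo estimation: randomly sample (Rp, C) and keep the best pair.
best, best_err = None, float("inf")
for _ in range(20_000):
    Rp = random.uniform(0.5, 2.0)   # peripheral resistance (arbitrary units)
    C = random.uniform(0.5, 2.0)    # arterial compliance (arbitrary units)
    err = sse(Rp, C)
    if err < best_err:
        best, best_err = (Rp, C), err

tau_est = best[0] * best[1]
```

Note that only the product Rp·C is identifiable from the decay alone, which mirrors the paper's point that the arterial resistance must be fixed from flow data to obtain a unique compliance/peripheral-resistance pair.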
NASA Astrophysics Data System (ADS)
Samejima, Masaki; Akiyoshi, Masanori; Mitsukuni, Koshichiro; Komoda, Norihisa
We propose a business scenario evaluation method using a qualitative and quantitative hybrid model. In order to evaluate business factors with qualitative causal relations, we introduce statistical values based on propagation and combination of the effects of business factors by Monte Carlo simulation. In propagating an effect, we divide the range of each factor by landmarks and decide the effect on a destination node based on the divided ranges. In combining effects, we decide the effect of each arc using a contribution degree and sum all effects. Through application to practical models, it is confirmed that there are no differences between results obtained by quantitative relations and results obtained by the proposed method at a risk rate of 5%.
Recent developments in auxiliary-field quantum Monte Carlo methods for cold atoms
NASA Astrophysics Data System (ADS)
Shi, Hao; Rosenberg, Peter; Vitali, Ettore; Chiesa, Simone; Zhang, Shiwei
Exact calculations are performed on the two-dimensional strongly interacting, unpolarized, uniform Fermi gas with a zero-range attractive interaction. We describe recent advances in auxiliary-field quantum Monte Carlo techniques, which eliminate an infinite variance problem in the standard algorithm, and improve both acceptance ratio and efficiency. The new methods enable calculations on large enough lattices to reliably compute ground-state properties in the thermodynamic limit. An equation of state is obtained, with a parametrization provided, which can serve as a benchmark and allow accurate comparisons with experiments. The pressure, contact parameter, condensate fraction, and pairing gap will be presented. The same methods are also applied to obtain exact results on the two-dimensional strongly interacting Fermi gas in the presence of Rashba spin-orbit coupling (SOC), providing insights on the interplay between pairing and SOC. Supported by NSF, DOE, and the Simons Foundation.
Analysis of vibrational-translational energy transfer using the direct simulation Monte Carlo method
NASA Technical Reports Server (NTRS)
Boyd, Iain D.
1991-01-01
A new model is proposed for energy transfer between the vibrational and translational modes for use in the direct simulation Monte Carlo method (DSMC). The model modifies the Landau-Teller theory for a harmonic oscillator, and the transition rate is related to an experimental correlation for the vibrational relaxation time. Assessment of the model is made with respect to three different computations: relaxation in a heat bath, a one-dimensional shock wave, and hypersonic flow over a two-dimensional wedge. These studies verify that the model achieves detailed balance, and excellent agreement with experimental data is obtained in the shock wave calculation. The wedge flow computation reveals that the usual phenomenological method for simulating vibrational nonequilibrium in the DSMC technique predicts much higher vibrational temperatures in the wake region.
The Linked Neighbour List (LNL) method for fast off-lattice Monte Carlo simulations of fluids
NASA Astrophysics Data System (ADS)
Mazzeo, M. D.; Ricci, M.; Zannoni, C.
2010-03-01
We present a new algorithm, called linked neighbour list (LNL), useful to substantially speed up off-lattice Monte Carlo simulations of fluids by avoiding the computation of the molecular energy before every attempted move. We introduce a few variants of the LNL method targeted to minimise memory footprint or augment memory coherence and cache utilisation. Additionally, we present a few algorithms which drastically accelerate neighbour finding. We test our methods on the simulation of a dense off-lattice Gay-Berne fluid subjected to periodic boundary conditions observing a speedup factor of about 2.5 with respect to a well-coded implementation based on a conventional link-cell. We provide several implementation details of the different key data structures and algorithms used in this work.
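The conventional link-cell structure that LNL is benchmarked against can be sketched compactly: bin particles into cells at least as large as the interaction cutoff, so that a particle's neighbours are guaranteed to lie in its own or adjacent cells. A minimal two-dimensional sketch with periodic boundaries (illustrative parameters, not the Gay-Berne system of the paper):

```python
import random

random.seed(5)

# Random 2-D fluid configuration in a periodic box of side L.
L, rcut, N = 10.0, 1.5, 200
pts = [(random.uniform(0, L), random.uniform(0, L)) for _ in range(N)]

def dist2(a, b):
    # Minimum-image squared distance under periodic boundary conditions.
    dx = (a[0] - b[0] + L / 2) % L - L / 2
    dy = (a[1] - b[1] + L / 2) % L - L / 2
    return dx * dx + dy * dy

# Link-cell binning: cells no smaller than the cutoff, so neighbours of a
# particle lie in its own cell or the 8 adjacent cells.
nc = int(L // rcut)
cell = {}
for i, p in enumerate(pts):
    key = (int(p[0] / L * nc) % nc, int(p[1] / L * nc) % nc)
    cell.setdefault(key, []).append(i)

def neighbours(i):
    cx, cy = int(pts[i][0] / L * nc) % nc, int(pts[i][1] / L * nc) % nc
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in cell.get(((cx + dx) % nc, (cy + dy) % nc), []):
                if j != i and dist2(pts[i], pts[j]) < rcut * rcut:
                    out.append(j)
    return sorted(out)

# Brute-force reference for particle 0, to check the cell list is correct.
brute = sorted(j for j in range(N) if j != 0 and dist2(pts[0], pts[j]) < rcut * rcut)
```

The cell list makes neighbour finding local instead of O(N) per particle; LNL-style bookkeeping goes further by maintaining per-particle neighbour lists so the molecular energy need not be recomputed before every attempted move.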
A Variable Coefficient Method for Accurate Monte Carlo Simulation of Dynamic Asset Price
NASA Astrophysics Data System (ADS)
Li, Yiming; Hung, Chih-Young; Yu, Shao-Ming; Chiang, Su-Yun; Chiang, Yi-Hui; Cheng, Hui-Wen
2007-07-01
In this work, we propose an adaptive Monte Carlo (MC) simulation technique to compute the sample paths of the dynamic asset price. In contrast to conventional MC simulation with constant drift and volatility (μ,σ), our MC simulation is performed with variable-coefficient methods for (μ,σ) in the solution scheme, where the explored dynamic asset pricing model starts from the formulation of geometric Brownian motion. With the method of simultaneously updated (μ,σ), more than 5,000 runs of MC simulation are performed to fulfill the basic accuracy requirements of the large-scale computation and to suppress statistical variance. Daily changes of the stock market indices in Taiwan and Japan are investigated and analyzed.
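The variable-coefficient idea can be sketched directly from the geometric Brownian motion formulation: at each time step the log-price is advanced with the local (μ,σ) instead of constants. The drift and volatility schedules below are hypothetical stand-ins for the paper's simultaneously updated coefficients:

```python
import math
import random

random.seed(9)

# Geometric Brownian motion with time-varying drift and volatility.
S0, T, n_steps, n_paths = 100.0, 1.0, 250, 5000
dt = T / n_steps

def mu(t):     return 0.05 + 0.02 * math.sin(2 * math.pi * t)   # hypothetical drift
def sigma(t):  return 0.20 + 0.05 * math.cos(2 * math.pi * t)   # hypothetical volatility

finals = []
for _ in range(n_paths):
    S = S0
    for k in range(n_steps):
        t = k * dt
        z = random.gauss(0.0, 1.0)
        # Log-price update using the local (mu, sigma) for this step.
        S *= math.exp((mu(t) - 0.5 * sigma(t) ** 2) * dt
                      + sigma(t) * math.sqrt(dt) * z)
    finals.append(S)

mean_final = sum(finals) / n_paths
```

Because the sinusoidal parts integrate to zero over [0, 1], the Monte Carlo mean of the final price should sit near S0·e^0.05 ≈ 105.1, which gives a quick sanity check on the variable-coefficient scheme.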
Gong, Xingchu; Li, Yao; Chen, Huali; Qu, Haibin
2015-01-01
A design space approach was applied to optimize the extraction process of Danhong injection. Dry matter yield and the yields of five active ingredients were selected as process critical quality attributes (CQAs). Extraction number, extraction time, and the mass ratio of water and material (W/M ratio) were selected as critical process parameters (CPPs). Quadratic models between CPPs and CQAs were developed with determination coefficients higher than 0.94. Active ingredient yields and dry matter yield increased as the extraction number increased. Monte-Carlo simulation with models established using a stepwise regression method was applied to calculate the probability-based design space. Step length showed little effect on the calculation results. Higher simulation number led to results with lower dispersion. Data generated in a Monte Carlo simulation following a normal distribution led to a design space with a smaller size. An optimized calculation condition was obtained with 10,000 simulation times, 0.01 calculation step length, a significance level value of 0.35 for adding or removing terms in a stepwise regression, and a normal distribution for data generation. The design space with a probability higher than 0.95 to attain the CQA criteria was calculated and verified successfully. Normal operating ranges of 8.2-10 g/g of W/M ratio, 1.25-1.63 h of extraction time, and two extractions were recommended. The optimized calculation conditions can conveniently be used in design space development for other pharmaceutical processes. PMID:26020778
Bianchini, G.; Burgio, N.; Carta, M.; Peluso, V.; Fabrizio, V.; Ricci, L.
2012-07-01
The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of the ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French body CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium-target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic Uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation by deterministic (ERANOS French code) and Monte Carlo (MCNPX US code) calculations of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the two deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)
Applications of the Monte Carlo method in nuclear physics using the GEANT4 toolkit
Moralles, Mauricio; Guimaraes, Carla C.; Menezes, Mario O.; Bonifacio, Daniel A. B.; Okuno, Emico; Guimaraes, Valdir; Murata, Helio M.; Bottaro, Marcio
2009-06-03
The capabilities of the personal computers allow the application of Monte Carlo methods to simulate very complex problems that involve the transport of particles through matter. Among the several codes commonly employed in nuclear physics problems, the GEANT4 has received great attention in the last years, mainly due to its flexibility and possibility to be improved by the users. Differently from other Monte Carlo codes, GEANT4 is a toolkit written in object oriented language (C++) that includes the mathematical engine of several physical processes, which are suitable to be employed in the transport of practically all types of particles and heavy ions. GEANT4 has also several tools to define materials, geometry, sources of radiation, beams of particles, electromagnetic fields, and graphical visualization of the experimental setup. After a brief description of the GEANT4 toolkit, this presentation reports investigations carried out by our group that involve simulations in the areas of dosimetry, nuclear instrumentation and medical physics. The physical processes available for photons, electrons, positrons and heavy ions were used in these simulations.
Coherent-wave Monte Carlo method for simulating light propagation in tissue
NASA Astrophysics Data System (ADS)
Kraszewski, Maciej; Pluciński, Jerzy
2016-03-01
Simulating the propagation and scattering of coherent light in turbid media, such as biological tissues, is a complex problem. Numerical methods for solving the Helmholtz or wave equation (e.g. finite-difference or finite-element methods) require a large amount of computer memory and long computation times. This makes them impractical for simulating laser beam propagation into deep layers of tissue. Another group of methods, based on the radiative transfer equation, can simulate only the propagation of light averaged over the ensemble of turbid-medium realizations. This makes them unsuitable for simulating phenomena connected to the coherence properties of light. We propose a new method for simulating the propagation of coherent light (e.g. a laser beam) in biological tissue, which we call the Coherent-Wave Monte Carlo method. The method is based on direct computation of the optical interaction between scatterers inside the random medium, which reduces the memory and computation time required for simulation. We present the theoretical basis of the proposed method and its comparison with finite-difference methods for simulating light propagation in scattering media in the Rayleigh approximation regime.
Comparison of Monte Carlo collimator transport methods for photon treatment planning in radiotherapy
Schmidhalter, D.; Manser, P.; Frei, D.; Volken, W.; Fix, M. K.
2010-02-15
Purpose: The aim of this work was a Monte Carlo (MC) based investigation of the impact of different radiation transport methods in the collimators of a linear accelerator on photon beam characteristics, dose distributions, and efficiency. It is thereby investigated whether different simplifications in the radiation transport can be used in some clinical situations in order to save calculation time. Methods: Within the Swiss Monte Carlo Plan, a GUI-based framework for photon MC treatment planning, different MC methods are available for the radiation transport through the collimators [secondary jaws and multileaf collimator (MLC)]: EGSnrc (reference), VMC++, and Pin (an in-house developed MC code). Additional non-full transport methods were implemented in order to provide different complexity levels for the MC simulation: considering collimator attenuation only, considering Compton scatter only or just the first Compton process, and considering the collimators as totally absorbing. Furthermore, either a simple or an exact geometry of the collimators can be selected for the absorbing or attenuation method. Phase spaces directly above and dose distributions in a water phantom are analyzed for academic and clinical treatment fields using 6 and 15 MV beams, including intensity modulated radiation therapy with dynamic MLC. Results: For all MC transport methods, differences in the radial mean energy and radial energy fluence are within 1% inside the geometric field. Below the collimators, the energy fluence is underestimated for non-full MC transport methods, ranging from 5% for Compton to 100% for Absorbing. Gamma analysis using EGSnrc calculated doses as reference shows that the percentage of voxels fulfilling a 1%/1 mm criterion is at least 98% when using the VMC++, Compton, or first Compton transport methods. When using the methods Pin, Transmission, Flat-Transmission, Flat-Absorbing or Absorbing, the mean value of points fulfilling this criterion over all tested cases is 97
A modular method to handle multiple time-dependent quantities in Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Shin, J.; Perl, J.; Schümann, J.; Paganetti, H.; Faddegon, B. A.
2012-06-01
A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call ‘Time Features’. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc, takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), and then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel accompanied by beam current modulation produces a spread-out Bragg peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method.
Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
NASA Astrophysics Data System (ADS)
Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.
2013-12-01
We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.
Finite-Temperature Variational Monte Carlo Method for Strongly Correlated Electron Systems
NASA Astrophysics Data System (ADS)
Takai, Kensaku; Ido, Kota; Misawa, Takahiro; Yamaji, Youhei; Imada, Masatoshi
2016-03-01
A new computational method for finite-temperature properties of strongly correlated electrons is proposed by extending the variational Monte Carlo method originally developed for the ground state. The method is based on the path integral in the imaginary-time formulation, starting from the infinite-temperature state that is well approximated by a small number of certain random initial states. Lower temperatures are progressively reached by the imaginary-time evolution. The algorithm follows the framework of the quantum transfer matrix and finite-temperature Lanczos methods, but we extend them to treat much larger system sizes without the negative sign problem by optimizing the truncated Hilbert space on the basis of the time-dependent variational principle (TDVP). This optimization algorithm is equivalent to the stochastic reconfiguration (SR) method that has been frequently used for the ground state to optimally truncate the Hilbert space. The obtained finite-temperature states allow an interpretation based on the thermal pure quantum (TPQ) state instead of the conventional canonical-ensemble average. Our method is tested for the one- and two-dimensional Hubbard models and its accuracy and efficiency are demonstrated.
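The idea of approximating the infinite-temperature ensemble by random initial states and reaching lower temperatures by imaginary-time evolution can be illustrated on a toy 4x4 Hamiltonian with dense linear algebra. This is a stand-in for the paper's variational sampling, and the matrix is hypothetical:

```python
import numpy as np

# Toy 4x4 Hamiltonian (hypothetical two-site spin model; the paper treats Hubbard models).
H = np.array([[0.25, 0.0, 0.0, 0.0],
              [0.0, -0.25, 0.5, 0.0],
              [0.0, 0.5, -0.25, 0.0],
              [0.0, 0.0, 0.0, 0.25]])

def exact_energy(beta):
    """Canonical-ensemble energy, for reference."""
    evals = np.linalg.eigvalsh(H)
    w = np.exp(-beta * evals)
    return (w * evals).sum() / w.sum()

def tpq_energy(beta, n_samples=200, seed=1):
    """Thermal energy from random infinite-temperature states evolved in imaginary
    time: <psi|e^{-beta H/2} H e^{-beta H/2}|psi> averaged over random psi."""
    rng = np.random.default_rng(seed)
    evals, evecs = np.linalg.eigh(H)
    num = den = 0.0
    for _ in range(n_samples):
        psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)  # infinite-T random state
        c = np.abs(evecs.conj().T @ psi0) ** 2
        w = np.exp(-beta * evals) * c                         # imaginary-time weights
        num += (w * evals).sum()
        den += w.sum()
    return num / den

print(tpq_energy(2.0), exact_energy(2.0))
```

Already a small number of random states reproduces the canonical average, which is the thermal pure quantum (TPQ) interpretation the abstract invokes; the paper's contribution is doing this variationally for system sizes where exact diagonalization is impossible.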
NASA Technical Reports Server (NTRS)
Haviland, J. K.
1974-01-01
The results of two unrelated studies are reported. The first was an investigation of the formulation of the equations for non-uniform unsteady flows, by perturbation of an irrotational flow to obtain the linear Green's equation. The resulting integral equation was found to contain a kernel which could be expressed as the solution of the adjoint flow equation, a linear equation for small perturbations, but with non-constant coefficients determined by the steady flow conditions. It is believed that the non-uniform flow effects may prove important in transonic flutter, and that in such cases the use of doublet-type solutions of the wave equation would prove erroneous. The second task covered an initial investigation into the use of the Monte Carlo method for the solution of acoustical field problems. Computed results are given for a rectangular room problem, and for a problem involving a circular duct with a source located at the closed end.
Markov Chain Monte Carlo Sampling Methods for 1D Seismic and EM Data Inversion
Energy Science and Technology Software Center (ESTSC)
2008-09-22
This software provides several Markov chain Monte Carlo sampling methods for the Bayesian model developed for inverting 1D marine seismic and controlled source electromagnetic (CSEM) data. The current software can be used for individual inversion of seismic AVO and CSEM data and for joint inversion of both seismic and EM data sets. The structure of the software is very general and flexible, and it allows users to incorporate their own forward simulation codes and rock physics model codes easily into this software. Although the software was developed using the C and C++ computer languages, the user-supplied codes can be written in C, C++, or various versions of Fortran. The software provides clear interfaces for users to plug in their own codes. The output of this software is in a format that the free R package CODA can directly read to build MCMC objects.
Monte Carlo methods for neutrino transport in type-II supernovae
NASA Astrophysics Data System (ADS)
Janka, Hans-Thomas
Neutrinos play an important role in the type-II supernova scenario. Numerous approaches have been made in order to treat the generation and transport of neutrinos and the interactions between neutrinos and matter during stellar collapse and the shock propagation phase. However, all computationally fast methods have in common the fact that they cannot avoid simplifications in describing the interactions and, furthermore, have to use parameterizations in handling the Boltzmann transport equation. In order to provide an instrument for calibrating these treatments and for calculating neutrino spectra emitted from given stellar configurations, a Monte Carlo transport code was designed. Special attention was paid to an accurate computation of scattering kernels and source functions. Neutrino spectra for a hydrostatic stage of a 20 solar mass supernova simulation were generated and conclusions drawn concerning a late time revival of the stalled shock by neutrino heating.
A spectral analysis of the domain decomposed Monte Carlo method for linear systems
Slattery, S. R.; Wilson, P. P. H.; Evans, T. M.
2013-07-01
The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of stochastic histories from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem to test the models for symmetric operators. In general, the derived approximations show good agreement with measured computational results.
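For readers unfamiliar with Neumann-Ulam schemes, a minimal forward-walk variant (the paper analyzes the adjoint version) solving a small diagonally dominant system illustrates the random walks whose average length the analysis relates to the operator's spectrum. The system and relaxation parameter are hypothetical:

```python
import random

# Solve A x = b via a Neumann-Ulam random-walk scheme on x = H x + f,
# where H = I - omega*A must have spectral radius < 1 for the Neumann series to converge.
A = [[2.0, -1.0], [-1.0, 2.0]]
b = [1.0, 1.0]
omega = 0.4
n = len(A)
H = [[(1.0 if i == j else 0.0) - omega * A[i][j] for j in range(n)] for i in range(n)]
f = [omega * bi for bi in b]

def solve_component(i, n_walks=20000, seed=0):
    """Estimate x_i as the expected sum of weighted source terms f along a
    random walk that is absorbed with probability 1 - sum_k |H_jk| at each step."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        j, w = i, 1.0
        while True:
            total += w * f[j]
            r = rng.random()
            cum, nxt = 0.0, None
            for k in range(n):
                cum += abs(H[j][k])          # transition probability P_jk = |H_jk|
                if r < cum:
                    nxt = k
                    break
            if nxt is None:                   # absorbed
                break
            w *= H[j][nxt] / abs(H[j][nxt])   # weight carries the sign of H_jk
            j = nxt
    return total / n_walks

print(solve_component(0))  # the exact solution of this system is x = [1, 1]
```

The mean walk length here is set by the absorption probability, which is the quantity the paper ties to the operator eigenvalues and to parallel leakage between subdomains.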
Monte Carlo study of living polymers with the bond-fluctuation method
NASA Astrophysics Data System (ADS)
Rouault, Yannick; Milchev, Andrey
1995-06-01
The highly efficient bond-fluctuation method for Monte Carlo simulations of both static and dynamic properties of polymers is applied to a system of living polymers. Parallel to stochastic movements of monomers, which result in Rouse dynamics of the macromolecules, the polymer chains break, or associate at chain ends with other chains and single monomers, in the process of equilibrium polymerization. We study the changes in equilibrium properties, such as the molecular-weight distribution, average chain length, radius of gyration, and specific heat, with varying density and temperature of the system. The results of our numerical experiments indicate very good agreement with the recently suggested description in terms of the mean-field approximation. The coincidence of the specific heat maximum position at kBT=V/4 in both theory and simulation suggests the use of calorimetric measurements for the determination of the scission-recombination energy V in real experiments.
A Monte Carlo Method for Projecting Uncertainty in 2D Lagrangian Trajectories
NASA Astrophysics Data System (ADS)
Robel, A.; Lozier, S.; Gary, S. F.
2009-12-01
In this study, a novel method is proposed for modeling the propagation of uncertainty due to subgrid-scale processes through a Lagrangian trajectory advected by ocean surface velocities. The primary motivation and application is differentiating between active and passive trajectories for sea turtles as observed through satellite telemetry. A spatiotemporal launch box is centered on the time and place of actual launch and populated with launch points. Synthetic drifters are launched at each of these locations, adding, at each time step along the trajectory, Monte Carlo perturbations in velocity scaled to the natural variability of the velocity field. The resulting trajectory cloud provides a dynamically evolving density field of synthetic drifter locations that represent the projection of subgrid-scale uncertainty out in time. Subsequently, by relaunching synthetic drifters at points along the trajectory, plots are generated in a daisy chain configuration of the “most likely passive pathways” for the drifter.
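A minimal sketch of the launch-and-perturb procedure, with an idealized constant current standing in for real surface velocity fields (all parameter values hypothetical):

```python
import random

def trajectory_cloud(x0, y0, u, v, sigma_u, sigma_v,
                     n_drifters=500, n_steps=100, dt=1.0, seed=0):
    """Advect a cloud of synthetic drifters, adding at every time step a random
    velocity perturbation scaled to the local velocity variability (sigma_u, sigma_v)."""
    rng = random.Random(seed)
    cloud = []
    for _ in range(n_drifters):
        x, y = x0, y0
        for _ in range(n_steps):
            x += (u(x, y) + rng.gauss(0.0, sigma_u)) * dt
            y += (v(x, y) + rng.gauss(0.0, sigma_v)) * dt
        cloud.append((x, y))
    return cloud

# Idealized eastward surface current with weak subgrid-scale variability.
cloud = trajectory_cloud(0.0, 0.0, u=lambda x, y: 0.1, v=lambda x, y: 0.0,
                         sigma_u=0.02, sigma_v=0.02)
xs = [p[0] for p in cloud]
print(min(xs), max(xs))  # the spread of final positions is the projected uncertainty
```

In the paper the launch points are additionally spread over a spatiotemporal box around the observed launch; the density of the resulting cloud plays the role of the evolving uncertainty field.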
Electronic correlation effects in a fullerene molecule studied by the variational Monte Carlo method
Krivnov, V. Y.; Shamovsky, I. L. (Chemistry Department, University of West Indies, Mona Campus, St. Andrew, Kingston 7); Tornau, E. E.; Rosengren, A.
1994-10-15
Electron-correlation effects in the fullerene molecule and its ions are investigated in the framework of the Hubbard model. The variational Monte Carlo method and the Gutzwiller wave function are used. Most attention is paid to the case of intermediate interactions, but the strong-coupling limit, where the Hubbard model reduces to the antiferromagnetic Heisenberg model, is also considered for the fullerene molecule. In this case we obtain a very low variational ground state energy. Further, we have calculated the main spin correlation functions in the ground state. Only short-range order is found. The pairing energy of two electrons added to a fullerene molecule or to a fullerene ion is also calculated. Contrary to the results obtained by second-order perturbation theory, pair binding is not found.
NASA Astrophysics Data System (ADS)
King, Julian; Mortlock, Daniel; Webb, John; Murphy, Michael
2010-11-01
Recent attempts to constrain cosmological variation in the fine structure constant, α, using quasar absorption lines have yielded two statistical samples which initially appear to be inconsistent. One of these samples was subsequently demonstrated to not pass consistency tests; it appears that the optimisation algorithm used to fit the model to the spectra failed. Nevertheless, the results of the other hinge on the robustness of the spectral fitting program VPFIT, which has been tested through simulation but not through direct exploration of the likelihood function. We present the application of Markov Chain Monte Carlo (MCMC) methods to this problem, and demonstrate that VPFIT produces similar values and uncertainties for Δα/α, the fractional change in the fine structure constant, as our MCMC algorithm, and thus that VPFIT is reliable.
The Calculation of Thermal Conductivities by Three Dimensional Direct Simulation Monte Carlo Method.
Zhao, Xin-Peng; Li, Zeng-Yao; Liu, He; Tao, Wen-Quan
2015-04-01
The three-dimensional direct simulation Monte Carlo (DSMC) method with the variable soft sphere (VSS) collision model is implemented to solve the Boltzmann equation and to acquire the heat flux between two parallel plates (Fourier flow). The gaseous thermal conductivity of nitrogen is derived based on Fourier's law under the local equilibrium condition at temperatures from 270 to 1800 K and pressures from 0.5 to 100,000 Pa, and is compared with experimental data and with the Eucken relation from Chapman-Enskog (CE) theory. It is concluded that the present results are consistent with the experimental data but much higher than those from the Eucken relation, especially at high temperature. The contribution of the internal energy of the molecule to the gaseous thermal conductivity becomes significant with increasing temperature. PMID:26353582
An analysis of the convergence of the direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Galitzine, Cyril; Boyd, Iain D.
2015-05-01
In this article, a rigorous framework for the analysis of the convergence of the direct simulation Monte Carlo (DSMC) method is presented. It is applied to the simulation of two test cases: an axisymmetric jet at a Knudsen number of 0.01 and Mach number of 1 and a two-dimensional cylinder flow at a Knudsen of 0.05 and Mach 10. The rate of convergence of sampled quantities is found to be well predicted by an extended form of the Central Limit Theorem that takes into account the correlation of samples but requires the calculation of correlation spectra. A simplified analytical model that does not require correlation spectra is then constructed to model the effect of sample correlation. It is then used to obtain an a priori estimate of the convergence error.
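The key point, that correlated samples converge more slowly than the naive Central Limit Theorem predicts, can be demonstrated with an integrated autocorrelation time estimate on a synthetic AR(1) series standing in for correlated samples of a DSMC cell quantity (the series and its parameters are hypothetical):

```python
import random

def autocorr_time(x, max_lag=50):
    """Integrated autocorrelation time tau; the standard error of the mean is
    inflated by sqrt(tau) relative to the independent-sample CLT estimate."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    tau = 1.0
    for lag in range(1, max_lag):
        c = sum((x[i] - mean) * (x[i + lag] - mean)
                for i in range(n - lag)) / ((n - lag) * var)
        if c < 0.0:          # truncate once the estimate is noise-dominated
            break
        tau += 2.0 * c
    return tau

# AR(1) series with coefficient 0.9 mimics strongly correlated time samples.
rng = random.Random(3)
phi, x, series = 0.9, 0.0, []
for _ in range(20000):
    x = phi * x + rng.gauss(0.0, 1.0)
    series.append(x)

tau = autocorr_time(series)
n_eff = len(series) / tau
print(tau, n_eff)  # the effective sample count is far below the raw count
```

For this AR(1) process the theoretical value is tau = (1 + phi)/(1 - phi) = 19, so only about one sample in twenty carries independent information, which is the kind of correction the extended CLT in the paper supplies.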
Pozzi, Sara A; Downar, Thomas J; Padovani, Enrico; Clarke, Shaun D
2006-01-01
This work illustrates a methodology based on photon interrogation and coincidence counting for determining the characteristics of fissile material. The feasibility of the proposed methods was demonstrated using a Monte Carlo code system to simulate the full statistics of the neutron and photon field generated by the photon interrogation of fissile and non-fissile materials. Time correlation functions between detectors were simulated for photon beam-on and photon beam-off operation. In the latter case, the correlation signal is obtained via delayed neutrons from photofission, which induce further fission chains in the nuclear material. An analysis methodology was demonstrated based on features selected from the simulated correlation functions and on the use of artificial neural networks. We show that the methodology can reliably differentiate between highly enriched uranium and plutonium. Furthermore, the mass of the material can be determined with a relative error of about 12%. Keywords: MCNP, MCNP-PoliMi, Artificial neural network, Correlation measurement, Photofission
An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks
NASA Technical Reports Server (NTRS)
Kim, Stacy
2011-01-01
Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observation and for understanding the conditions for planet formation and migration. However, certain areas of the disk, such as the optically thick disk interior, are under-sampled, while others, such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet, are of particular interest. To improve the sampling, photon packets can be preferentially scattered and reemitted toward the preferred locations at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracies and calculation speeds.
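The weight-compensation idea can be illustrated with a one-dimensional toy problem: packets are preferentially launched into a narrow cone, and each carries weight equal to the true pdf over the biased pdf so the estimate stays unbiased. This is a schematic of the general technique, not the scheme of the report; the mixing-fraction parametrization is an assumption:

```python
import random

def cone_fraction(mu_min, n_packets=200000, s=0.0, seed=7):
    """Estimate the fraction of isotropically emitted packets with direction cosine
    mu > mu_min. With mixing fraction s > 0, packets are preferentially launched
    into the cone and carry weight (true pdf)/(biased pdf) so the mean is conserved."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_packets):
        if rng.random() < s:
            mu = mu_min + rng.random() * (1.0 - mu_min)   # forced into the cone
        else:
            mu = -1.0 + 2.0 * rng.random()                # isotropic draw
        if mu >= mu_min:
            q = s / (1.0 - mu_min) + (1.0 - s) / 2.0      # biased pdf inside the cone
            total += 0.5 / q                              # weight = p(mu)/q(mu), p = 1/2
    return total / n_packets

exact = (1.0 - 0.95) / 2.0  # analytic answer for mu_min = 0.95
print(cone_fraction(0.95, s=0.0), cone_fraction(0.95, s=0.5), exact)
```

Both the biased and unbiased estimators converge to the same answer, but the biased one places far more packets in the region of interest, which is exactly the trade the abstract describes for the disk interior and snow line.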
Parallel domain decomposition methods in fluid models with Monte Carlo transport
Alme, H.J.; Rodrigues, G.H.; Zimmerman, G.B.
1996-12-01
To examine domain decomposition in a coupled Monte Carlo-finite element calculation, it is important to use a domain decomposition that is suitable for the individual models. We have developed a code that simulates a Monte Carlo calculation on a massively parallel processor. This code is used to examine the load-balancing behavior of three domain decompositions for a Monte Carlo calculation. Results are presented.
Multi-Physics Markov Chain Monte Carlo Methods for Subsurface Flows
NASA Astrophysics Data System (ADS)
Rigelo, J.; Ginting, V.; Rahunanthan, A.; Pereira, F.
2014-12-01
For CO2 sequestration in deep saline aquifers, contaminant transport in subsurface, and oil or gas recovery, we often need to forecast flow patterns. Subsurface characterization is a critical and challenging step in flow forecasting. To characterize subsurface properties we establish a statistical description of the subsurface properties that is conditioned on existing dynamic and static data. A Markov Chain Monte Carlo (MCMC) algorithm is used in a Bayesian statistical description to reconstruct the spatial distribution of rock permeability and porosity. The MCMC algorithm requires repeatedly solving a set of nonlinear partial differential equations describing displacement of fluids in porous media for different values of permeability and porosity. The time needed for the generation of a reliable MCMC chain using the algorithm can be too long to be practical for flow forecasting. In this work we develop fast and effective computational methods for generating MCMC chains in the Bayesian framework for the subsurface characterization. Our strategy consists of constructing a family of computationally inexpensive preconditioners based on simpler physics as well as on surrogate models such that the number of fine-grid simulations is drastically reduced in the generated MCMC chains. In particular, we introduce a huff-puff technique as a screening step in a three-stage multi-physics MCMC algorithm to reduce the number of expensive final stage simulations. The huff-puff technique in the algorithm enables a better characterization of subsurface near wells. We assess the quality of the proposed multi-physics MCMC methods by considering Monte Carlo simulations for forecasting oil production in an oil reservoir.
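The screening idea behind such multi-stage algorithms is that of delayed-acceptance Metropolis: a cheap surrogate filters proposals before the expensive model runs, and a second acceptance test corrects for the surrogate. A one-dimensional sketch with Gaussian stand-ins for the two posteriors (all densities and parameters hypothetical, not the paper's three-stage algorithm):

```python
import math
import random

def two_stage_mcmc(log_post_cheap, log_post_fine, x0, n_iter=5000, step=0.5, seed=0):
    """Delayed-acceptance Metropolis: a proposal must first pass a screening test
    on an inexpensive surrogate posterior before the expensive fine model is run;
    stage 2 corrects for the surrogate so the fine posterior is targeted exactly."""
    rng = random.Random(seed)
    x, lp_c, lp_f = x0, log_post_cheap(x0), log_post_fine(x0)
    chain, fine_evals = [], 0
    for _ in range(n_iter):
        y = x + rng.gauss(0.0, step)
        lq_c = log_post_cheap(y)
        if rng.random() < math.exp(min(0.0, lq_c - lp_c)):   # stage 1: cheap screening
            fine_evals += 1
            lq_f = log_post_fine(y)
            a2 = math.exp(min(0.0, (lq_f - lp_f) - (lq_c - lp_c)))
            if rng.random() < a2:                            # stage 2: surrogate correction
                x, lp_c, lp_f = y, lq_c, lq_f
        chain.append(x)
    return chain, fine_evals

# Hypothetical example: the fine posterior is N(1, 1), the surrogate a shifted copy.
chain, fine_evals = two_stage_mcmc(lambda x: -0.5 * (x - 0.8) ** 2,
                                   lambda x: -0.5 * (x - 1.0) ** 2, x0=0.0)
print(sum(chain) / len(chain), fine_evals)
```

The chain still samples the fine posterior, but the expensive model is only evaluated for proposals that survive the cheap test, which is the source of the speed-up in the multi-physics setting.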
Application of a Monte Carlo method for modeling debris flow run-out
NASA Astrophysics Data System (ADS)
Luna, B. Quan; Cepeda, J.; Stumpf, A.; van Westen, C. J.; Malet, J. P.; van Asch, T. W. J.
2012-04-01
A probabilistic framework based on a Monte Carlo method for the modeling of debris flow hazards is presented. The framework is based on a dynamic model, which is combined with an explicit representation of the different parameter uncertainties. The probability distribution of these parameters is determined from an extensive database of back-calibrated past events compiled from different authors. The uncertainty in these inputs can be simulated and used to increase confidence in certain extreme run-out distances. In the Monte Carlo procedure, the input parameters of the numerical models simulating propagation and stoppage of debris flows are randomly selected. Model runs are performed using the randomly generated input values. This allows estimating the probability density function of the output variables characterizing the destructive power of debris flow (for instance depth, velocities and impact pressures) at any point along the path. To demonstrate the implementation of this method, a continuum two-dimensional dynamic simulation model that solves the conservation equations of mass and momentum was applied (MassMov2D). This general methodology facilitates the consistent combination of physical models with the available observations. The probabilistic model presented can be considered a framework to accommodate any existing one- or two-dimensional dynamic model. The resulting probabilistic spatial model can serve as a basis for hazard mapping and spatial risk assessment. The outlined procedure provides a useful way for experts to produce hazard or risk maps for the typical case where historical records are either poorly documented or even completely lacking, as well as to derive confidence limits on the proposed zoning.
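The general procedure (sample uncertain inputs from back-analysis distributions, run the run-out model, read off exceedance statistics) can be sketched with a simple energy-line relation standing in for the MassMov2D dynamic model. The travel-angle distribution and drop height below are hypothetical:

```python
import math
import random

def runout_distribution(drop_height, n_runs=50000, seed=42):
    """Monte Carlo run-out: sample the uncertain travel-angle (energy-line)
    parameter from a hypothetical back-analysis distribution and collect the
    resulting horizontal run-out lengths."""
    rng = random.Random(seed)
    lengths = []
    for _ in range(n_runs):
        travel_angle = rng.gauss(11.0, 2.0)      # degrees, hypothetical calibration
        travel_angle = max(travel_angle, 3.0)    # guard against unphysical draws
        lengths.append(drop_height / math.tan(math.radians(travel_angle)))
    return sorted(lengths)

lengths = runout_distribution(drop_height=200.0)
median = lengths[len(lengths) // 2]
p90 = lengths[int(0.9 * len(lengths))]   # run-out exceeded by only 10% of simulations
print(median, p90)
```

In the paper the same sampling loop drives a full two-dimensional dynamic model, so the output percentiles become maps of depth, velocity and impact pressure rather than a single run-out length.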
Armas-Pérez, Julio C; Hernández-Ortiz, Juan P; de Pablo, Juan J
2015-12-28
A theoretically informed Monte Carlo method is proposed for Monte Carlo simulation of liquid crystals on the basis of theoretical representations in terms of coarse-grained free energy functionals. The free energy functional is described in the framework of the Landau-de Gennes formalism. A piecewise finite element discretization is used to approximate the alignment field, thereby providing an excellent geometrical representation of curved interfaces and accurate integration of the free energy. The method is suitable for situations where the free energy functional includes highly non-linear terms, including chirality or high-order deformation modes. The validity of the method is established by comparing the results of Monte Carlo simulations to traditional Ginzburg-Landau minimizations of the free energy using a finite difference scheme, and its usefulness is demonstrated in the context of simulations of chiral liquid crystal droplets with and without nanoparticle inclusions. PMID:26723642
A deterministic alternative to the full configuration interaction quantum Monte Carlo method
NASA Astrophysics Data System (ADS)
Tubman, Norm M.; Lee, Joonho; Takeshita, Tyler Y.; Head-Gordon, Martin; Whaley, K. Birgitta
2016-07-01
Development of exponentially scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, is a useful algorithm that allows exact diagonalization through stochastically sampling determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, along with a stochastic projected wave function, to find the important parts of Hilbert space. However, the stochastic representation of the wave function is not required to search Hilbert space efficiently, and here we describe a highly efficient deterministic method that can achieve chemical accuracy for a wide range of systems, including the difficult Cr2 molecule. We demonstrate for systems like Cr2 that such calculations can be performed in just a few CPU hours, which makes this one of the most efficient and accurate methods that can attain chemical accuracy for strongly correlated systems. In addition, our method allows efficient calculation of excited-state energies, which we illustrate with benchmark results for the excited states of C2.
Solution of deterministic-stochastic epidemic models by dynamical Monte Carlo method
NASA Astrophysics Data System (ADS)
Aièllo, O. E.; Haas, V. J.; daSilva, M. A. A.; Caliri, A.
2000-07-01
This work is concerned with the dynamical Monte Carlo (MC) method and its application to models originally formulated in a continuous-deterministic approach. Specifically, a susceptible-infected-removed-susceptible (SIRS) model is used in order to analyze aspects of the dynamical MC algorithm and to demonstrate its application in epidemic contexts. We first examine two known approaches to the dynamical interpretation of the MC method and follow with the application of one of them to the SIRS model. The working method chosen is based on the Poisson process, where a hierarchy of events, properly calculated waiting times between events, and independence of the simulated events are the basic requirements. To verify the consistency of the method, some preliminary MC results are compared against exact steady-state solutions and other general numerical results (provided by the Runge-Kutta method): good agreement is found. Finally, a space-dependent extension of the SIRS model is introduced and treated by MC. The results are interpreted in accordance with aspects of the herd-immunity concept.
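A minimal Poisson-process (Gillespie-type) realization of the well-mixed SIRS model, with the waiting time between events drawn from the exponential distribution set by the total rate; the rate parameters are hypothetical:

```python
import random

def sirs_gillespie(S, I, R, beta, gamma, xi, t_end, seed=0):
    """Dynamical MC realization of the SIRS model: at each step the waiting time
    is exponential in the total rate, and the event (infection, removal, or loss
    of immunity) is chosen with probability proportional to its own rate."""
    rng = random.Random(seed)
    N, t = S + I + R, 0.0
    while t < t_end and I > 0:
        rates = [beta * S * I / N, gamma * I, xi * R]
        total = sum(rates)
        t += rng.expovariate(total)          # waiting time between events
        u = rng.random() * total
        if u < rates[0]:
            S, I = S - 1, I + 1              # infection: S -> I
        elif u < rates[0] + rates[1]:
            I, R = I - 1, R + 1              # removal: I -> R
        else:
            R, S = R - 1, S + 1              # loss of immunity: R -> S
    return S, I, R

# Hypothetical parameters with basic reproduction number beta/gamma = 3.
S, I, R = sirs_gillespie(990, 10, 0, beta=0.3, gamma=0.1, xi=0.05, t_end=200.0)
print(S, I, R)
```

The three requirements named in the abstract are visible directly in the loop: the event hierarchy is the `rates` list, the waiting time comes from `expovariate`, and each draw is independent of the previous ones.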
A First-Passage Kinetic Monte Carlo method for reaction–drift–diffusion processes
Mauro, Ava J.; Sigurdsson, Jon Karl; Shrake, Justin; Atzberger, Paul J.; Isaacson, Samuel A.
2014-02-15
Stochastic reaction–diffusion models are now a popular tool for studying physical systems in which both the explicit diffusion of molecules and noise in the chemical reaction process play important roles. The Smoluchowski diffusion-limited reaction model (SDLR) is one of several that have been used to study biological systems. Exact realizations of the underlying stochastic processes described by the SDLR model can be generated by the recently proposed First-Passage Kinetic Monte Carlo (FPKMC) method. This exactness relies on sampling analytical solutions to one and two-body diffusion equations in simplified protective domains. In this work we extend the FPKMC to allow for drift arising from fixed, background potentials. As the corresponding Fokker–Planck equations that describe the motion of each molecule can no longer be solved analytically, we develop a hybrid method that discretizes the protective domains. The discretization is chosen so that the drift–diffusion of each molecule within its protective domain is approximated by a continuous-time random walk on a lattice. New lattices are defined dynamically as the protective domains are updated, hence we will refer to our method as Dynamic Lattice FPKMC or DL-FPKMC. We focus primarily on the one-dimensional case in this manuscript, and demonstrate the numerical convergence and accuracy of our method in this case for both smooth and discontinuous potentials. We also present applications of our method, which illustrate the impact of drift on reaction kinetics.
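The protective-domain discretization can be illustrated with hop rates obtained from an upwind discretization of the drift-diffusion operator; for constant drift, the lattice walk's exit-side probability can be checked against the known continuum result. This sketch omits the exponential waiting times, which matter for kinetics but not for the exit side, and all parameter values are assumptions:

```python
import math
import random

def exit_right_prob(v, D, L, n_cells=40, n_walkers=5000, seed=5):
    """Lattice random walk whose hop rates come from an upwind discretization of
    drift-diffusion (a DL-FPKMC-style approximation inside one protective domain):
    probability of leaving through the right boundary, starting from the midpoint."""
    rng = random.Random(seed)
    h = L / n_cells
    r_right = D / h ** 2 + max(v, 0.0) / h   # upwind rates stay non-negative
    r_left = D / h ** 2 + max(-v, 0.0) / h
    p_right = r_right / (r_right + r_left)
    hits = 0
    for _ in range(n_walkers):
        i = n_cells // 2
        while 0 < i < n_cells:
            i += 1 if rng.random() < p_right else -1
        hits += i == n_cells
    return hits / n_walkers

v, D, L = 1.0, 1.0, 1.0
# Continuum splitting probability for constant drift on [0, L], started at L/2.
exact = (1.0 - math.exp(-v * L / (2.0 * D))) / (1.0 - math.exp(-v * L / D))
print(exit_right_prob(v, D, L), exact)
```

Refining the lattice (larger `n_cells`) drives the walk's statistics toward the continuum Fokker-Planck solution, which is the convergence behavior the paper demonstrates for its dynamic lattices.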
Zabihzadeh, Mansour; Ay, Mohammad Reza; Allahverdi, Mahmoud; Mesbahi, Asghar; Mahdavi, Seyed Rabee; Shahriari, Majid
2009-07-01
Despite all the advantages associated with high-energy radiotherapy in improving the therapeutic gain, the production of photoneutrons via the interaction of high-energy photons with high atomic number (Z) materials increases the undesired dose to the patient and staff. Owing to the limitations and complications of experimental neutron dosimetry in a mixed beam environment of photons and neutrons, Monte Carlo (MC) simulation is the gold standard method for the calculation of photoneutron contamination. On the other hand, the complexity of the treatment head makes the MC simulation more difficult and time-consuming. In this study, the possibility of using a simplified MC model for the simulation of the treatment head has been investigated using the MCNP4C general purpose MC code. As part of a comparative assessment strategy, the fluence, average energy and dose equivalent of photoneutrons were estimated and compared with other studies for several fields and energies at different points in the treatment room and maze. The mean energy of photoneutrons was 0.17, 0.19 and 0.2 MeV at the patient plane for 10, 15 and 18 MeV, respectively. The calculated values differed, respectively, by a factor of 1.4, 0.7 and 0.61 compared with the reported measured data for 10, 15 and 18 MeV. Our simulation results in the maze showed that the neutron dose equivalent is attenuated by a factor of 10 for every 4.6 m of maze length, while the related factor from the Kersey analytical method is 5 m. The neutron dose equivalent was 4.1 mSv Gy(-1) at the isocentre and decreased to 0.79 mSv Gy(-1) at a distance of 100 cm away from the isocentre for a 40 x 40 cm(2) field. There is good agreement between the data calculated using the simplified model in this study and measurements. Considering the reported high uncertainties (up to 50%) in experimental neutron dosimetry, it can be concluded that the simplified model can be used as a useful tool for estimation of photoneutron contamination associated with high-energy photon radiotherapy. PMID
The many-body Wigner Monte Carlo method for time-dependent ab-initio quantum simulations
Sellier, J. M.; Dimov, I.
2014-09-15
The aim of ab-initio approaches is the simulation of many-body quantum systems from the first principles of quantum mechanics. These methods are traditionally based on the many-body Schrödinger equation, which represents a formidable mathematical challenge. In this paper, we introduce the many-body Wigner Monte Carlo method in the context of distinguishable particles and in the absence of spin-dependent effects. Despite these restrictions, the method has several advantages. First of all, the Wigner formalism is intuitive, as it is based on the concept of a quasi-distribution function. Secondly, the Monte Carlo numerical approach allows a scalability on parallel machines that is practically unachievable by means of other techniques based on finite difference or finite element methods. Finally, this method allows time-dependent ab-initio simulations of strongly correlated quantum systems. In order to validate our many-body Wigner Monte Carlo method, as a case study we simulate a relatively simple system consisting of two particles in several different situations. We first start from two non-interacting free Gaussian wave packets. We then proceed with the inclusion of an external potential barrier, and we conclude by simulating two entangled (i.e. correlated) particles. The results show how, in the case of negligible spin-dependent effects, the many-body Wigner Monte Carlo method provides an efficient and reliable tool to study the time-dependent evolution of quantum systems composed of distinguishable particles.
Uniform-acceptance force-bias Monte Carlo method with time scale to study solid-state diffusion
NASA Astrophysics Data System (ADS)
Mees, Maarten J.; Pourtois, Geoffrey; Neyts, Erik C.; Thijsse, Barend J.; Stesmans, André
2012-04-01
Monte Carlo (MC) methods have a long-standing history as partners of molecular dynamics (MD) to simulate the evolution of materials at the atomic scale. Among these techniques, the uniform-acceptance force-bias Monte Carlo (UFMC) method [G. Dereli, Mol. Simul. 8, 351 (1992)] has recently attracted attention [M. Timonova et al., Phys. Rev. B 81, 144107 (2010)] thanks to its apparent capacity to simulate physical processes in a reduced number of iterations compared to classical MD methods. The origin of this efficiency remains, however, unclear. In this work we derive a UFMC method starting from basic thermodynamic principles, which leads to an intuitive and unambiguous formalism. The approach includes a statistically relevant time step per Monte Carlo iteration, showing a significant speed-up compared to MD simulations. This time-stamped force-bias Monte Carlo (tfMC) formalism is tested on both simple one-dimensional and three-dimensional systems. Both test cases give excellent results in agreement with analytical solutions and literature reports. The inclusion of a time scale, the simplicity of the method, and the enhancement of the time step compared to classical MD methods make this method very appealing for studying the dynamics of many-particle systems.
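The flavour of a uniform-acceptance force-bias step can be conveyed in one dimension: draw each displacement from a force-tilted distribution on [-d, d] and accept it unconditionally. This is a hedged sketch with illustrative parameters, not the tfMC formalism with its derived time step:

```python
import math, random

# 1-D uniform-acceptance force-bias MC: trial displacements dx in [-d, d] are
# drawn with probability proportional to exp(beta*f(x)*dx/2) and always
# accepted, so moves are biased along the instantaneous force instead of
# being filtered by a Metropolis test.

def fbmc_step(x, force, beta, d, rng):
    g = 0.5 * beta * force(x)               # bias strength gamma = beta*f/2
    u = rng.random()
    if abs(g * d) < 1e-12:                  # vanishing force: uniform move
        return x + (2.0 * u - 1.0) * d
    lo, hi = math.exp(-g * d), math.exp(g * d)
    return x + math.log(lo + u * (hi - lo)) / g   # inverse-CDF sampling

def average_position(n_steps, beta=1.0, k=1.0, d=0.2, seed=1):
    rng = random.Random(seed)
    force = lambda x: -k * x                # harmonic well V(x) = k*x**2/2
    x, total = 1.0, 0.0
    for _ in range(n_steps):
        x = fbmc_step(x, force, beta, d, rng)
        total += x
    return total / n_steps

avg = average_position(200000)  # biased walk relaxes toward the minimum at 0
```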
An energy transfer method for 4D Monte Carlo dose calculation
Siebers, Jeffrey V.; Zhong, Hualiang
2008-01-01
This article presents a new method for four-dimensional Monte Carlo dose calculations which properly addresses dose mapping for deforming anatomy. The method, called the energy transfer method (ETM), separates the particle transport and particle scoring geometries: Particle transport takes place in the typical rectilinear coordinate system of the source image, while energy deposition scoring takes place in a desired reference image via use of deformable image registration. Dose is the energy deposited per unit mass in the reference image. ETM has been implemented into DOSXYZnrc and compared with a conventional dose interpolation method (DIM) on deformable phantoms. For voxels whose contents merge in the deforming phantom, the doses calculated by ETM are exactly the same as an analytical solution, contrasting to the DIM which has an average 1.1% dose discrepancy in the beam direction with a maximum error of 24.9% found in the penumbra of a 6 MV beam. The DIM error observed persists even if voxel subdivision is used. The ETM is computationally efficient and will be useful for 4D dose addition and benchmarking alternative 4D dose addition algorithms. PMID:18841862
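The core of ETM as described here (score energy in the reference image through the deformation map, then divide by the reference-voxel mass) can be sketched in one dimension. The deformation map and all numbers below are invented for illustration:

```python
# Toy 1-D energy-transfer scoring: energy deposited during transport in the
# source geometry is mapped through the registration into the reference image
# and divided by the reference-voxel mass to give dose.

def etm_dose(depositions, deform, ref_edges, ref_masses_g):
    """depositions: list of (position, energy_MeV) in the source geometry.
    deform: maps a source position into the reference geometry."""
    dose = [0.0] * (len(ref_edges) - 1)
    for x_src, e_mev in depositions:
        x_ref = deform(x_src)
        for i in range(len(dose)):                  # locate reference voxel
            if ref_edges[i] <= x_ref < ref_edges[i + 1]:
                dose[i] += e_mev / ref_masses_g[i]  # MeV per gram
                break
    return dose

# Two source-region depositions whose contents merge into reference voxel [0, 1):
deps = [(0.25, 1.0), (0.75, 1.0)]
squeeze = lambda x: 0.5 * x       # hypothetical deformation: 2:1 compression
d = etm_dose(deps, squeeze, ref_edges=[0.0, 1.0, 2.0], ref_masses_g=[2.0, 1.0])
# Both depositions land in voxel 0: dose = (1.0 + 1.0) / 2.0 = 1.0 MeV/g
```

Because energy and mass are accumulated in the reference frame, merging voxels pose no problem, which is exactly the case where dose interpolation breaks down.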
Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.
2008-06-01
An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near continuum range. A post-processing procedure called DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near equilibrium flows (DREAM-I) or the instantaneous particle data output by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over the single-processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensemble-averaged runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.
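The reported 2.5-3.3x scatter reduction from 10 ensembled runs is consistent with the sqrt(N) behaviour of averaging independent samples (sqrt(10) is about 3.16). A pure-statistics illustration, not a DSMC calculation:

```python
import random

# Compare the scatter of one "noisy run" against 10-run ensemble averages.

def scatter(values):
    m = sum(values) / len(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

rng = random.Random(42)
single = [rng.gauss(0.0, 1.0) for _ in range(5000)]        # one noisy run
ensemble = [sum(rng.gauss(0.0, 1.0) for _ in range(10)) / 10.0
            for _ in range(5000)]                          # 10-run averages
reduction = scatter(single) / scatter(ensemble)            # close to sqrt(10)
```

In real DSMC data the runs are not perfectly independent, which is one plausible reason the observed reduction (2.5-3.3) brackets rather than exactly matches sqrt(10).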
Low-Density Nozzle Flow by the Direct Simulation Monte Carlo and Continuum Methods
NASA Technical Reports Server (NTRS)
Chung, Chang-Hong; Kim, Sku C.; Stubbs, Robert M.; Dewitt, Kenneth J.
1994-01-01
Two different approaches, the direct simulation Monte Carlo (DSMC) method, based on molecular gasdynamics, and a finite-volume approximation of the Navier-Stokes equations, based on continuum gasdynamics, are employed in the analysis of a low-density gas flow in a small converging-diverging nozzle. The fluid experiences various flow regimes, including continuum, slip, transition, and free-molecular. Results from the two numerical methods are compared with Rothe's experimental data, in which density and rotational temperature variations along the centerline and at various locations inside a low-density nozzle were measured by the electron-beam fluorescence technique. The continuum approach showed good agreement with the experimental data as far as density is concerned. The results from the DSMC method showed good agreement with the experimental data in both density and rotational temperature. It is also shown that the simulation parameters, such as the gas/surface interaction model, the energy exchange model between rotational and translational modes, and the viscosity-temperature exponent, have substantial effects on the results of the DSMC method.
Uncertainty Quantification of Prompt Fission Neutron Spectra Using the Unified Monte Carlo Method
NASA Astrophysics Data System (ADS)
Rising, M. E.; Talou, P.; Prinja, A. K.
2014-04-01
In the ENDF/B-VII.1 nuclear data library, the existing covariance evaluations of the prompt fission neutron spectra (PFNS) were computed by combining the available experimental differential data with theoretical model calculations, relying on the use of a first-order linear Bayesian approach, the Kalman filter. This approach assumes that the theoretical model response to changes in input model parameters is linear about the a priori central values. While the Unified Monte Carlo (UMC) method remains a Bayesian approach, like the Kalman filter, it does not make any assumption about the linearity of the model response or the shape of the a posteriori distribution of the parameters. By sampling from a distribution centered about the a priori model parameters, the UMC method computes the moments of the a posteriori parameter distribution. As the number of samples increases, the statistical noise in the computed a posteriori moments decreases and an appropriately converged solution corresponding to the true mean of the a posteriori PDF results. The UMC method has been successfully implemented using both uniform and Gaussian sampling distributions and has been used for the evaluation of the PFNS and its associated uncertainties. While many of the UMC results are similar to the first-order Kalman filter results, significant differences appear when experimental data are excluded from the evaluation process. When experimental data are included, a few small nonlinearities are present in the high outgoing energy tail of the PFNS.
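The UMC idea (sample about the prior, weight by the likelihood, accumulate posterior moments) can be checked on a one-parameter Gaussian toy problem where the conjugate posterior is known analytically. Everything below is illustrative and unrelated to the actual PFNS evaluation:

```python
import math, random

# Weighted-sample estimate of a posteriori moments: draw parameters from a
# distribution centred on the prior, weight each draw by the likelihood of
# the "data", and accumulate the posterior mean and variance.

def umc_posterior(prior_mean, prior_sd, data, data_sd, n=200000, seed=3):
    rng = random.Random(seed)
    wsum = wx = wx2 = 0.0
    for _ in range(n):
        theta = rng.gauss(prior_mean, prior_sd)       # prior-centred sampling
        w = math.exp(-0.5 * ((data - theta) / data_sd) ** 2)   # likelihood
        wsum += w
        wx += w * theta
        wx2 += w * theta * theta
    mean = wx / wsum
    return mean, wx2 / wsum - mean * mean             # posterior mean, variance

mean, var = umc_posterior(prior_mean=0.0, prior_sd=1.0, data=1.0, data_sd=1.0)
# Conjugate-Gaussian reference: posterior mean 0.5, posterior variance 0.5.
```

Because no linearization is involved, the same estimator works unchanged when the model response to the parameters is nonlinear, which is the point of UMC relative to the Kalman filter.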
Implementation of the probability table method in a continuous-energy Monte Carlo code system
Sutton, T.M.; Brown, F.B.
1998-10-01
RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5.
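A probability-table lookup can be sketched in a few lines; the band boundaries and cross sections below are invented, not derived from ENDF/B parameters:

```python
import bisect, random

# Probability-table sampling in the unresolved resonance range: each band
# carries a cumulative probability and a total cross section, and a uniform
# random number selects the band used for the current neutron flight.

def sample_xs(table, rng):
    """table: list of (cumulative_probability, cross_section_barns)."""
    cdf = [c for c, _ in table]
    return table[bisect.bisect_left(cdf, rng.random())][1]

# A hypothetical three-band table: 20% low, 60% intermediate, 20% high.
table = [(0.2, 1.0), (0.8, 10.0), (1.0, 50.0)]
rng = random.Random(7)
mean_xs = sum(sample_xs(table, rng) for _ in range(100000)) / 100000
# The dilute-average value this replaces would be 0.2*1 + 0.6*10 + 0.2*50
# = 16.2 barns; the sampled mean converges to the same number, but each
# flight now sees a fluctuating cross section, capturing self-shielding.
```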
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2014-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
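The FW-CADIS chain (forward estimate, forward-weighted adjoint source, weight windows from the adjoint) can be caricatured on a 1-D attenuation toy. This is a heavily simplified sketch under stated assumptions, not a deterministic transport solve:

```python
import math

# Schematic FW-CADIS weighting chain. The exponential flux and the stand-in
# "adjoint solve" are illustrative assumptions: a forward flux estimate phi
# defines the adjoint source 1/phi, and weight-window targets are taken
# inversely proportional to the adjoint function so that the Monte Carlo
# particle density stays roughly uniform over the tally mesh.

sigma = 0.5                                  # attenuation coefficient (1/cm)
xs = [float(i) for i in range(10)]           # tally-mesh positions (cm)
phi = [math.exp(-sigma * x) for x in xs]     # forward deterministic estimate
adj_src = [1.0 / p for p in phi]             # FW-CADIS adjoint source ~ 1/phi
adjoint = adj_src                            # stand-in: adjoint tracks its source
ww_center = [1.0 / a for a in adjoint]       # weight-window centres ~ 1/adjoint
# Deep positions get low weight windows, so particles travelling deep are
# split into many low-weight copies, flattening the relative uncertainty.
```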
Hybrid Monte Carlo-Diffusion Method For Light Propagation in Tissue With a Low-Scattering Region
NASA Astrophysics Data System (ADS)
Hayashi, Toshiyuki; Kashio, Yoshihiko; Okada, Eiji
2003-06-01
The heterogeneity of the tissues in a head, especially the low-scattering cerebrospinal fluid (CSF) layer surrounding the brain, has previously been shown to strongly affect light propagation in the brain. The radiosity-diffusion method, in which the light propagation in the CSF layer is assumed to obey the radiosity theory, has been employed to predict the light propagation in head models. Although the CSF layer is assumed to be a nonscattering region in the radiosity-diffusion method, fine arachnoid trabeculae cause faint scattering in the CSF layer in real heads. A novel approach, the hybrid Monte Carlo-diffusion method, is proposed to calculate head models that include a low-scattering region in which the light propagation obeys neither the diffusion approximation nor the radiosity theory. The light propagation in the high-scattering region is calculated by means of the diffusion approximation solved by the finite-element method, and that in the low-scattering region is predicted by the Monte Carlo method. The intensity and mean time of flight of the detected light for the head model with a low-scattering CSF layer calculated by the hybrid method agreed well with those from the Monte Carlo method, whereas the results calculated by means of the diffusion approximation included considerable error caused by the effect of the CSF layer. In the hybrid method, the time-consuming Monte Carlo calculation is employed only for the thin CSF layer; hence, the computation time of the hybrid method is dramatically shorter than that of the Monte Carlo method.
Zou Yu; Kavousanakis, Michail E.; Kevrekidis, Ioannis G.; Fox, Rodney O.
2010-07-20
The study of particle coagulation and sintering processes is important in a variety of research studies ranging from cell fusion and dust motion to aerosol formation applications. These processes are traditionally simulated using either Monte-Carlo methods or integro-differential equations for particle number density functions. In this paper, we present a computational technique for cases where we believe that accurate closed evolution equations for a finite number of moments of the density function exist in principle, but are not explicitly available. The so-called equation-free computational framework is then employed to numerically obtain the solution of these unavailable closed moment equations by exploiting (through intelligent design of computational experiments) the corresponding fine-scale (here, Monte-Carlo) simulation. We illustrate the use of this method by accelerating the computation of evolving moments of uni- and bivariate particle coagulation and sintering through short simulation bursts of a constant-number Monte-Carlo scheme.
Bishop, Joseph E.; Strack, O. E.
2011-03-22
A novel method is presented for assessing the convergence of a sequence of statistical distributions generated by direct Monte Carlo sampling. The primary application is to assess the mesh or grid convergence, and possibly divergence, of stochastic outputs from non-linear continuum systems. Example systems include those from fluid or solid mechanics, particularly those with instabilities and sensitive dependence on initial conditions or system parameters. The convergence assessment is based on demonstrating empirically that a sequence of cumulative distribution functions converges in the L∞ norm. The effect of finite sample sizes is quantified using confidence levels from the Kolmogorov–Smirnov statistic. The statistical method is independent of the underlying distributions. The statistical method is demonstrated using two examples: (1) the logistic map in the chaotic regime, and (2) a fragmenting ductile ring modeled with an explicit-dynamics finite element code. In the fragmenting ring example the convergence of the distribution describing neck spacing is investigated. The initial yield strength is treated as a random field. Two different random fields are considered, one with spatial correlation and the other without. Both cases converged, albeit to different distributions. The case with spatial correlation exhibited a significantly higher convergence rate compared with the one without spatial correlation.
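The convergence test described above reduces to computing the two-sample Kolmogorov-Smirnov distance between empirical CDFs and comparing it against the finite-sample critical value. A sketch on the logistic map in the chaotic regime (seeds and sample sizes are illustrative; the critical constant 1.36 is the standard alpha = 0.05 value):

```python
import math

# Empirical-CDF convergence check: L-infinity (K-S) distance between two
# sample sets versus the finite-sample Kolmogorov-Smirnov threshold.

def logistic_samples(n, r=4.0, seed=0.1234, burn=100):
    x, out = seed, []
    for i in range(n + burn):
        x = r * x * (1.0 - x)
        if i >= burn:                  # discard the transient
            out.append(x)
    return out

def ks_distance(a, b):
    """L-infinity distance between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        if a[i] < b[j]:
            i += 1
        elif a[i] > b[j]:
            j += 1
        else:                          # advance past ties on both sides
            v = a[i]
            while i < na and a[i] == v:
                i += 1
            while j < nb and b[j] == v:
                j += 1
        d = max(d, abs(i / na - j / nb))
    return d

n, m = 4000, 8000
d = ks_distance(logistic_samples(n), logistic_samples(m, seed=0.4321))
d_crit = 1.36 * math.sqrt((n + m) / (n * m))   # K-S threshold at alpha = 0.05
converged = d < d_crit   # True if the CDFs are indistinguishable at this level
```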
Calculation of the entropy of random coil polymers with the hypothetical scanning Monte Carlo method
NASA Astrophysics Data System (ADS)
White, Ronald P.; Meirovitch, Hagai
2005-12-01
Hypothetical scanning Monte Carlo (HSMC) is a recently developed method for calculating the absolute entropy S and free energy F from a given MC trajectory; it has been applied to liquid argon, TIP3P water, and peptides. In this paper HSMC is extended to random coil polymers by applying it to self-avoiding walks on a square lattice—a simple but difficult model due to strong excluded volume interactions. With HSMC the probability of a given chain is obtained as a product of transition probabilities calculated for each bond by MC simulations and a counting formula. This probability is exact in the sense that it is based on all the interactions of the system, and the only approximation is due to finite sampling. The method provides rigorous upper and lower bounds for F, which can be obtained from a very small sample and even from a single chain conformation. HSMC is independent of existing techniques and thus constitutes an independent research tool. The HSMC results are compared to those obtained by other methods, and its application to complex lattice chain models is discussed; we emphasize its ability to treat any type of boundary conditions for which a reference state (with known free energy) might be difficult to define for a thermodynamic integration process. Finally, we stress that the capability of HSMC to extract the absolute entropy from a given sample is important for studying relaxation processes, such as protein folding.
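For very short chains, the absolute entropy that HSMC brackets can be checked exactly by enumerating self-avoiding walks on the square lattice. This is a validation aid, not the HSMC procedure itself:

```python
import math

# Exact entropy of short self-avoiding walks (SAWs) on the square lattice by
# brute-force enumeration: S/k_B = ln(number of n-step SAWs from a fixed origin).

def count_saws(n, pos=(0, 0), visited=None):
    if visited is None:
        visited = {pos}
    if n == 0:
        return 1
    x, y = pos
    total = 0
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if nxt not in visited:          # enforce excluded volume
            visited.add(nxt)
            total += count_saws(n - 1, nxt, visited)
            visited.remove(nxt)
    return total

counts = [count_saws(n) for n in range(1, 5)]   # known values: 4, 12, 36, 100
entropy_4 = math.log(counts[-1])                # S/k_B for 4-step chains
```

Enumeration grows exponentially with chain length, which is precisely why a sampling method such as HSMC is needed for realistic chains.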
Calculation of photon pulse height distribution using deterministic and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Akhavan, Azadeh; Vosoughi, Naser
2015-12-01
Radiation transport techniques used in radiation detection systems fall into two categories: probabilistic and deterministic. While probabilistic methods are typically used for pulse-height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solving the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: a collided-components-of-the-scalar-flux algorithm, applied by iterating on the scattering source, and the ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, the multi-group gamma cross-section library required for this numerical transport simulation is generated in an appropriate discrete form. Finally, photon pulse-height distributions are indirectly calculated by deterministic methods and compare favorably with those from Monte Carlo based codes, namely MCNPX and FLUKA.
Method using a Monte Carlo simulation database for optimizing x-ray fluoroscopic conditions
NASA Astrophysics Data System (ADS)
Ueki, Hironori; Okajima, Kenichi
2001-06-01
To improve image quality (IQ) and reduce dose in x-ray fluoroscopy, we have developed a new method for optimizing x-ray conditions such as x-ray tube voltage, tube current, and gain of the detector. This method uses a Monte Carlo (MC)-simulation database for analyzing the relations between IQ, x-ray dose, and x-ray conditions. The optimization consists of three steps. First, a permissible dose limit for each object thickness is preset. Then, the MC database is used to calculate the IQ of x-ray projections under all the available conditions that satisfy this presetting. Finally, the optimum conditions are determined as the ones that provide the highest IQ. The MC database contains projections of an estimation phantom simulated under emissions of single-energy photons with various energies. By composing these single-energy projections according to the bremsstrahlung energy distributions, the IQs under any x-ray conditions can be calculated in a very short time. These calculations show that the optimum conditions are determined by the relation between quantum noise and scattering. Moreover, the heat-capacity limit of the x-ray tube can also determine the optimum conditions. It is concluded that the developed optimization method can reduce the time and cost of designing x-ray fluoroscopic systems.
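The three optimization steps map directly onto a constrained grid search. In the sketch below, the image-quality and dose models are made-up stand-ins for the MC-database lookups:

```python
# Three-step optimisation: (1) fix a permissible dose limit, (2) score IQ for
# every available condition satisfying the limit, (3) keep the highest-IQ one.

def optimize(conditions, iq, dose, dose_limit):
    """conditions: candidate (kV, mA) settings; iq/dose: lookup functions."""
    feasible = [c for c in conditions if dose(c) <= dose_limit]   # steps 1-2
    return max(feasible, key=iq)                                  # step 3

conds = [(kv, ma) for kv in (60, 80, 100, 120) for ma in (1, 2, 4)]
dose_f = lambda c: c[0] * c[1] / 100.0                  # hypothetical dose model
iq_f = lambda c: c[1] * (1.0 - abs(c[0] - 80) / 100.0)  # IQ peaks near 80 kV
best = optimize(conds, iq_f, dose_f, dose_limit=2.0)    # -> (80, 2)
```

In the paper's setting the `iq` lookup is cheap because spectra under arbitrary tube voltages are composed from the precomputed monoenergetic MC projections, so the full grid can be scanned quickly.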
Testing planetary transit detection methods with grid-based Monte-Carlo simulations.
NASA Astrophysics Data System (ADS)
Bonomo, A. S.; Lanza, A. F.
The detection of extrasolar planets by means of the transit method is a rapidly growing field of modern astrophysics. The periodic light dips produced by the passage of a planet in front of its parent star can be used to reveal the presence of the planet itself, to measure its orbital period and relative radius, as well as to perform studies on the outer layers of the planet by analysing the light of the star passing through the planet's atmosphere. We have developed a new method to detect transits of Earth-sized planets in front of solar-like stars that allows us to reduce the impact of stellar microvariability on transit detection. A large Monte Carlo numerical experiment has been designed to test the performance of our approach in comparison with other transit detection methods for stars of different magnitudes and planets of different radius and orbital period, as will be observed by the space experiments CoRoT and Kepler. The large computational load of this experiment has been managed by means of the Grid infrastructure of the COMETA consortium.
THE EURADOS-KIT TRAINING COURSE ON MONTE CARLO METHODS FOR THE CALIBRATION OF BODY COUNTERS.
Breustedt, B; Broggio, D; Gomez-Ros, J M; Leone, D; Marzocchi, O; Poelz, S; Shutt, A; Lopez, M A
2016-09-01
Monte Carlo (MC) methods are numerical simulation techniques that can be used to extend the scope of calibrations performed in in vivo monitoring laboratories. These methods allow calibrations to be carried out for a much wider range of body shapes and sizes than would be feasible using physical phantoms. Unfortunately, this powerful technique is still used mainly in research institutions. In 2013, EURADOS and the in vivo monitoring laboratory of Karlsruhe Institute of Technology (KIT) organized a three-day training course to disseminate knowledge on the application of MC methods for in vivo monitoring. It was intended as a hands-on course centered on an exercise that guided the participants step by step through the calibration process using a simplified version of KIT's equipment. Only introductory lectures on in vivo monitoring and voxel models were given. The course was based on MC codes of the MCNP family, which are widespread in the community. The strong involvement of the participants and the working atmosphere in the classroom, as well as the formal evaluation of the course, showed that the approach chosen was appropriate. Participants liked the hands-on approach and the extensive course materials on the exercise. PMID:27103642
NASA Astrophysics Data System (ADS)
Farah, J.; Martinetti, F.; Sayah, R.; Lacoste, V.; Donadille, L.; Trompier, F.; Nauraye, C.; De Marzi, L.; Vabre, I.; Delacroix, S.; Hérault, J.; Clairand, I.
2014-06-01
Monte Carlo calculations are increasingly used to assess stray radiation dose to healthy organs of proton therapy patients and to estimate the risk of secondary cancer. Among the secondary particles, neutrons are of primary concern due to their high relative biological effectiveness. The validation of Monte Carlo simulations for out-of-field neutron doses remains, however, a major challenge to the community. This work therefore focused on developing a global experimental approach to test the reliability of the MCNPX models of two proton therapy installations operating at 75 and 178 MeV for ocular and intracranial tumor treatments, respectively. The method consists of comparing Monte Carlo calculations against experimental measurements of: (a) neutron spectrometry inside the treatment room, (b) neutron ambient dose equivalent at several points within the treatment room, and (c) secondary organ-specific neutron doses inside the Rando-Alderson anthropomorphic phantom. Results have proven that the Monte Carlo models correctly reproduce secondary neutrons within the two proton therapy treatment rooms. Appreciable differences between experimental measurements and simulations were nonetheless observed, especially at the highest beam energy. The study demonstrated the need for improved measurement tools, especially in the high neutron energy range, and for more accurate physical models and cross sections within the Monte Carlo code to correctly assess secondary neutron doses in proton therapy applications.
Systematic hierarchical coarse-graining with the inverse Monte Carlo method.
Lyubartsev, Alexander P; Naômé, Aymeric; Vercauteren, Daniel P; Laaksonen, Aatto
2015-12-28
We outline our coarse-graining strategy for linking micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained in detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can then be used in simulations at a coarser level after scaling up the system size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730-3737 (1995)] to a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package, MagiC, has been developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (bacterial LiaR regulator bound to a 26 base pair DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair potentials are used directly as look-up tables, but here we have fitted them to five Gaussians and a repulsive wall. Results show stable association between DNA and the model protein as well as similar position fluctuation profiles. PMID:26723605
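The full IMC scheme inverts a Jacobian built from cross-correlations of the sampled distribution functions; as a lighter-weight stand-in that shares the structure of the inverse problem, the sketch below performs one step of the related iterative Boltzmann inversion update (this is explicitly not IMC, and all numbers are illustrative):

```python
import math

# One iterative-Boltzmann-inversion step toward a target radial distribution
# function g(r): V_new(r) = V(r) + kT * ln(g_model(r) / g_target(r)).

def ibi_update(v, g_model, g_target, kT=1.0, eps=1e-8):
    """All inputs are tabulated on a shared r-grid; eps guards ln(0)."""
    return [vi + kT * math.log((gm + eps) / (gt + eps))
            for vi, gm, gt in zip(v, g_model, g_target)]

v0 = [0.0, 0.0, 0.0]
g_model = [0.5, 1.2, 1.0]     # RDF produced by the current CG potential
g_target = [1.0, 1.0, 1.0]    # RDF from the detailed atomistic simulation
v1 = ibi_update(v0, g_model, g_target)
# Where the model g is too low (bin 0) the potential is lowered (made more
# attractive); where it is too high (bin 1) the potential is raised.
```

IMC improves on this by using the statistical-mechanical Jacobian between potential and RDF changes, which typically converges in far fewer iterations for strongly coupled degrees of freedom.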
Bednarz, Bryan; Lu, Hsiao-Ming; Engelsman, Martijn; Paganetti, Harald
2011-01-01
Monte Carlo models of proton therapy treatment heads are being used to improve beam delivery systems and to calculate the radiation field for patient dose calculations. The achievable accuracy of the model depends on exact knowledge of the treatment head geometry and time structure, the material characteristics, and the underlying physics. This work aimed at studying the uncertainties in treatment head simulations for passive scattering proton therapy. The sensitivities of spread-out Bragg peak (SOBP) dose distributions to material densities, mean ionization potentials, initial proton beam energy spread, and spot size were investigated. An improved understanding of the nature of these parameters may help to improve agreement between calculated and measured SOBP dose distributions and to ensure that the range, modulation width, and uniformity are within clinical tolerance levels. Furthermore, we present a method to make small corrections to the uniformity of spread-out Bragg peaks by utilizing the time structure of the beam delivery. In addition, we re-commissioned the models of the two proton treatment heads located at our facility using the correction methods presented in this paper. PMID:21478569
NASA Astrophysics Data System (ADS)
Ido, Kota; Ohgoe, Takahiro; Imada, Masatoshi
2015-12-01
We develop a time-dependent variational Monte Carlo (t-VMC) method for quantum dynamics of strongly correlated electrons. The t-VMC method has been recently applied to bosonic systems and quantum spin systems. Here we propose a time-dependent trial wave function with many variational parameters, which is suitable for nonequilibrium strongly correlated electron systems. As the trial state, we adopt the generalized pair-product wave function with correlation factors and quantum-number projections. This trial wave function has been proven to accurately describe ground states of strongly correlated electron systems. To show the accuracy and efficiency of our trial wave function in nonequilibrium states as well, we present our benchmark results for relaxation dynamics during and after interaction quench protocols of fermionic Hubbard models. We find that our trial wave function well reproduces the exact results for the time evolution of physical quantities such as energy, momentum distribution, spin structure factor, and superconducting correlations. These results show that the t-VMC with our trial wave function offers an efficient and accurate way to study challenging problems of nonequilibrium dynamics in strongly correlated electron systems.
Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods
Godfrey, Andrew T; Gehin, Jess C; Bekar, Kursat B; Celik, Cihangir
2014-01-01
The Consortium for Advanced Simulation of Light Water Reactors is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominantly as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.
Adjoint-based deviational Monte Carlo methods for phonon transport calculations
NASA Astrophysics Data System (ADS)
Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G.
2015-06-01
In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation for accelerating deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or by enabling formulations which allow more efficient use of computational resources, such as formulations which provide high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail.
Extracting g tensor values from experimental data with Markov Chain Monte Carlo methods
NASA Astrophysics Data System (ADS)
Kulkarni, Anagha; Liu, Weiwen; Zurakowski, Ryan; Doty, Matthew
Quantum Dot Molecules (QDMs) have emerged as a new platform for optoelectronic and spintronic devices. QDMs consist of multiple Quantum Dots (QDs) arranged in close proximity such that interactions between them can tailor their optical and spin properties. These properties can be tuned during growth and in situ by applying electric fields that vary the coupling between QDs, which controls the formation of delocalized molecular-like states. Engineering the formation of molecular states in QDMs can be used to achieve new functionalities unavailable with individual QDs. Using molecular engineering approaches to tailor QDMs requires precise knowledge of parameters such as binding energies of charge complexes, the magnitude of many-body interactions, or components of the g tensor. Precise values of these parameters are difficult to extract from either experimental measurements or theoretical calculations. We develop and demonstrate a Markov Chain Monte Carlo method for extracting elements of the g tensor for a single hole confined in a QDM from photoluminescence data obtained as a function of electric and magnetic fields. This method can be applied to extract precise quantitative values of other physical parameters from sparse experimental data on a variety of systems.
Cu-Au Alloys Using Monte Carlo Simulations and the BFS Method for Alloys
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Good, Brian; Ferrante, John
1996-01-01
Semi-empirical methods have shown considerable promise in aiding the calculation of many properties of materials. Materials used in engineering applications have defects that occur for various reasons, including processing. In this work we present the first application of the BFS method for alloys to describe some aspects of microstructure due to processing in the Cu-Au system (CuAu, CuAu3, and Cu3Au). We use finite-temperature Monte Carlo calculations in order to show the influence of 'heat treatment' on the low-temperature phase of the alloy. Although relatively simple, the system has enough features to serve as a first test of the reliability of the technique. The main questions to be answered in this work relate to the existence of low-temperature ordered structures at specific concentrations, for example, the ability to distinguish between rather similar phases for equiatomic alloys (CuAu I and CuAu II, the latter characterized by an antiphase boundary separating two identical phases).
Multi-level Monte Carlo Methods for Efficient Simulation of Coulomb Collisions
NASA Astrophysics Data System (ADS)
Ricketson, Lee
2013-10-01
We discuss the use of multi-level Monte Carlo (MLMC) schemes, originally introduced by Giles for financial applications, for the efficient simulation of Coulomb collisions in the Fokker-Planck limit. The scheme is based on a Langevin treatment of collisions, and reduces the computational cost of achieving an RMS error scaling as ɛ from O(ɛ⁻³), for standard Langevin methods and binary collision algorithms, to the theoretically optimal scaling O(ɛ⁻²) for the Milstein discretization, and to O(ɛ⁻²(log ɛ)²) with the simpler Euler-Maruyama discretization. In practice, this speeds up simulation by factors of up to 100. We summarize standard MLMC schemes, describe some tricks for achieving the optimal scaling, present results from a test problem, and discuss the method's range of applicability. This work was performed under the auspices of the U.S. DOE by the University of California, Los Angeles, under grant DE-FG02-05ER25710, and by LLNL under contract DE-AC52-07NA27344.
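The cost saving comes from the telescoping-sum structure of MLMC: coarse levels are sampled heavily and cheaply, fine levels sparsely, with each fine/coarse path pair driven by the same Brownian increments. A minimal sketch on a geometric Brownian motion test problem (a stand-in SDE with a known answer, not a Coulomb-collision model; all parameters illustrative) using the Euler-Maruyama discretization:

```python
import math
import random

random.seed(2)
T, x0, a, b = 1.0, 1.0, 0.05, 0.2  # GBM: dX = a X dt + b X dW, so E[X_T] = x0 * exp(a T)

def level_estimator(l, n_samples):
    """Mean of P_l - P_{l-1} over coupled fine/coarse Euler-Maruyama paths."""
    nf = 2 ** l        # fine-path steps at level l
    hf = T / nf
    total = 0.0
    for _ in range(n_samples):
        xf = xc = x0
        dw_sum = 0.0
        for step in range(nf):
            dw = random.gauss(0.0, math.sqrt(hf))
            xf += a * xf * hf + b * xf * dw
            dw_sum += dw
            if l > 0 and step % 2 == 1:
                # coarse path takes one double-size step using the summed increment
                xc += a * xc * (2 * hf) + b * xc * dw_sum
                dw_sum = 0.0
        total += xf - (xc if l > 0 else 0.0)
    return total / n_samples

# Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]; sample counts halve per level
estimate = level_estimator(0, 20000) + sum(
    level_estimator(l, 20000 // 2 ** l) for l in range(1, 5))
exact = x0 * math.exp(a * T)
```

Because the level differences have rapidly shrinking variance, most samples are spent on the cheap one-step level, which is what produces the near-O(ɛ⁻²) complexity quoted in the abstract.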
NASA Astrophysics Data System (ADS)
Zhang, Jun; Guo, Fan
2015-11-01
Tooth modification techniques are widely used in the gear industry to improve the meshing performance of gearings. However, few of the present studies on tooth modification consider the influence of inevitable random errors on gear modification effects. In order to investigate the effect of uncertainties in the tooth modification amounts on the dynamic behavior of a helical planetary gear system, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications for gear dynamics enhancement. By using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process to tooth modification amount variations, a statistical tooth modification model is developed, and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behavior does not obey the normal distribution rule even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
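The finding that normally distributed input errors need not yield a normally distributed response is easy to reproduce once a response surface is in hand. The quadratic surface below is purely hypothetical (its coefficients are not from the paper); propagating Gaussian modification-amount errors through it by Monte Carlo gives a visibly skewed output distribution:

```python
import random
import statistics

random.seed(6)

# Hypothetical fitted response surface: DTE fluctuation vs. two modification amounts
def dte_fluctuation(c_profile, c_lead):
    return (5.0 + 0.5 * c_profile - 0.3 * c_lead
            + 0.8 * c_profile ** 2 + 0.4 * c_profile * c_lead)

# Normally distributed manufacturing/installation errors on the modification amounts
samples = [dte_fluctuation(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50000)]
mean = statistics.fmean(samples)
# Sample skewness: a clearly nonzero value means the response is not normally distributed
m2 = statistics.pvariance(samples, mean)
m3 = statistics.fmean((s - mean) ** 3 for s in samples)
skew = m3 / m2 ** 1.5
```

The quadratic terms fold both tails of the input distribution onto the same side of the output, which is why the response distribution is skewed even though both inputs are Gaussian.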
Asteroid orbital inversion using a virtual-observation Markov-chain Monte Carlo method
NASA Astrophysics Data System (ADS)
Muinonen, Karri; Granvik, Mikael; Oszkiewicz, Dagmara; Pieniluoma, Tuomo; Pentikäinen, Hanna
2012-12-01
A novel virtual-observation Markov-chain Monte Carlo method (MCMC) is presented for the asteroid orbital inverse problem posed by small to moderate numbers of astrometric observations. In the method, the orbital-element proposal probability density is chosen to mimic the convolution of the a posteriori density by itself: first, random errors are simulated for each observation, resulting in a set of virtual observations; second, least-squares orbital elements are derived for the virtual observations using the Nelder-Mead downhill simplex method; third, repeating the procedure gives a difference between two sets of what can be called virtual least-squares elements; and, fourth, the difference obtained constitutes a symmetric proposal in a random-walk Metropolis-Hastings algorithm, avoiding the explicit computation of the proposal density. In practice, the proposals are based on a large number of pre-computed sets of orbital elements. Virtual-observation MCMC is thus based on the characterization of the phase-space volume of solutions before the actual MCMC sampling. Virtual-observation MCMC is compared to MCMC orbital ranging, a random-walk Metropolis-Hastings algorithm based on sampling with the help of Cartesian positions at two observation dates, in the case of the near-Earth asteroid (85640) 1998 OX4. In the present preliminary comparison, the methods yield similar results for a 9.1-day observational time interval extracted from the full current astrometry of the asteroid. In the future, both of the methods are to be applied to the astrometric observations of the Gaia mission.
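The essential trick, drawing symmetric proposals from differences of precomputed virtual least-squares solutions, can be sketched on a one-parameter toy problem. Here estimating a mean from noisy data stands in for the six orbital elements, so the Nelder-Mead least-squares fit reduces to a sample mean; everything numeric is illustrative:

```python
import math
import random
import statistics

random.seed(3)
sigma = 1.0
data = [2.0 + random.gauss(0, sigma) for _ in range(10)]  # toy "astrometric" observations

def virtual_ls():
    # least-squares estimate from one set of virtual (noise-perturbed) observations
    return statistics.fmean(y + random.gauss(0, sigma) for y in data)

# Pre-computed pool of virtual least-squares solutions, as in the method
pool = [virtual_ls() for _ in range(2000)]

def log_post(mu):
    # log a posteriori density of the parameter given the data (flat prior)
    return -sum((y - mu) ** 2 for y in data) / (2 * sigma ** 2)

mu = statistics.fmean(data)
samples = []
for _ in range(20000):
    step = random.choice(pool) - random.choice(pool)  # symmetric proposal
    cand = mu + step
    if math.log(random.random()) < log_post(cand) - log_post(mu):
        mu = cand
    samples.append(mu)
```

Because each proposal is a difference of two independent draws from the same pool, its distribution is symmetric about zero, so the Metropolis-Hastings ratio needs no explicit proposal density, exactly the property the abstract exploits.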
A MONTE-CARLO Method for Estimating Stellar Photometric Metallicity Distributions
NASA Astrophysics Data System (ADS)
Gu, Jiayin; Du, Cuihua; Jing, Yingjie; Zuo, Wenbo
2016-07-01
Based on the Sloan Digital Sky Survey, we develop a new Monte-Carlo-based method to estimate the photometric metallicity distribution function (MDF) for stars in the Milky Way. Compared with other photometric calibration methods, this method enables a more reliable determination of the MDF, particularly at the metal-poor and metal-rich ends. We present a comparison of our new method with a previous polynomial-based approach and demonstrate its superiority. As an example, we apply this method to main-sequence stars with 0.2 < g − r < 0.6, 6 kpc < R < 9 kpc, and in different intervals in height above the plane, |Z|. The MDFs for the selected stars within two relatively local intervals (0.8 kpc < |Z| < 1.2 kpc, 1.5 kpc < |Z| < 2.5 kpc) can be well fit by two Gaussians with peaks at [Fe/H] ≈ −0.6 and −1.2, respectively: one associated with the disk system and the other with the halo. The MDFs for the selected stars within two more distant intervals (3 kpc < |Z| < 5 kpc, 6 kpc < |Z| < 9 kpc) can be decomposed into three Gaussians with peaks at [Fe/H] ≈ −0.6, −1.4, and −1.9, respectively, where the two lower peaks may provide evidence for a two-component model of the halo: the inner halo and the outer halo. The number ratio between the disk component and the halo component(s) decreases with vertical distance from the Galactic plane, which is consistent with the previous literature.
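Decomposing an MDF into Gaussian components, as described above, can be sketched with a small expectation-maximization fit. The sample below is synthetic (disk-like and halo-like components drawn at the quoted peaks), not SDSS data:

```python
import math
import random

random.seed(11)
# Synthetic [Fe/H] sample: a disk-like peak at -0.6 and a halo-like peak at -1.2
feh = ([random.gauss(-0.6, 0.2) for _ in range(2000)] +
       [random.gauss(-1.2, 0.3) for _ in range(1300)])

def em_two_gaussians(x, iters=40):
    """Fit a two-component 1-D Gaussian mixture by expectation-maximization."""
    w, mu, sd = [0.5, 0.5], [min(x), max(x)], [1.0, 1.0]
    for _ in range(iters):
        # E step: responsibility of each component for each star
        # (the common 1/sqrt(2 pi) factor cancels in the normalization)
        resp = []
        for xi in x:
            p = [w[k] * math.exp(-0.5 * ((xi - mu[k]) / sd[k]) ** 2) / sd[k]
                 for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M step: reweighted fractions, means, and widths
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            sd[k] = math.sqrt(sum(r[k] * (xi - mu[k]) ** 2
                                  for r, xi in zip(resp, x)) / nk)
    return w, mu, sd

weights, means, widths = em_two_gaussians(feh)
```

With well-separated peaks the fitted means recover the input components; real MDF work would compare two- and three-component fits, as the abstract does for the more distant |Z| intervals.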
Mallett, M.W.; Poston, J.W.; Hickman, D.P.
1995-06-01
Research efforts towards developing a new method for calibrating in vivo measurement systems using magnetic resonance imaging (MRI) and Monte Carlo computations are discussed. The method employs the enhanced three-point Dixon technique for producing pure fat and pure water MR images of the human body. The MR images are used to define the geometry and composition of the scattering media for transport calculations using the general-purpose Monte Carlo code MCNP, Version 4. A sample case for developing the new method utilizing an adipose/muscle matrix is compared with laboratory measurements. Verification of the integrated MRI-MCNP method has been done for a specially designed phantom composed of fat, water, air, and a bone-substitute material. Implementation of the MRI-MCNP method is demonstrated for a low-energy, lung counting in vivo measurement system. Limitations and solutions regarding the presented method are discussed.
NASA Technical Reports Server (NTRS)
Olynick, David P.; Hassan, H. A.; Moss, James N.
1988-01-01
A grid generation and adaptation procedure based on the method of transfinite interpolation is incorporated into the Direct Simulation Monte Carlo Method of Bird. In addition, time is advanced based on a local criterion. The resulting procedure is used to calculate steady flows past wedges and cones. Five chemical species are considered. In general, the modifications result in a reduced computational effort. Moreover, preliminary results suggest that the simulation method is time step dependent if requirements on cell sizes are not met.
Favorite, J.A.
1999-09-01
In previous work, exponential convergence of Monte Carlo solutions using the reduced source method with Legendre expansion has been achieved only in one-dimensional rod and slab geometries. In this paper, the method is applied to three-dimensional (right parallelepiped) problems, with results suggesting that exponential convergence is achieved there as well. As implemented in this paper, the method approximates an angular integral of the flux with a discrete-ordinates numerical quadrature. It is possible that this approximation introduces an inconsistency that must be addressed.
Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Wada, Takao
2014-07-01
The motion of a particle subject to thermophoretic force is simulated using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for particle sizes around 1 μm, are treated in this paper. The difficulty in simulating thermophoresis is the computation time, which is proportional to the collision frequency: the time step interval becomes very small when the motion of a large particle is considered. Thermophoretic forces calculated by the DSMC method have been reported, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model that computes the collision between a particle and multiple molecules in a single collision event is considered. The momentum transfer to the particle is computed with a collision weight factor, where the collision weight factor is the number of molecules colliding with the particle in one collision event. The collision weight factor permits a large time step interval, about a million times longer than the conventional DSMC time step when the particle size is 1 μm; the computation time is therefore reduced by about a factor of a million. We simulate the motion of a graphite particle subject to thermophoretic force with DSMC-Neutrals (the Particle-PLUS neutral module), commercial software adopting the DSMC method, using the above collision weight factor. The particle is a sphere of 1 μm size, and particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results; note that Gallis' analytical result in the continuum limit coincides with Waldmann's result.
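The weight-factor idea, letting one simulated collision event stand in for many molecular collisions while preserving the mean momentum transfer, can be sketched as follows. The per-collision momentum distribution and all numbers are toy assumptions, not values from the paper:

```python
import random
import statistics

random.seed(4)
nu = 1.0e5    # molecule-particle collision frequency [1/s]
dp = 1.0e-20  # mean momentum transfer per molecular collision [kg m/s]
W = 100       # collision weight factor: molecules represented per collision event

def accumulated_momentum(dt, weight, t_end=0.01):
    """March a particle in time; each event carries `weight` molecules' worth of momentum."""
    p, t = 0.0, 0.0
    while t < t_end:
        p += weight * random.expovariate(1.0 / dp)  # toy per-molecule transfer distribution
        t += dt
    return p

# Conventional stepping resolves every molecular collision ...
conventional = statistics.fmean(accumulated_momentum(1.0 / nu, 1) for _ in range(50))
# ... while the weighted scheme takes W-times-larger steps with the same expected transfer
weighted = statistics.fmean(accumulated_momentum(W / nu, W) for _ in range(200))
```

The momentum accumulated over a fixed physical time agrees between the two schemes in expectation; only the variance per event grows with W, which is the statistical price paid for the million-fold step-size increase described above.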
Constant-pH Hybrid Nonequilibrium Molecular Dynamics-Monte Carlo Simulation Method.
Chen, Yunjie; Roux, Benoît
2015-08-11
A computational method is developed to carry out explicit solvent simulations of complex molecular systems under conditions of constant pH. In constant-pH simulations, preidentified ionizable sites are allowed to spontaneously protonate and deprotonate as a function of time in response to the environment and the imposed pH. The method, based on a hybrid scheme originally proposed by H. A. Stern (J. Chem. Phys. 2007, 126, 164112), consists of carrying out short nonequilibrium molecular dynamics (neMD) switching trajectories to generate physically plausible configurations with changed protonation states that are subsequently accepted or rejected according to a Metropolis Monte Carlo (MC) criterion. To ensure microscopic detailed balance arising from such nonequilibrium switches, the atomic momenta are altered according to the symmetric two-ends momentum reversal prescription. To achieve higher efficiency, the original neMD-MC scheme is separated into two steps, reducing the need for generating a large number of unproductive and costly nonequilibrium trajectories. In the first step, the protonation state of a site is randomly attributed via a Metropolis MC process on the basis of an intrinsic pKa; an attempted nonequilibrium switch is generated only if this change in protonation state is accepted. This hybrid two-step inherent pKa neMD-MC simulation method is tested with single amino acids in solution (Asp, Glu, and His) and then applied to turkey ovomucoid third domain and hen egg-white lysozyme. Because of the simple linear increase in the computational cost relative to the number of titratable sites, the present method is naturally able to treat extremely large systems. PMID:26300709
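The first, inexpensive step of the two-step scheme, accepting or rejecting a protonation-state flip from an intrinsic pKa before any neMD switch is attempted, can be sketched in isolation. The pH and pKa values below are illustrative; a long sequence of such Metropolis flips should reproduce the Henderson-Hasselbalch protonated fraction:

```python
import math
import random

random.seed(5)
LN10 = math.log(10.0)

def mc_protonation_step(protonated, pH, pKa_intrinsic):
    """Step 1 of the two-step scheme: Metropolis flip based on the intrinsic pKa."""
    dG = LN10 * (pH - pKa_intrinsic)   # free-energy cost (in kT) of the protonated state
    dG_flip = -dG if protonated else dG
    if random.random() < min(1.0, math.exp(-dG_flip)):
        return not protonated          # accepted: this would now trigger an neMD switch
    return protonated

pH, pKa = 7.0, 6.5  # hypothetical His-like site
state, count, n = False, 0, 200000
for _ in range(n):
    state = mc_protonation_step(state, pH, pKa)
    count += state
frac = count / n
expected = 1.0 / (1.0 + 10 ** (pH - pKa))  # Henderson-Hasselbalch fraction
```

Only flips accepted at this cheap stage go on to the costly nonequilibrium switching trajectory, which is how the scheme avoids generating large numbers of unproductive neMD trajectories.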
On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems
Walsh, Jon
2015-08-31
The presentation summarizes work performed over summer 2015 related to Monte Carlo simulations. A flexible probability table interpolation scheme has been implemented and tested with results comparing favorably to the continuous phase-space on-the-fly approach.
Da, B.; Sun, Y.; Ding, Z. J.; Mao, S. F.; Zhang, Z. M.; Jin, H.; Yoshikawa, H.; Tanuma, S.
2013-06-07
A reverse Monte Carlo (RMC) method is developed to obtain the energy loss function (ELF) and optical constants from a measured reflection electron energy-loss spectroscopy (REELS) spectrum by an iterative Monte Carlo (MC) simulation procedure. The method combines the simulated annealing method, i.e., a Markov chain Monte Carlo (MCMC) sampling of oscillator parameters, surface and bulk excitation weighting factors, and band gap energy, with a conventional MC simulation of electron interaction with solids, which acts as a single step of MCMC sampling in this RMC method. To examine the reliability of this method, we have verified that the output data of the dielectric function are essentially independent of the initial values of the trial parameters, which is a basic property of a MCMC method. The optical constants derived for SiO2 in the energy loss range of 8-90 eV are in good agreement with other available data, and relevant bulk ELFs are checked by oscillator strength-sum and perfect-screening-sum rules. Our results show that the dielectric function can be obtained by the RMC method even with a wide range of initial trial parameters. The RMC method is thus a general and effective method for determining the optical properties of solids from REELS measurements.
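The Markov-chain ingredient, simulated-annealing sampling of oscillator parameters against a measured spectrum, can be sketched with a single Lorentz oscillator and a synthetic target. The real method wraps each such step around a full electron-transport MC simulation, and every parameter value here is arbitrary:

```python
import math
import random

random.seed(12)

def lorentz(E, A, E0, gamma):
    """Single Lorentz oscillator, a stand-in for one term of the model ELF."""
    return A * gamma * E / ((E0 ** 2 - E ** 2) ** 2 + (gamma * E) ** 2)

grid = [2.0 + 0.75 * i for i in range(40)]
measured = [lorentz(E, 120.0, 15.0, 4.0) for E in grid]  # synthetic "spectrum"

def misfit(p):
    return sum((lorentz(E, *p) - m) ** 2 for E, m in zip(grid, measured))

# Markov-chain sampling of the oscillator parameters under a falling temperature
params = [80.0, 10.0, 6.0]   # deliberately wrong starting guess (A, E0, gamma)
cost0 = cost = misfit(params)
T = cost0 / 10.0             # initial temperature scaled to the starting misfit
for _ in range(15000):
    trial = [p * math.exp(random.uniform(-0.05, 0.05)) for p in params]
    c = misfit(trial)
    if c < cost or random.random() < math.exp(-(c - cost) / T):
        params, cost = trial, c
    T *= 0.9995
```

The slowly decreasing temperature lets the chain escape poor starting guesses early on and settle into the best-fitting parameter set later, which is the mechanism behind the reported insensitivity to initial trial parameters.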
A Monte Carlo based three-dimensional dose reconstruction method derived from portal dose images
Elmpt, Wouter J. C. van; Nijsten, Sebastiaan M. J. J. G.; Schiffeleers, Robert F. H.; Dekker, Andre L. A. J.; Mijnheer, Ben J.; Lambin, Philippe; Minken, Andre W. H.
2006-07-15
The verification of intensity-modulated radiation therapy (IMRT) is necessary for adequate quality control of the treatment. Pretreatment verification may trace the possible differences between the planned dose and the actual dose delivered to the patient. To estimate the impact of differences between planned and delivered photon beams, a three-dimensional (3-D) dose verification method has been developed that reconstructs the dose inside a phantom. The pretreatment procedure is based on portal dose images of the separate beams, measured with an electronic portal imaging device (EPID) without the phantom in the beam, and a 3-D dose calculation engine based on Monte Carlo calculation. Measured gray scale portal images are converted into portal dose images. From these images the lateral scattered dose in the EPID is subtracted and the image is converted into energy fluence. Subsequently, a phase-space distribution is sampled from the energy fluence and a 3-D dose calculation in a phantom is started based on a Monte Carlo dose engine. The reconstruction model is compared to film and ionization chamber measurements for various field sizes. The reconstruction algorithm is also tested for an IMRT plan using 10 MV photons delivered to a phantom and measured using films at several depths in the phantom. Depth dose curves for both 6 and 10 MV photons are reconstructed with a maximum error generally smaller than 1% at depths larger than the buildup region, and smaller than 2% for the off-axis profiles, excluding the penumbra region. The absolute dose values are reconstructed to within 1.5% for square field sizes ranging from 5 to 20 cm width. For the IMRT plan, the dose was reconstructed and compared to the dose distribution measured with film using the gamma evaluation, with a 3% and 3 mm criterion; 99% of the pixels inside the irradiated field had a gamma value smaller than one. The absolute dose at the isocenter agreed to within 1% with the dose measured with an ionization chamber.
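The gamma evaluation used above combines a dose-difference criterion with a distance-to-agreement criterion. A minimal one-dimensional version (toy Gaussian profiles on a 1 mm grid, not the measured data of the paper) shows why a small setup shift still passes a 3%/3 mm test:

```python
import math

def gamma_index(ref, evl, spacing_mm, dose_tol=0.03, dta_mm=3.0):
    """1-D global gamma evaluation: for each reference point, the minimum combined
    dose-difference / distance-to-agreement metric over the evaluated profile."""
    dmax = max(ref)
    out = []
    for i, d_ref in enumerate(ref):
        best = float("inf")
        for j, d_evl in enumerate(evl):
            dd = (d_evl - d_ref) / (dose_tol * dmax)  # dose term, in units of 3% of max
            dx = (j - i) * spacing_mm / dta_mm        # distance term, in units of 3 mm
            best = min(best, dd * dd + dx * dx)
        out.append(math.sqrt(best))
    return out

ref = [100.0 * math.exp(-((x - 25.0) / 10.0) ** 2) for x in range(50)]      # toy profile
shifted = [100.0 * math.exp(-((x - 26.0) / 10.0) ** 2) for x in range(50)]  # 1 mm shift
g = gamma_index(ref, shifted, spacing_mm=1.0)
pass_rate = sum(v <= 1.0 for v in g) / len(g)  # fraction of points with gamma <= 1
```

A pure 1 mm shift keeps every gamma value well below one (the distance term alone costs only 1/3), whereas a dose scaling error of more than 3% would fail points even at zero distance.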
Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Kramer, Richard
2011-08-01
Health risks attributable to the exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called a phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to organs and tissues of interest. The FASH2 (Female Adult meSH) and the MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and the MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.
Velazquez, L; Castro-Palacio, J C
2015-03-01
Velazquez and Curilef [J. Stat. Mech. (2010); J. Stat. Mech. (2010)] have proposed a methodology to extend Monte Carlo algorithms that are based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically, the well-known multihistogram method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989)]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L×L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site qL during the occurrence of the temperature-driven phase transition of this model, whose size dependence seems to follow a power law qL(L)∝(1/L)z with exponent z≃0.26±0.02. The compatibility of these results with the continuous character of the temperature-driven phase transition when L→+∞ is discussed. PMID:25871247
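The multihistogram technique builds on single-histogram reweighting: configurations sampled at inverse temperature β can be reweighted by exp(-(β'-β)E) to estimate averages at a nearby β'. A minimal check on a two-level toy system with a known exact answer (not the Potts model of the paper):

```python
import math
import random

random.seed(7)
beta = 1.0
# Two-level toy system (E = 0 or 1): draw exact samples at inverse temperature beta
p1 = math.exp(-beta) / (1.0 + math.exp(-beta))
energies = [1.0 if random.random() < p1 else 0.0 for _ in range(100000)]

def reweight_mean_energy(energies, d_beta):
    """Single-histogram (Ferrenberg-Swendsen-style) reweighting of <E> to beta + d_beta."""
    w = [math.exp(-d_beta * e) for e in energies]
    return sum(e * wi for e, wi in zip(energies, w)) / sum(w)

beta_new = 1.3
est = reweight_mean_energy(energies, beta_new - beta)
exact = math.exp(-beta_new) / (1.0 + math.exp(-beta_new))
```

The multihistogram method combines such reweighted estimates from runs at several temperatures, which extends the reliable range of β' well beyond what a single run can cover.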
Dynamic measurements and uncertainty estimation of clinical thermometers using Monte Carlo method
NASA Astrophysics Data System (ADS)
Ogorevc, Jaka; Bojkovski, Jovan; Pušnik, Igor; Drnovšek, Janko
2016-09-01
Clinical thermometers in intensive care units are used for the continuous measurement of body temperature. This study describes a procedure for dynamic measurement uncertainty evaluation, in order to examine the requirements on clinical thermometer dynamic properties found in standards and recommendations. In this study thermistors were used as temperature sensors, transient temperature measurements were performed in water and air, and the measurement data were processed to investigate the thermometer dynamic properties. The thermometers were mathematically modelled, and a Monte Carlo method was implemented for dynamic measurement uncertainty evaluation. The measurement uncertainty was analysed for static and dynamic conditions. Results showed that the dynamic uncertainty is much larger than the steady-state uncertainty. The results of the dynamic uncertainty analysis were applied to an example of clinical measurements and compared to the current requirements in the ISO standard for clinical thermometers. It can be concluded that there is no need for dynamic evaluation of clinical thermometers for continuous measurement, as long as the dynamic measurement uncertainty stays within the target uncertainty. In the case of intermittent predictive thermometers, however, the thermometer dynamic properties have a significant impact on the measurement result. Estimation of dynamic uncertainty is thus crucial for the assurance of traceable and comparable measurements.
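The Monte Carlo uncertainty evaluation can be sketched for the simplest dynamic model, a first-order sensor with time constant τ whose step response is inverted to predict the final temperature. All numbers below are invented for illustration (they are not the study's thermistor data), in the spirit of GUM Supplement 1:

```python
import math
import random
import statistics

random.seed(8)
t = 30.0                       # seconds elapsed since probe insertion
tau_nom, u_tau = 10.0, 1.0     # sensor time constant and its standard uncertainty [s]
reading, u_read = 36.5, 0.05   # indicated temperature and readout uncertainty [deg C]

def predict_final_temperature(indicated, tau):
    # invert the first-order step response T(t) = T_final * (1 - exp(-t / tau))
    return indicated / (1.0 - math.exp(-t / tau))

# Monte Carlo propagation: draw the inputs from their uncertainty distributions
draws = [predict_final_temperature(reading + random.gauss(0.0, u_read),
                                   random.gauss(tau_nom, u_tau))
         for _ in range(50000)]
T_est = statistics.fmean(draws)   # predicted body temperature
u_dyn = statistics.stdev(draws)   # dynamic measurement uncertainty
```

Even with a 0.05 °C readout uncertainty, the spread of the prediction is dominated by the time-constant uncertainty, illustrating the abstract's point that the dynamic uncertainty can far exceed the static one.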
Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis
Heo, W.; Kim, W.; Kim, Y.; Yun, S.
2013-07-01
A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross section data are generated using a collision estimator in MCNP5. A high-order Legendre scattering cross section data generation module was added to the MCNP5 code. Cross section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and were applied in the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic data were used. In this paper, a hybrid MCNP5/DIF3D calculation was used to analyze the core model: the cross section data were generated using MCNP5, and the k-eff and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of k-eff, the 9-group MCNP5/DIF3D result has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)
Assessment of the Contrast to Noise Ratio in PET Scanners with Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.
2015-09-01
The aim of the present study was to assess the contrast to noise ratio (CNR) of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, with reconstructed images obtained with the STIR software for tomographic image reconstruction. The PET scanner simulated was the GE DiscoveryST. The plane source, consisting of a TLC plate, was simulated as a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution. Image quality was assessed in terms of the CNR, estimated from coronal reconstructed images of the plane source. Images were reconstructed with the maximum-likelihood-based OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed using various numbers of subsets (3, 15 and 21) and iterations (2 to 20). CNR values were found to decrease when both iterations and subsets increase. Two (2) iterations were found to be optimal. The simulated PET evaluation method, based on the TLC plane source, can be useful in image quality assessment of PET scanners.
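CNR itself is a simple ROI statistic on the reconstructed image; a minimal sketch on a synthetic slice (the ROI positions, contrast, and noise level are illustrative, not the GATE/STIR pipeline):

```python
import numpy as np

# CNR from a reconstructed slice: contrast between a signal ROI and a
# background ROI, normalised by the background noise.
def cnr(image, signal_mask, background_mask):
    s = image[signal_mask].mean()
    b = image[background_mask]
    return (s - b.mean()) / b.std()

rng = np.random.default_rng(1)
img = rng.normal(10.0, 1.0, (64, 64))    # background: mean 10, sigma 1
img[20:30, 20:30] += 5.0                 # hot region (signal)

sig = np.zeros((64, 64), bool)
sig[20:30, 20:30] = True
bkg = np.zeros((64, 64), bool)
bkg[40:60, 40:60] = True

print(f"CNR = {cnr(img, sig, bkg):.1f}")
```

With a contrast of 5 and unit background noise the computed CNR comes out near 5, as expected from the definition.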
NASA Astrophysics Data System (ADS)
Hantal, György; Picaud, Sylvain; Hoang, Paul N. M.; Voloshin, Vladimir P.; Medvedev, Nikolai N.; Jedlovszky, Pál
2010-10-01
The grand canonical Monte Carlo method is used to simulate the adsorption isotherms of water molecules on different types of model soot particles. These soot models are constructed by first removing atoms from onion-fullerene structures in order to create randomly distributed pores inside the soot, and then performing molecular dynamics simulations, based on the adaptive intermolecular reactive empirical bond order (AIREBO) description of the interaction between carbon atoms, to optimize the resulting structures. The obtained results clearly show that the main driving force of water adsorption on soot is the possibility of forming new water-water hydrogen bonds with the already adsorbed water molecules. The shape of the calculated water adsorption isotherms at 298 K strongly depends on the possible confinement of the water molecules in pores of the carbonaceous structure. We found that there are two important factors influencing the adsorption ability of soot. The first of these factors, dominating at low pressures, is the ability of the soot to accommodate the first adsorbed water molecules at strongly hydrophilic sites. The second factor concerns the size and shape of the pores, which should allow the hydrogen bonding network of the water molecules filling them to be optimal. This second factor determines the adsorption properties at higher pressures.
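For a non-interacting system the grand canonical insertion/deletion moves reduce to acceptance ratios zV/(N+1) and N/(zV); this toy sketch (illustrative activity and volume, no soot or water model) recovers the exact ideal-gas result ⟨N⟩ = zV:

```python
import numpy as np

rng = np.random.default_rng(2)

# Grand canonical MC for an ideal gas: with no interactions the
# Metropolis acceptance probabilities for particle insertion and
# deletion are min(1, zV/(N+1)) and min(1, N/(zV)), and the
# equilibrium particle number is Poisson with mean z*V.
z, V = 0.5, 100.0            # activity and volume (illustrative units)
N = 0
samples = []
for step in range(200_000):
    if rng.random() < 0.5:                   # attempt insertion
        if rng.random() < z * V / (N + 1):
            N += 1
    elif N > 0:                              # attempt deletion
        if rng.random() < N / (z * V):
            N -= 1
    if step > 50_000:                        # discard equilibration
        samples.append(N)

print(f"<N> = {np.mean(samples):.1f} (exact z*V = {z*V:.1f})")
```

In the real soot simulations each move additionally carries a Boltzmann factor exp(-beta*dU) for the water-water and water-carbon interaction energies.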
Monte Carlo analysis of thermochromatography as a fast separation method for nuclear forensics
Hall, Howard L
2012-01-01
Nuclear forensic science has become increasingly important for global nuclear security, and enhancing the timeliness of forensic analysis has been established as an important objective in the field. New, faster techniques must be developed to meet this objective. Current approaches for the analysis of minor actinides, fission products, and fuel-specific materials require time-consuming chemical separation coupled with measurement through either nuclear counting or mass spectrometry. These very sensitive measurement techniques can be hindered by impurities or incomplete separation in even the most painstaking chemical separations. High-temperature gas-phase separation, or thermochromatography, has been used in the past for rapid separations in the study of newly created elements and as a basis for their chemical classification. This work examines the potential for rapid separation of gaseous species to be applied in nuclear forensic investigations. Monte Carlo modeling has been used to evaluate the potential utility of the thermochromatographic separation method, although this assessment is necessarily limited by the lack of available experimental data for validation.
Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods
Narita, Y.; Eberl, S.; Nakamura, T.
1996-12-31
Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter and scatter plus primary) were simulated for 99mTc and 201Tl for numerical chest phantoms. Data were reconstructed with an ordered-subset ML-EM algorithm including attenuation correction using the transmission data. In the chest phantom simulation, TDCS provided better S/N than TEW, and better accuracy, i.e., 1.0% vs -7.2% in the myocardium and -3.7% vs -30.1% in the ventricular chamber for 99mTc with TDCS and TEW, respectively. For 201Tl, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
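The TEW estimate compared above is a per-pixel trapezoidal interpolation of the scatter spectrum from two narrow side windows; a minimal sketch with illustrative count and window-width values:

```python
# Triple-energy-window (TEW) scatter estimate for one projection pixel:
# counts in two narrow side windows approximate the scatter spectrum
# under the main photopeak window by trapezoidal interpolation:
#   S = (C_lower/W_side + C_upper/W_side) * W_main / 2
def tew_primary(c_main, c_lower, c_upper, w_main, w_side):
    scatter = (c_lower / w_side + c_upper / w_side) * w_main / 2.0
    return c_main - scatter, scatter

# Illustrative 99mTc numbers: a 28 keV wide main window (126-154 keV)
# flanked by 3 keV side windows.
primary, scatter = tew_primary(c_main=1000.0, c_lower=30.0, c_upper=6.0,
                               w_main=28.0, w_side=3.0)
print(f"scatter = {scatter:.0f}, primary = {primary:.0f}")
```

TDCS instead convolves the observed projection with a scatter kernel whose amplitude depends on the measured transmission, which is why the two methods differ in both bias and noise behaviour.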
IR imaging simulation and analysis for aeroengine exhaust system based on reverse Monte Carlo method
NASA Astrophysics Data System (ADS)
Chen, Shiguo; Chen, Lihai; Mo, Dongla; Shi, Jingcheng
2014-11-01
The IR radiation characteristics of an aeroengine are an important basis for the IR stealth design and anti-stealth detection of aircraft, and with the development of IR imaging sensor technology the importance of aircraft IR stealth increases. This work explores target IR radiation imaging simulation based on the Reverse Monte Carlo Method (RMCM) combined with commercial CFD software. The flow and IR radiation characteristics of an aeroengine exhaust system are investigated: a full-size geometry model is developed from the actual parameters, a flow-IR integrated structured mesh is used, the engine performance parameters serve as the inlet boundary conditions of the mixer section, and a numerical simulation model of the IR radiation characteristics of the engine exhaust system is constructed based on RMCM. With these models, the IR radiation characteristics of the aeroengine exhaust system are obtained, focusing on spectral radiance imaging in the typical detection band at an azimuth of 20°. The results show that: (1) at small azimuth angles, the IR radiation of the hot parts comes mainly from the center cone; near an azimuth of 15°, the mixer makes the biggest radiation contribution, while the center cone, turbine and flame stabilizer contribute comparably; (2) the main radiation components and their spatial distribution differ between spectral bands, with CO2 absorbing and emitting strongly at 4.18, 4.33 and 4.45 μm, and H2O at 3.0 and 5.0 μm.
A new method for RGB to CIELAB color space transformation based on Markov chain Monte Carlo
NASA Astrophysics Data System (ADS)
Chen, Yajun; Liu, Ding; Liang, Junli
2013-10-01
During printing quality inspection, the inspection of color error is an important task. However, the RGB color space is device-dependent; RGB colors captured by a CCD camera must usually be transformed into the CIELAB color space, which is perceptually uniform and device-independent. To cope with this problem, a Markov chain Monte Carlo (MCMC) based algorithm for the RGB to CIELAB color space transformation is proposed in this paper. First, modeling color targets and testing color targets are established, used respectively in the modeling and performance testing processes. Second, we derive a Bayesian model for estimating the coefficients of a polynomial that describes the relation between the RGB and CIELAB color spaces. Third, a Markov chain is set up based on the Gibbs sampling algorithm (one of the MCMC algorithms) to estimate the coefficients of the polynomial. Finally, the color difference of the testing color targets is computed to evaluate the performance of the proposed method. The experimental results showed that the nonlinear polynomial regression based on the MCMC algorithm is effective; its performance is similar to the least squares approach, and it can accurately model the RGB to CIELAB color space conversion and guarantee the color error evaluation for a printing quality inspection system.
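The MCMC coefficient-estimation step can be illustrated on a toy calibration model; this sketch uses random-walk Metropolis rather than the paper's Gibbs sampler, and the one-dimensional linear model and noise level are assumptions standing in for one CIELAB channel modelled as a polynomial in RGB:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic calibration data: y = 2 + 3*x + noise.
x = rng.uniform(0, 1, 200)
y = 2.0 + 3.0 * x + rng.normal(0, 0.1, 200)

def log_post(theta):                  # flat prior, Gaussian likelihood
    a, b = theta
    r = y - (a + b * x)
    return -0.5 * np.sum(r**2) / 0.1**2

# Random-walk Metropolis over the coefficient vector (a, b).
theta = np.zeros(2)
lp = log_post(theta)
chain = []
for i in range(20_000):
    prop = theta + rng.normal(0, 0.02, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept/reject
        theta, lp = prop, lp_prop
    if i > 5_000:                             # discard burn-in
        chain.append(theta)

a_hat, b_hat = np.mean(chain, axis=0)
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}")
```

The posterior means recover the generating coefficients, consistent with the paper's observation that the MCMC estimate behaves like least squares for this kind of regression.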
Improving Bayesian analysis for LISA Pathfinder using an efficient Markov Chain Monte Carlo method
NASA Astrophysics Data System (ADS)
Ferraioli, Luigi; Porter, Edward K.; Armano, Michele; Audley, Heather; Congedo, Giuseppe; Diepholz, Ingo; Gibert, Ferran; Hewitson, Martin; Hueller, Mauro; Karnesis, Nikolaos; Korsakova, Natalia; Nofrarias, Miquel; Plagnol, Eric; Vitale, Stefano
2014-02-01
We present a parameter estimation procedure based on a Bayesian framework, applying a Markov Chain Monte Carlo algorithm to the calibration of the dynamical parameters of the LISA Pathfinder satellite. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment in order to ensure an effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm with an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy, but one moves in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigenspace of the Fisher information matrix. The algorithm proposing jumps in the eigenspace of the Fisher information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to LISA Pathfinder data. For this experiment, we return parameter values that are all within ˜1 σ of the injected values. When we analyse the accuracy of our parameter estimation in terms of the effect it has on the force-per-unit-mass noise, we find that the induced errors are three orders of magnitude less than the expected experimental uncertainty in the power spectral density.
NASA Astrophysics Data System (ADS)
Shepherd, James J.; Booth, George H.; Alavi, Ali
2012-06-01
Using the homogeneous electron gas (HEG) as a model, we investigate the sources of error in the "initiator" adaptation to full configuration interaction quantum Monte Carlo (i-FCIQMC), with a view to accelerating convergence. In particular, we find that the fixed-shift phase, where the walker number is allowed to grow slowly, can be used to effectively assess stochastic and initiator error. Using this approach we provide simple explanations for the internal parameters of an i-FCIQMC simulation. We exploit the consistent basis sets and adjustable correlation strength of the HEG to analyze properties of the algorithm, and present finite basis benchmark energies for N = 14 over a range of densities 0.5 ⩽ rs ⩽ 5.0 a.u. A single-point extrapolation scheme is introduced to produce complete basis energies for 14, 38, and 54 electrons. It is empirically found that, in the weakly correlated regime, the computational cost scales linearly with the plane wave basis set size, which is justifiable on physical grounds. We expect the fixed-shift strategy to reduce the computational cost of many i-FCIQMC calculations of weakly correlated systems. In addition, we provide benchmarks for the electron gas, to be used by other quantum chemical methods in exploring periodic solid state systems.
NASA Astrophysics Data System (ADS)
Velazquez, L.; Castro-Palacio, J. C.
2015-03-01
Velazquez and Curilef [J. Stat. Mech. (2010) P02002, 10.1088/1742-5468/2010/02/P02002; J. Stat. Mech. (2010) P04026, 10.1088/1742-5468/2010/04/P04026] have proposed a methodology to extend Monte Carlo algorithms that are based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically the well-known multihistograms method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989), 10.1103/PhysRevLett.63.1195]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L × L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site q_L during the temperature-driven phase transition of this model, whose size dependence seems to follow a power law q_L(L) ∝ (1/L)^z with exponent z ≃ 0.26 ± 0.02. The compatibility of these results with the continuous character of the temperature-driven phase transition when L → +∞ is discussed.
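The core of the Ferrenberg-Swendsen reweighting idea fits in a few lines; the Gaussian energy histogram below is a toy stand-in for actual sampled data, chosen because the reweighted mean energy is known exactly (single-histogram version; the multihistogram method combines several such runs):

```python
import numpy as np

rng = np.random.default_rng(4)

# Single-histogram reweighting: energies sampled at inverse temperature
# beta0 are reweighted to a nearby beta with weights
# w_i = exp(-(beta - beta0) * E_i).
def reweight_mean_energy(E, beta0, beta):
    w = np.exp(-(beta - beta0) * (E - E.mean()))   # shift for stability
    return np.sum(w * E) / np.sum(w)

# Toy model: at beta0 the energy histogram is Gaussian(mu, sigma^2),
# for which the exact reweighted mean is mu - (beta - beta0) * sigma^2.
mu, sigma, beta0 = -100.0, 5.0, 1.0
E = rng.normal(mu, sigma, 500_000)

est = reweight_mean_energy(E, beta0, beta=1.05)
print(f"reweighted <E> = {est:.2f} (exact {mu - 0.05 * sigma**2:.2f})")
```

The reweighting range is limited by histogram overlap: the further beta moves from beta0, the fewer samples carry significant weight.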
A spectral analysis of the domain decomposed Monte Carlo method for linear systems
Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.
2015-09-08
The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
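A minimal forward Neumann-Ulam solver illustrates the random walks analyzed above (the paper treats the adjoint variant); the 2x2 system, uniform transition probabilities, and absorption probability here are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)

# Forward Neumann-Ulam Monte Carlo for x = H x + b, valid when the
# spectral radius of H is < 1: x_i = sum_m (H^m b)_i is estimated by
# random walks that accumulate importance weights along their path.
H = np.array([[0.10, 0.20],
              [0.15, 0.05]])
b = np.array([1.0, 2.0])
n = len(b)
p_absorb = 0.3                        # termination probability per step

def estimate_x(i, walks=50_000):
    total = 0.0
    for _ in range(walks):
        s, w, acc = i, 1.0, b[i]
        while rng.random() >= p_absorb:
            j = rng.integers(n)                    # uniform transition
            w *= H[s, j] * n / (1.0 - p_absorb)    # importance correction
            acc += w * b[j]
            s = j
        total += acc
    return total / walks

x_mc = np.array([estimate_x(i) for i in range(n)])
x_exact = np.linalg.solve(np.eye(n) - H, b)
print(x_mc, x_exact)
```

The average walk length here is 1/p_absorb; in the paper's setting that length, and the fraction of walks leaking out of a subdomain, are what the spectral analysis predicts.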
Avrorin, E. N.; Tsvetokhin, A. G.; Xenofontov, A. I.; Kourbatova, E. I.; Regens, J. L.
2002-02-26
This paper presents the results of an ongoing research and development project conducted by Russian institutions in Moscow and Snezhinsk, supported by the International Science and Technology Center (ISTC), in collaboration with the University of Oklahoma. The joint study focuses on developing and applying analytical tools to effectively characterize contaminant transport and assess risks associated with migration of radionuclides and heavy metals in the water column and sediments of large reservoirs or lakes. The analysis focuses on the development and evaluation of theoretical-computational models that describe the distribution of radioactive wastewater within a reservoir and characterize the associated radiation field, as well as estimate doses received from radiation exposure. In particular, Monte Carlo-based theoretical-computational methods are developed and applied to increase the precision of results and to reduce the computing time for estimating the characteristics of the radiation field emitted from the contaminated wastewater layer. The calculated migration of radionuclides is used to estimate distributions of radiation doses that could be received by an exposed population, based on exposure to radionuclides from specified volumes of discrete aqueous sources. The calculated dose distributions can be used to support near-term and long-term decisions about priorities for environmental remediation and stewardship.
NASA Astrophysics Data System (ADS)
Umezawa, Naoto; Tsuneyuki, Shinji
2002-03-01
We will present a new approach to calculating the electronic states of strongly correlated systems. Our calculation is based on the transcorrelated method given by Boys and Handy, in which the eigenvalue problem of the original Hamiltonian is transformed into an identical eigenvalue problem of an effective Hamiltonian. The effective Hamiltonian is constructed from the original Hamiltonian multiplied by a Jastrow function representing the correlation effect from the right side and by the inverse of the Jastrow function from the left side. A single Slater determinant is assumed as a trial wave function, and both the Slater determinant and the Jastrow function are optimized in order to satisfy the eigenvalue equation of the effective Hamiltonian as thoroughly as possible. One-body wave functions are determined by solving a set of Hartree-Fock-like self-consistent field equations, which contain a spin-dependent effective electron-electron interaction. Further, the parameters in the Jastrow function are optimized by a variational Monte Carlo (VMC) calculation. These two optimizations are implemented self-consistently, and we find that the converged ground state energies of the two-electron systems (H^-, He, Li^+, Be^2+, H_2) are much better described than by the usual VMC calculation utilizing Hartree-Fock orbitals for the Slater determinant.
[Study of Determination of Oil Mixture Components Content Based on Quasi-Monte Carlo Method].
Wang, Yu-tian; Xu, Jing; Liu, Xiao-fei; Chen, Meng-han; Wang, Shi-tao
2015-05-01
Gasoline, kerosene and diesel are processed from crude oil at different distillation ranges: the boiling range of gasoline is 35-205 °C, that of kerosene is 140-250 °C, and that of diesel is 180-370 °C. The carbon chain lengths of the different mineral oils also differ: gasoline lies within the range C7 to C11, kerosene within C12 to C15, and diesel within C15 to C18. The recognition and quantitative measurement of the three kinds of mineral oil is based on the different fluorescence spectra arising from their different carbon-number distributions. Mineral oil pollution occurs frequently, so monitoring the mineral oil content in the ocean is very important. A new method is proposed for determining the component contents of a mineral oil mixture with overlapping spectra: the characteristic peak power of the three-dimensional fluorescence spectrum is integrated using the quasi-Monte Carlo method, an optimization algorithm determines the optimal number of characteristic peaks and the integration regions, and the resulting nonlinear equations are solved with the BFGS method (a rank-two quasi-Newton update named after Broyden, Fletcher, Goldfarb and Shanno). The peak power accumulated over the selected region is sensitive to small changes of the fluorescence spectral line, so the measurement of small changes of component content is sensitive. At the same time, compared with single-point measurement, the sensitivity is improved because averaging over many points reduces the influence of random error. Three-dimensional fluorescence spectra and fluorescence contour spectra of the single mineral oils and of the mixture are measured, taking kerosene, diesel and gasoline as research objects, with each mineral oil regarded as a whole rather than resolved into its individual components. Six characteristic peaks are
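The quasi-Monte Carlo integration at the heart of the peak-power calculation can be sketched with a Halton sequence; the integrand here is a simple stand-in with a known integral, not a fluorescence spectrum:

```python
import numpy as np

# Quasi-Monte Carlo integration with a 2-D Halton sequence: the
# low-discrepancy points cover the integration region more evenly than
# pseudo-random points, so area integrals converge faster.
def halton(n, base):
    seq = np.zeros(n)
    for i in range(n):
        f, r, k = 1.0, 0.0, i + 1
        while k > 0:                  # radical-inverse digit expansion
            f /= base
            r += f * (k % base)
            k //= base
        seq[i] = r
    return seq

n = 4096
pts = np.column_stack([halton(n, 2), halton(n, 3)])

# Test integrand with a known integral over the unit square:
# double integral of x*y over [0,1]^2 equals 1/4.
est = np.mean(pts[:, 0] * pts[:, 1])
print(f"QMC estimate {est:.5f} (exact 0.25)")
```

In the paper's setting the sample points would lie inside the selected excitation-emission region and the integrand would be the measured spectral intensity.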
Range Verification Methods in Particle Therapy: Underlying Physics and Monte Carlo Modeling
Kraan, Aafke Christine
2015-01-01
Hadron therapy allows for highly conformal dose distributions and better sparing of organs-at-risk, thanks to the characteristic dose deposition as a function of depth. However, the quality of hadron therapy treatments is closely connected with the ability to predict and achieve a given beam range in the patient. Currently, uncertainties in particle range lead to the employment of safety margins, at the expense of treatment quality. Much research in particle therapy is therefore aimed at developing methods to verify the particle range in patients. Non-invasive in vivo monitoring of the particle range can be performed by detecting secondary radiation, emitted from the patient as a result of nuclear interactions of charged hadrons with tissue, including β+ emitters, prompt photons, and charged fragments. The correctness of the dose delivery can be verified by comparing measured and pre-calculated distributions of the secondary particles. The reliability of Monte Carlo (MC) predictions is a key issue. Correctly modeling the production of secondaries is a non-trivial task, because it involves nuclear physics interactions at energies where no rigorous theories exist to describe them. The goal of this review is to provide a comprehensive overview of various aspects of modeling the physics processes for range verification with secondary particles produced in proton, carbon, and heavier ion irradiation. We discuss electromagnetic and nuclear interactions of charged hadrons in matter, followed by a summary of some widely used MC codes in hadron therapy. Then, we describe selected examples of how these codes have been validated and used in three range verification techniques: PET, prompt gamma, and charged particle detection. We include research studies and clinically applied methods. For each of the techniques, we point out advantages and disadvantages, as well as clinical challenges still to be addressed, focusing on MC simulation aspects. PMID:26217586
Towards prediction of correlated material properties using quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Wagner, Lucas
Correlated electron systems offer a richness of physics far beyond noninteracting systems. If we would like to pursue the dream of designer correlated materials, or, even to set a more modest goal, to explain in detail the properties and effective physics of known materials, then accurate simulation methods are required. Using modern computational resources, quantum Monte Carlo (QMC) techniques offer a way to directly simulate electron correlations. I will show some recent results on a few extremely challenging materials including the metal-insulator transition of VO2, the ground state of the doped cuprates, and the pressure dependence of magnetic properties in FeSe. By using a relatively simple implementation of QMC, at least some properties of these materials can be described truly from first principles, without any adjustable parameters. Using the QMC platform, we have developed a way of systematically deriving effective lattice models from the simulation. This procedure is particularly attractive for correlated electron systems because the QMC methods treat the one-body and many-body components of the wave function and Hamiltonian on completely equal footing. I will show some examples of using this downfolding technique and the high accuracy of QMC to connect our intuitive ideas about interacting electron systems with high fidelity simulations. The work in this presentation was supported in part by NSF DMR 1206242, the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program under Award Number FG02-12ER46875, and the Center for Emergent Superconductivity, Department of Energy Frontier Research Center under Grant No. DEAC0298CH1088. Computing resources were provided by a Blue Waters Illinois grant and INCITE PhotSuper and SuperMatSim allocations.
NASA Astrophysics Data System (ADS)
Ghita, Gabriel M.
Our study aims to design a useful neutron signature characterization device based on 3He detectors, a standard neutron detection methodology used in homeland security applications. The research involved simulation of the generation, transport, and detection of the leakage radiation from Special Nuclear Materials (SNM). To accomplish the research goals, we use a new methodology to fully characterize a standard "1-Ci" Plutonium-Beryllium (Pu-Be) neutron source based on 3-D computational radiation transport methods, employing both deterministic SN and Monte Carlo methodologies. The computational model findings were subsequently validated through experimental measurements. The achieved results allowed us to design, build, and laboratory-test a Nickel composite alloy shield that enables the neutron leakage spectrum from a standard Pu-Be source to be transformed, through neutron scattering interactions in the shield, into a very close approximation of the neutron spectrum leaking from a large, subcritical mass of Weapons Grade Plutonium (WGPu) metal. This source will make possible testing with a nearly exact reproduction of the neutron spectrum from a 6.67 kg WGPu mass equivalent, but without the expense or risk of testing detector components with real materials. Moreover, over thirty moderator materials were studied in order to characterize their neutron energy filtering potential, with specific focus on establishing the limits of 3He spectroscopy using ideal filter materials. To demonstrate our methodology, we present the optimally detected spectral differences between SNM materials (plutonium and uranium), metal and oxide, using ideal filter materials. Finally, using knowledge gained from the previous studies, the design of a 3He spectroscopy system neutron detector, simulated entirely via computational methods, is proposed to resolve the spectra from SNM neutron sources of high interest. This was accomplished by replacing ideal filters with real materials, and comparing reaction
NASA Astrophysics Data System (ADS)
Zhang, Yue; Sun, Xian; Thiele, Antje; Hinz, Stefan
2015-10-01
Synthetic aperture radar (SAR) systems, such as TanDEM-X, TerraSAR-X and Cosmo-SkyMed, acquire imagery with high spatial resolution (HR), making it possible to observe objects in urban areas in high detail. In this paper, we propose a new top-down framework for three-dimensional (3D) building reconstruction from HR interferometric SAR (InSAR) data. Unlike most methods proposed before, we adopt a generative model and carry out the reconstruction by maximum a posteriori (MAP) estimation through Monte Carlo methods. The reason for this strategy is that the noisiness of SAR images calls for a thorough prior model to better cope with the inherent amplitude and phase fluctuations. In the reconstruction process, according to the radar configuration and the building geometry, a 3D building hypothesis is mapped to the SAR image plane and decomposed into feature regions such as layover, corner line, and shadow. Then, the statistical properties of the intensity, interferometric phase and coherence of each region are explored and included as region terms. Roofs are not directly considered, as in most cases they are mixed with the walls into the layover area. When estimating the similarity between the building hypothesis and the real data, the prior and the region terms, together with the edge term related to the contours of the layover and corner line, are taken into consideration. In the optimization step, special transition kernels are designed in order to achieve convergent reconstruction outputs and avoid local extrema. The proposed framework is evaluated on the TanDEM-X dataset and performs well for building reconstruction.
Monte Carlo particle-in-cell methods for the simulation of the Vlasov-Maxwell gyrokinetic equations
NASA Astrophysics Data System (ADS)
Bottino, A.; Sonnendrücker, E.
2015-10-01
The particle-in-cell (PIC) algorithm is the most popular method for the discretisation of the general 6D Vlasov-Maxwell problem, and it is widely used also for the simulation of the 5D gyrokinetic equations. The method consists of coupling a particle-based algorithm for the Vlasov equation with a grid-based method for the computation of the self-consistent electromagnetic fields. In this review we derive a Monte Carlo PIC finite-element model starting from a gyrokinetic discrete Lagrangian. The variations of the Lagrangian are used to obtain the time-continuous equations of motion for the particles and the finite-element approximation of the field equations. The Noether theorem for the semi-discretised system implies a certain number of conservation properties for the final set of equations. Moreover, the PIC method can be interpreted as a probabilistic Monte Carlo-like method, consisting of calculating integrals of the continuous distribution function using a finite set of discrete markers. The nonlinear interactions, along with numerical errors, introduce random effects after some time. Therefore, the same tools for error analysis and error reduction used in Monte Carlo numerical methods can be applied to PIC simulations.
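The marker interpretation described above can be illustrated by estimating velocity-space moments of a Maxwellian from weighted markers (the thermal speed, units, and equal-weight loading are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)

# PIC viewed as Monte Carlo: moments of a continuous distribution f(v)
# are estimated from a finite set of weighted markers. Here markers
# sample a 1-D Maxwellian with thermal speed vt, and the zeroth and
# second velocity moments are recovered from the marker ensemble.
n_markers = 200_000
vt = 1.5                                   # thermal speed (assumed units)
v = rng.normal(0.0, vt, n_markers)         # marker velocities
w = np.full(n_markers, 1.0 / n_markers)    # equal marker weights

density = w.sum()                          # zeroth moment
mean_v2 = np.sum(w * v**2) / density       # second moment -> vt^2

print(f"density {density:.3f}, <v^2> {mean_v2:.3f} (exact {vt**2:.3f})")
```

The statistical error of such moment estimates scales as 1/sqrt(N_markers), which is exactly why Monte Carlo variance-reduction tools carry over to PIC.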
Forward treatment planning for modulated electron radiotherapy (MERT) employing Monte Carlo methods
Henzen, D.; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Lössl, K.; Aebersold, D. M.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.
2014-03-15
Purpose: This paper describes the development of a forward planning process for modulated electron radiotherapy (MERT). The approach is based on a previously developed electron beam model used to calculate dose distributions of electron beams shaped by a photon multileaf collimator (pMLC). Methods: As the electron beam model has already been implemented into the Swiss Monte Carlo Plan environment, the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) can be included in the planning process for MERT. In a first step, CT data are imported into Eclipse and a pMLC-shaped electron beam is set up. This initial electron beam is then divided into segments, with the electron energy in each segment chosen according to the distal depth of the planning target volume (PTV) in the beam direction. In order to improve the homogeneity of the dose distribution in the PTV, a feathering process (Gaussian edge feathering) is launched, which results in a number of feathered segments. For each of these segments a dose calculation is performed employing the in-house developed electron beam model along with the macro Monte Carlo dose calculation algorithm. Finally, an automated weight optimization of all segments is carried out and the total dose distribution is read back into Eclipse for display and evaluation. One academic and two clinical situations are investigated for possible benefits of MERT treatment compared to standard treatments performed in our clinics and to treatment with a bolus electron conformal (BolusECT) method. Results: The MERT treatment plan of the academic case was superior to the standard single-segment electron treatment plan in terms of organ-at-risk (OAR) sparing. Further, a comparison between an unfeathered and a feathered MERT plan showed better PTV coverage and homogeneity for the feathered plan, with V95% increased from 90% to 96% and V107% decreased from 8% to nearly 0%. For a clinical breast boost irradiation, the MERT plan
Modeling radiation from the atmosphere of Io with Monte Carlo methods
NASA Astrophysics Data System (ADS)
Gratiy, Sergey
Conflicting observations regarding the dominance of either sublimation or volcanism as the source of the atmosphere on Io and disparate reports on the extent of its spatial distribution and the absolute column abundance invite the development of detailed computational models capable of improving our understanding of Io's unique atmospheric structure and origin. To validate a global numerical model of Io's atmosphere against astronomical observations requires a 3-D spherical-shell radiative transfer (RT) code to simulate disk-resolved images and disk-integrated spectra from the ultraviolet to the infrared spectral region. In addition, comparison of simulated and astronomical observations provides important information to improve existing atmospheric models. In order to achieve this goal, a new 3-D spherical-shell forward/backward photon Monte Carlo code capable of simulating radiation from absorbing/emitting and scattering atmospheres with an underlying emitting and reflecting surface was developed. A new implementation of calculating atmospheric brightness in scattered sunlight is presented utilizing the notion of an "effective emission source" function. This allows for the accumulation of the scattered contribution along the entire path of a ray and the calculation of the atmospheric radiation when both scattered sunlight and thermal emission contribute to the observed radiation---which was not possible in previous models. A "polychromatic" algorithm was developed for application with the backward Monte Carlo method and was implemented in the code. It allows one to calculate radiative intensity at several wavelengths simultaneously, even when the scattering properties of the atmosphere are a function of wavelength. The application of the "polychromatic" method improves the computational efficiency because it reduces the number of photon bundles traced during the simulation. A 3-D gas dynamics model of Io's atmosphere, including both sublimation and volcanic
Adaptive δf Monte Carlo Method for Simulation of RF-heating and Transport in Fusion Plasmas
Hoeoek, J.; Hellsten, T.
2009-11-26
Determining the distribution function of the plasma species is essential for modeling heating and transport in fusion plasmas. Characteristic of RF heating is the creation of particle distributions with a high-energy tail. In the high-energy region the deviation from a Maxwellian distribution is large, while in the low-energy region the distribution is close to a Maxwellian because of the velocity dependence of the collision frequency. Because of the geometry and orbit topology, Monte Carlo methods are frequently used. To avoid simulating the thermal part, δf methods are beneficial. Here we present a new δf Monte Carlo method with an adaptive scheme for reducing the total variance and sources, suitable for calculating the distribution function during RF heating.
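The advantage of carrying only the deviation from the Maxwellian can be seen in a toy estimator (an illustrative 1D sketch, not the paper's adaptive scheme): for a small perturbation f = f_M(1 + ε(v² − 1)), the δf marker weights are O(ε), so the statistical noise on the perturbed moment is O(ε) smaller than for a full-f estimate of the same quantity.

```python
import numpy as np

def deltaf_demo(eps=0.01, n_markers=100_000, seed=0):
    """Compare a full-f and a delta-f Monte Carlo estimate of the energy
    moment carried by a small deviation from a Maxwellian. Here
    f(v) = f_M(v) * (1 + eps*(v^2 - 1)), so the exact value of
    int v^2 (f - f_M) dv is 2*eps (illustrative 1D sketch)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n_markers)          # markers drawn from f_M
    # full-f: estimate int v^2 f dv by reweighting, then subtract the
    # exact Maxwellian contribution <v^2 * v^2-moment> = 3
    full_f = np.mean((1 + eps * (v**2 - 1)) * v**2) - 3.0
    # delta-f: each marker carries only the deviation weight w = (f - f_M)/f_M
    w = eps * (v**2 - 1)
    delta_f = np.mean(w * v**2)
    return full_f, delta_f
```

Both estimators target 2ε = 0.02, but the δf statistical error is roughly ε times smaller for the same number of markers.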
Geometrically-compatible 3-D Monte Carlo and discrete-ordinates methods
Morel, J.E.; Wareing, T.A.; McGhee, J.M.; Evans, T.M.
1998-12-31
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The purpose of this project was two-fold. The first purpose was to develop a deterministic discrete-ordinates neutral-particle transport scheme for unstructured tetrahedral spatial meshes, and implement it in a computer code. The second purpose was to modify the MCNP Monte Carlo radiation transport code to use adjoint solutions from the tetrahedral-mesh discrete-ordinates code to reduce the statistical variance of Monte Carlo solutions via a weight-window approach. The first task has resulted in a deterministic transport code that is much more efficient for modeling complex 3-D geometries than any previously existing deterministic code. The second task has resulted in a powerful new capability for dramatically reducing the cost of difficult 3-D Monte Carlo calculations.
Capote, Roberto; Smith, Donald L.
2008-12-15
The Unified Monte Carlo method (UMC) has been suggested to avoid certain limitations and approximations inherent to the well-known Generalized Least Squares (GLS) method of nuclear data evaluation. This contribution reports on an investigation of the performance of the UMC method in comparison with the GLS method. This is accomplished by applying both methods to simple examples with few input values that were selected to explore various features of the evaluation process that impact upon the quality of an evaluation. Among the issues explored are: i) convergence of UMC results with the number of Monte Carlo histories and the ranges of sampled values; ii) a comparison of Monte Carlo sampling using the Metropolis scheme and a brute force approach; iii) the effects of large data discrepancies; iv) the effects of large data uncertainties; v) the effects of strong or weak model or experimental data correlations; and vi) the impact of ratio data and integral data. Comparisons are also made of the evaluated results for these examples when the input values are first transformed to comparable logarithmic values prior to performing the evaluation. Some general conclusions that are applicable to more realistic evaluation exercises are offered.
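Issue (ii) above, Metropolis sampling versus a brute-force approach, can be sketched for the simplest possible evaluation: a single quantity measured several times with Gaussian uncertainties. The numbers are hypothetical; for uncorrelated linear data the GLS answer is the inverse-variance weighted mean, which the sampler should reproduce.

```python
import numpy as np

def metropolis_evaluate(data, sigmas, n_steps=50_000, step=0.5, seed=1):
    """Evaluate a single quantity from (possibly discrepant) measurements
    by Metropolis sampling of the Gaussian likelihood: a toy sketch of
    the Monte Carlo alternative to Generalized Least Squares."""
    rng = np.random.default_rng(seed)
    data, sigmas = np.asarray(data, float), np.asarray(sigmas, float)

    def log_like(x):
        return -0.5 * np.sum(((data - x) / sigmas) ** 2)

    x = data.mean()
    samples = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal()
        if np.log(rng.random()) < log_like(prop) - log_like(x):
            x = prop                         # accept the proposed move
        samples.append(x)
    return np.mean(samples[n_steps // 10:])  # discard burn-in

# GLS result for uncorrelated data: sum(d/s^2) / sum(1/s^2),
# e.g. data [1.0, 2.0] with sigmas [0.1, 0.2] gives 1.2
```

For this linear Gaussian case the Metropolis mean converges to the GLS weighted mean; the interesting UMC behaviour appears only for nonlinear models or non-Gaussian likelihoods.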
NASA Astrophysics Data System (ADS)
Harries, Tim J.
2015-04-01
We present a set of new numerical methods that are relevant to calculating radiation pressure terms in hydrodynamics calculations, with a particular focus on massive star formation. The radiation force is determined from a Monte Carlo estimator and enables a complete treatment of the detailed microphysics, including polychromatic radiation and anisotropic scattering, in both the free-streaming and optically thick limits. Since the new method is computationally demanding we have developed two new methods that speed up the algorithm. The first is a photon packet splitting algorithm that enables efficient treatment of the Monte Carlo process in very optically thick regions. The second is a parallelization method that distributes the Monte Carlo workload over many instances of the hydrodynamic domain, resulting in excellent scaling of the radiation step. We also describe the implementation of a sink particle method that enables us to follow the accretion on to, and the growth of, the protostars. We detail the results of extensive testing and benchmarking of the new algorithms.
Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods
NASA Astrophysics Data System (ADS)
Sohn, Ilyoup
approximately 1% was achieved, with an efficiency about three times faster than the NEQAIR code. To perform accurate and efficient analyses of chemically reacting flowfield-radiation interactions, the direct simulation Monte Carlo (DSMC) and photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield-radiation coupling from transitional to peak-heating freestream conditions. Non-catalytic and fully catalytic surface conditions were modeled, and good agreement of the stagnation-point convective heating between the DSMC and continuum fluid dynamics (CFD) calculations was achieved under the assumption of a fully catalytic surface. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC - FV-PMC simulations with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium are presented. It is found that, except at the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both the heavy-particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC - FV-PMC coupled simulations are compared with previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport. The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rates in the excitation/de-excitation processes and the radiative transport equation (RTE) must therefore be coupled simultaneously to account for non-local effects.
The QSS model is presented to predict the electronic state populations of radiating gas species taking
Simulation of aggregating particles in complex flows by the lattice kinetic Monte Carlo method
NASA Astrophysics Data System (ADS)
Flamm, Matthew H.; Sinno, Talid; Diamond, Scott L.
2011-01-01
We develop and validate an efficient lattice kinetic Monte Carlo (LKMC) method for simulating particle aggregation in laminar flows with spatially varying shear rate, such as parabolic flow or flows with standing vortices. A contact-time model was developed to describe the particle-particle collision efficiency as a function of the local shear rate, G, and approach angle, θ. This model effectively accounts for the hydrodynamic interactions between approaching particles, which are not explicitly considered in the LKMC framework. For imperfect collisions, the derived collision efficiency ε = 1 − ∫₀^{π/2} sin θ exp(−2 cot θ Γ_agg/G) dθ was found to depend only on Γ_agg/G, where Γ_agg is the specified aggregation rate. For aggregating platelets in tube flow, Γ_agg = 0.683 s⁻¹ predicts the experimentally measured ε across a physiological range (G = 40-1000 s⁻¹) and is consistent with αIIbβ3-fibrinogen bond dynamics. Aggregation in parabolic flow resulted in the largest aggregates forming near the wall, where shear rate and residence time were maximal; however, intermediate regions between the wall and the center exhibited the highest aggregation rate due to depletion of reactants nearest the wall. Then, motivated by stenotic or valvular flows, we employed the LKMC simulation developed here for baffled geometries that exhibit regions of squeezing flow and standing recirculation zones. In these calculations, the largest aggregates were formed within the vortices (maximal residence time), while squeezing flow regions corresponded to zones of highest aggregation rate.
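The collision-efficiency integral can be evaluated numerically to check its limiting behaviour (a sketch; the midpoint rule handles the endpoint at θ = 0, where the integrand vanishes):

```python
import numpy as np

def collision_efficiency(gamma_agg, shear_rate, n=20_000):
    """eps = 1 - int_0^(pi/2) sin(t) * exp(-2*cot(t)*Gamma_agg/G) dt,
    which depends on the aggregation rate and the shear rate only
    through the ratio Gamma_agg/G (midpoint-rule quadrature)."""
    ratio = gamma_agg / shear_rate
    h = (np.pi / 2) / n
    t = (np.arange(n) + 0.5) * h       # midpoints; avoids t = 0 exactly
    integrand = np.sin(t) * np.exp(-2.0 * ratio / np.tan(t))
    return 1.0 - np.sum(integrand) * h
```

With the paper's platelet value Γ_agg = 0.683 s⁻¹, the efficiency decreases as G grows across the physiological range, and ε → 0 as Γ_agg/G → 0, as the formula requires.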
Recovering the inflationary potential: An analysis using flow methods and Markov chain Monte Carlo
NASA Astrophysics Data System (ADS)
Powell, Brian A.
Since its inception in 1980 by Guth [1], inflation has emerged as the dominant paradigm for describing the physics of the early universe. While inflation has matured theoretically over two decades, it has only recently begun to be rigorously tested observationally. Measurements of the cosmic microwave background (CMB) and large-scale structure surveys (LSS) have begun to unravel the mysteries of the inflationary epoch with exquisite and unprecedented accuracy. This thesis is a contribution to the effort of reconstructing the physics of inflation. This information is largely encoded in the potential energy function of the inflaton, the field that drives the inflationary expansion. With little theoretical guidance as to the probable form of this potential, reconstruction is a predominantly data-driven endeavor. This thesis presents an investigation of the constrainability of the inflaton potential given current CMB and LSS data. We develop a methodology based on the inflationary flow formalism that provides an assessment of our current ability to resolve the form of the inflaton potential in the face of experimental and statistical error. We find that there is uncertainty regarding the initial dynamics of the inflaton field, related to the poor constraints that can be drawn on the primordial power spectrum on large scales. We also investigate the future prospects of potential reconstruction, as might be expected when data from ESA's Planck Surveyor becomes available. We develop an approach that utilizes Markov chain Monte Carlo to analyze the statistical properties of the inflaton potential. Besides providing constraints on the parameters of the potential, this method makes it possible to perform model selection on the inflationary model space. While future data will likely determine the general features of the inflaton, there will likely be many different models that remain good fits to the data. Bayesian model selection will then be needed to draw comparisons
Use of Monte Carlo methods in environmental risk assessments at the INEL: Applications and issues
Harris, G.; Van Horn, R.
1996-06-01
The EPA is increasingly considering the use of probabilistic risk assessment techniques as an alternative or refinement of the current point estimate of risk. This report provides an overview of the probabilistic technique called Monte Carlo Analysis. Advantages and disadvantages of implementing a Monte Carlo analysis over a point estimate analysis for environmental risk assessment are discussed. The general methodology is provided along with an example of its implementation. A phased approach to risk analysis that allows iterative refinement of the risk estimates is recommended for use at the INEL.
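A minimal illustration of the contrast between a point estimate and a Monte Carlo risk distribution. All parameter values and the risk equation below are hypothetical, chosen only to show the mechanics of the probabilistic approach:

```python
import numpy as np

def mc_risk(n=100_000, seed=2):
    """Toy probabilistic risk estimate, risk = concentration * intake * SF,
    with a lognormal concentration and a uniform intake rate (hypothetical
    values, for illustrating the Monte Carlo approach only)."""
    rng = np.random.default_rng(seed)
    conc = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=n)   # mg/L
    intake = rng.uniform(1.0, 3.0, size=n)                      # L/day
    risk = conc * intake * 1e-6            # hypothetical slope factor
    # deterministic point estimate using the central parameter values
    point = 2.0 * 2.0 * 1e-6
    return point, np.percentile(risk, 95)
```

Rather than a single number, the analysis yields a distribution whose upper percentiles quantify how conservative (or not) the point estimate is.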
Gamma-ray spectrometry analysis of pebble bed reactor fuel using Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Chen, Jianwei; Hawari, Ayman I.; Zhao, Zhongxiang; Su, Bingjing
2003-06-01
Monte Carlo simulations were used to study the gamma-ray spectra of pebble bed reactor fuel at various levels of burnup. A fuel depletion calculation was performed using the ORIGEN2.1 code, which yielded the gamma-ray source term that was introduced into the input of an MCNP4C simulation. The simulation assumed the use of a 100% efficient high-purity coaxial germanium (HPGe) detector and a pebble placed at a distance of 100 cm from the detector, and accounted for Gaussian broadening of the gamma-ray peaks. Previously, it was shown that 137Cs, 60Co (introduced as a dopant), and 134Cs are the relevant burnup indicators. The results show that the 662 keV line of 137Cs lies in close proximity to the intense 658 keV line of 97Nb, which results in spectral interference between the lines. However, the 1333 keV line of 60Co and selected 134Cs lines (e.g., at 605 keV) are free from spectral interference, which enhances the possibility of their utilization as relative burnup indicators.
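The Gaussian peak broadening applied to the simulated spectra can be sketched as follows (illustrative FWHM and line intensities, not the values from the study):

```python
import numpy as np

def broadened_spectrum(lines, fwhm_kev, e_grid):
    """Apply Gaussian detector broadening to discrete gamma lines: each
    (energy_keV, intensity) pair becomes a normal peak with the given
    FWHM, summed on the energy grid (illustrative sketch)."""
    sigma = fwhm_kev / 2.3548            # FWHM = 2*sqrt(2 ln 2) * sigma
    spec = np.zeros_like(e_grid, dtype=float)
    for energy, intensity in lines:
        spec += intensity * np.exp(-0.5 * ((e_grid - energy) / sigma) ** 2)
    return spec
```

Plotting two nearby lines such as 662 keV and 658 keV at increasing FWHM values shows how the peaks merge and spectral interference arises.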
NASA Astrophysics Data System (ADS)
Ceballos, C.; Baldazzi, G.; Bollini, D.; Cabal Rodríguez, A. E.; Dabrowski, W.; Días García, A.; Gambaccini, M.; Giubellino, P.; Gombia, M.; Grybos, P.; Idzik, M.; Marzari-Chiesa, A.; Montaño, L. M.; Prino, F.; Ramello, L.; Sitta, M.; Swientek, K.; Taibi, A.; Tomassi, E.; Tuffanelli, A.; Wiacek, P.
2003-09-01
We present first results of a Monte Carlo simulation, performed with the general-purpose MCNP-4C transport code, of an experimental facility at the S. Orsola hospital in Bologna for studying the possible application of an X-ray detection system based on a silicon strip detector to dual-energy angiography. The quasi-monochromatic X-ray beam, with the detector in the edge-on configuration, has been used to acquire images of a test object at two different energies (namely 31 and 35 keV) suitable for K-edge subtraction angiography. The test object was a Plexiglas step-wedge phantom with four cylindrical cavities of 1 mm diameter. The cavities were drilled and filled with iodinated contrast medium whose concentration varied from 370 mg/ml to 92 mg/ml. Both the profiles obtained from measurements and the generated images were reproduced by computer simulation, in a first approach to using this technique as an evaluation tool for future developments of the experimental setup.
Cho, S; Shin, E H; Kim, J; Ahn, S H; Chung, K; Kim, D-H; Han, Y; Choi, D H
2015-06-15
Purpose: To evaluate the shielding wall design to protect patients, staff and member of the general public for secondary neutron using a simply analytic solution, multi-Monte Carlo code MCNPX, ANISN and FLUKA. Methods: An analytical and multi-Monte Carlo method were calculated for proton facility (Sumitomo Heavy Industry Ltd.) at Samsung Medical Center in Korea. The NCRP-144 analytical evaluation methods, which produced conservative estimates on the dose equivalent values for the shielding, were used for analytical evaluations. Then, the radiation transport was simulated with the multi-Monte Carlo code. The neutron dose at evaluation point is got by the value using the production of the simulation value and the neutron dose coefficient introduced in ICRP-74. Results: The evaluation points of accelerator control room and control room entrance are mainly influenced by the point of the proton beam loss. So the neutron dose equivalent of accelerator control room for evaluation point is 0.651, 1.530, 0.912, 0.943 mSv/yr and the entrance of cyclotron room is 0.465, 0.790, 0.522, 0.453 mSv/yr with calculation by the method of NCRP-144 formalism, ANISN, FLUKA and MCNP, respectively. The most of Result of MCNPX and FLUKA using the complicated geometry showed smaller values than Result of ANISN. Conclusion: The neutron shielding for a proton therapy facility has been evaluated by the analytic model and multi-Monte Carlo methods. We confirmed that the setting of shielding was located in well accessible area to people when the proton facility is operated.
Somasundaram, E.; Palmer, T. S.
2013-07-01
In this paper, we present the work that has been done to implement variance reduction techniques in Tortilla, a three-dimensional, multigroup Monte Carlo code that works within the framework of the commercial deterministic code Attila. This project aims to develop an integrated hybrid code that seamlessly takes advantage of deterministic and Monte Carlo methods for deep-shielding radiation detection problems. Tortilla takes advantage of Attila's features for generating the geometric mesh, cross-section library and source definitions. Tortilla can also read importance functions (such as the adjoint scalar flux) generated from deterministic calculations performed in Attila and use them to employ variance reduction schemes in the Monte Carlo simulation. The variance reduction techniques implemented in Tortilla are based on the CADIS (Consistent Adjoint Driven Importance Sampling) method and the LIFT (Local Importance Function Transform) method. These methods use the results of an adjoint deterministic calculation to bias the particle transport through techniques such as source biasing, survival biasing, transport biasing and weight windows. The results obtained so far and the challenges faced in implementing the variance reduction techniques are reported here. (authors)
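The weight-window game that these schemes rely on can be sketched as follows (a generic illustration of splitting and Russian roulette, not Tortilla's actual implementation):

```python
import numpy as np

def weight_window(weight, w_low, rng, ratio=2.0):
    """Play the weight-window game for one particle: split if the weight
    is above the upper bound, Russian-roulette if below the lower bound.
    Returns the list of surviving particle weights; the expected total
    weight is preserved by construction."""
    w_up = ratio * w_low
    if weight > w_up:                 # split into n comparable copies
        n = int(np.ceil(weight / w_up))
        return [weight / n] * n
    if weight < w_low:                # roulette: survive with p = w / w_up
        if rng.random() < weight / w_up:
            return [w_up]
        return []
    return [weight]
```

In a hybrid code the bounds w_low vary in space and energy, set inversely proportional to the adjoint (importance) function from the deterministic calculation, so particles are multiplied where they matter and culled where they do not.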
Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Bin; Wang, Lin; Peng, Kuan; Liang, Jimin; Tian, Jie
2010-01-01
During the past decade, the Monte Carlo method has been widely applied in optical imaging to simulate the photon transport process inside tissues. However, the method has not yet been effectively extended to the simulation of free-space photon transport. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, consisting of the simulation of photon transport both in tissues and in free space. Specifically, the simplification theory of the lens system is utilized to model the camera lens equipped in the optical imaging system, and the Monte Carlo method is employed to describe the energy transformation from the tissue surface to the CCD camera. The focusing effect of the camera lens is also considered to establish the relationship between corresponding points on the tissue surface and the CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and effective. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated with a cylindrical phantom based on real experimental results. PMID:20689705
IMRT dose delivery effects in radiotherapy treatment planning using Monte Carlo methods
NASA Astrophysics Data System (ADS)
Tyagi, Neelam
Inter- and intra-leaf transmission and head scatter can play significant roles in Intensity Modulated Radiation Therapy (IMRT)-based treatment deliveries. In order to accurately calculate the dose in the IMRT planning process, it is therefore important that the detailed geometry of the multi-leaf collimator (MLC), in addition to other components in the accelerator treatment head be accurately modeled. In this thesis Monte Carlo (MC) methods have been used to model the treatment head of a Varian linear accelerator. A comprehensive model of the Varian 120-leaf MLC has been developed within the DPM MC code and has been verified against measurements in homogeneous and heterogeneous phantom geometries under different IMRT delivery circumstances. Accuracy of the MLC model in simulating details in the leaf geometry has been established over a range of arbitrarily shaped fields and IMRT fields. A sensitivity analysis of the effect of the electron-on-target parameters and the structure of the flattening filter on the accuracy of calculated dose distributions has been conducted. Adjustment of the electron-on-target parameters resulting in optimal agreement with measurements was an iterative process, with the final parameters representing a tradeoff between small (3x3 cm2) and large (40x40 cm2) field sizes. A novel method based on adaptive kernel density estimation, in the phase space simulation process is also presented as an alternative to particle recycling. Using this model dosimetric differences between MLC-based static (SMLC) and dynamic (DMLC) deliveries have been investigated. Differences between SMLC and DMLC, possibly related to fluence and/or spectral changes, appear to vary systematically with the density of the medium. The effect of fluence modulation due to leaf sequencing shows differences, up to 10% between plans developed with 1% and 10% fluence intervals for both SMLC and DMLC-delivered sequences. Dose differences between planned and delivered leaf sequences
NASA Astrophysics Data System (ADS)
Das, Arnab; Chakrabarti, Bikas K.
2008-12-01
Here we discuss the annealing behavior of an infinite-range ±J Ising spin glass in the presence of a transverse field, using a zero-temperature quantum Monte Carlo method. Within this simulation scheme, we demonstrate that quantum annealing not only helps find the ground state of a classical spin glass but can also help simulate the ground state of a quantum spin glass much more efficiently, in particular when the transverse field is low.
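As a much simpler classical stand-in for the zero-temperature quantum scheme (explicitly not the authors' method), thermal simulated annealing of the same infinite-range ±J spin glass looks like this:

```python
import numpy as np

def anneal_pm_j(n_spins=32, n_sweeps=3000, seed=3):
    """Classical simulated annealing of an infinite-range +/-J Ising
    spin glass, H = -sum_{i<j} J_ij s_i s_j with J_ij = +/-1
    (a classical illustration, not the paper's quantum annealing)."""
    rng = np.random.default_rng(seed)
    J = rng.choice([-1.0, 1.0], size=(n_spins, n_spins))
    J = np.triu(J, 1)
    J = J + J.T                                # symmetric, zero diagonal
    s = rng.choice([-1.0, 1.0], size=n_spins)
    for sweep in range(n_sweeps):
        T = max(3.0 * (1 - sweep / n_sweeps), 1e-3)   # linear cooling
        for i in range(n_spins):
            dE = 2.0 * s[i] * (J[i] @ s)       # energy cost of flipping s_i
            if dE < 0 or rng.random() < np.exp(-dE / T):
                s[i] = -s[i]                   # Metropolis acceptance
    energy = -0.5 * s @ J @ s
    return s, energy
```

Quantum annealing replaces the thermal fluctuations (temperature T) with quantum fluctuations (the transverse field), which tunnel through, rather than climb over, the energy barriers of the glassy landscape.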
NASA Astrophysics Data System (ADS)
Makarevich, K. O.; Minenko, V. F.; Verenich, K. A.; Kuten, S. A.
2016-05-01
This work is dedicated to modeling dental radiographic examinations in order to assess the absorbed organ doses and effective doses of patients. The X-ray spectra are simulated with the TASMIP empirical model. Doses are assessed on the basis of the Monte Carlo method, using the MCNP code with the ICRP voxel phantoms. The results of the assessment of doses to individual organs and effective doses for different types of dental examinations and X-ray tube parameters are presented.
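The final step, combining the organ doses into an effective dose, is the ICRP tissue-weighted sum E = Σ_T w_T H_T, sketched below (the two weights shown are the ICRP-103 values for those tissues; the dose numbers in the test are placeholders):

```python
def effective_dose_mSv(organ_doses_mSv, tissue_weights):
    """Effective dose E = sum over tissues T of w_T * H_T, where H_T is
    the equivalent dose to tissue T and w_T its tissue weighting factor."""
    return sum(tissue_weights[t] * h for t, h in organ_doses_mSv.items())
```

A full calculation would include every tissue in the ICRP weighting-factor table and the remainder term; the sketch only shows the arithmetic.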
D'Angola, A.; Tuttafesta, M.; Guadagno, M.; Santangelo, P.; Laricchiuta, A.; Colonna, G.; Capitelli, M.
2012-11-27
Calculations of the thermodynamic properties of a helium plasma using the Reaction Ensemble Monte Carlo (REMC) method are presented. Nonideal effects at high pressure are observed. Calculations performed using Exp-6 or multi-potential curves for the neutral-charge interactions show no significant differences in the thermodynamic conditions considered. The results have been obtained using a Graphics Processing Unit (GPU) CUDA C version of REMC.
Magnetic interpretation by the Monte Carlo method with application to the intrusion of the Crimea
NASA Astrophysics Data System (ADS)
Gryshchuk, Pavlo
2014-05-01
The study involves the application of geophysical methods to geological mapping. Magnetic and radiometric measurements were used to delineate the intrusive bodies in the Bakhchysarai region of the Crimea. Proton magnetometers were used to measure the total magnetic field in the area and at a variation station, and a scintillation radiometer was used to determine the radiation dose. The magnetic susceptibility of the rocks was measured with a susceptibility meter, which was possible because the rock mass is exposed at the surface in the study area. Anomalous values of the magnetic intensity were obtained as the difference between the observed measurements and the values at the variation station. From the geophysical data, maps of the anomalous magnetic field, radiation dose and magnetic susceptibility were produced. The geology of the area consists of magmatic rocks and overlying sedimentary rocks. The main task of the research was to study the geometry and the magnetization vector of the igneous rocks. The intrusive body is composed of diabase and has an average magnetic susceptibility, a weak dose rate and a negative magnetic field. The sedimentary rocks are represented by clays; they have a low magnetic susceptibility and an average dose rate. The map of magnetic susceptibility gave information about the values and distribution of magnetized bodies close to the surface. These data were used to control and refine the magnetic-property data for the magnetic modelling. The magnetic anomaly map shows the distribution of magnetization at depth. The interpretation profile was located perpendicular to the strike of the intrusive body, and modelling was performed for the magnetic field along this profile. The geological medium was approximated by a filling of rectangular blocks, and the magnetization value and direction were fitted for every block. The fitting was carried out using the Monte Carlo method in layers from bottom to top. After passing through all the blocks were fixed magnetic parameters of the block with the best
Kumar, Sudhir; Srinivasan, P; Sharma, S D; Saxena, Sanjay Kumar; Bakshi, A K; Dash, Ashutosh; Babu, D A R; Sharma, D N
2015-09-01
The Isotope Production and Applications Division of Bhabha Atomic Research Centre developed (32)P patch sources for the treatment of superficial tumors. The surface dose rate of a newly developed (32)P patch source of nominal diameter 25 mm was measured experimentally using a standard extrapolation ionization chamber and Gafchromic EBT film. A Monte Carlo model of the (32)P patch source along with the extrapolation chamber was also developed to estimate the surface dose rates from these sources. The surface dose rates to tissue (cGy/min) measured using the extrapolation chamber and the radiochromic films are 82.03±4.18 (k=2) and 79.13±2.53 (k=2), respectively. The two values obtained from the two independent experimental methods are in good agreement with each other, within a variation of 3.5%. The surface dose rate to tissue (cGy/min) estimated using the MCNP Monte Carlo code works out to be 77.78±1.16 (k=2). The maximum deviation between the surface dose rates obtained by Monte Carlo and by the extrapolation chamber method is 5.2%, whereas the difference between the surface dose rates obtained by radiochromic film measurement and by the Monte Carlo simulation is 1.7%. The three values of the surface dose rate of the (32)P patch source obtained by the three independent methods are in good agreement with one another within the uncertainties associated with their measurement and calculation. This work has demonstrated that MCNP-based electron transport simulations are accurate enough for determining the dosimetry parameters of the indigenously developed (32)P patch sources for contact brachytherapy applications. PMID:26086681
NASA Astrophysics Data System (ADS)
Schlijper, A. G.; van Bergen, A. R. D.; Smit, B.
1990-01-01
We present and demonstrate an accurate, reliable, and computationally cheap method for the calculation of free energies in Monte Carlo simulations of lattice models. Even in the critical region it yields good results with comparatively short simulation runs. The method combines upper and lower bounds on the thermodynamic-limit entropy density to yield not only an accurate estimate of the free energy but also a bound on the possible error. The method is demonstrated on the two- and three-dimensional Ising models and the three-dimensional, three-state Potts model.
Alrefae, T
2014-12-01
A simple method of efficiency calibration for gamma spectrometry was performed. This method, which focused on measuring the radioactivity of (137)Cs in food samples, was based on Monte Carlo simulations available in the free-of-charge toolkit GEANT4. Experimentally, the efficiency values of a high-purity germanium detector were calculated for three reference materials representing three different food items. These efficiency values were compared with their counterparts produced by a computer code that simulated experimental conditions. Interestingly, the output of the simulation code was in acceptable agreement with the experimental findings, thus validating the proposed method. PMID:24214912
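The efficiency values being compared above are full-energy-peak efficiencies, computed from a reference source as the fraction of emitted gamma rays recorded in the peak (a sketch with made-up numbers; a real 137Cs measurement would use the 0.851 emission probability of the 662 keV line):

```python
def full_energy_peak_efficiency(net_counts, live_time_s, activity_bq, emission_prob):
    """FEPE = net peak counts / (A * t * P_gamma): the fraction of gamma
    rays emitted by the source that are recorded in the full-energy peak.
    This is the quantity compared between measurement and simulation."""
    return net_counts / (activity_bq * live_time_s * emission_prob)
```

In the simulation-based calibration, the same quantity is estimated by tallying full-energy depositions per simulated decay in a GEANT4 model of the detector and sample geometry.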
Development of CT scanner models for patient organ dose calculations using Monte Carlo methods
NASA Astrophysics Data System (ADS)
Gu, Jianwei
There is a serious and growing concern about the CT dose delivered by diagnostic CT examinations or image-guided radiation therapy imaging procedures. To better understand and accurately quantify the radiation dose due to CT imaging, Monte Carlo based CT scanner models are needed. This dissertation describes the development, validation, and application of detailed CT scanner models including a GE LightSpeed 16 MDCT scanner and two image-guided radiation therapy (IGRT) cone beam CT (CBCT) scanners, kV CBCT and MV CBCT. The modeling process considered the energy spectrum, beam geometry and movement, and bowtie filter (BTF). The methodology of validating the scanner models using reported CTDI values was also developed and implemented. Finally, the organ doses to different patients undergoing CT scans were obtained by integrating the CT scanner models with anatomically realistic patient phantoms. The tube current modulation (TCM) technique was also investigated for dose reduction. It was found that for RPI-AM, the thyroid, kidneys and thymus received the largest doses of 13.05, 11.41 and 11.56 mGy/100 mAs from the chest, abdomen-pelvis and CAP scans, respectively, using 120 kVp protocols. For RPI-AF, the thymus, small intestine and kidneys received the largest doses of 10.28, 12.08 and 11.35 mGy/100 mAs from the chest, abdomen-pelvis and CAP scans, respectively, using 120 kVp protocols. The dose to the fetus of the 3-month pregnant patient phantom was 0.13 mGy/100 mAs and 0.57 mGy/100 mAs from the chest and kidney scans, respectively. For the chest scan of the 6-month and the 9-month pregnant patient phantoms, the fetal doses were 0.21 mGy/100 mAs and 0.26 mGy/100 mAs, respectively. For MDCT with TCM schemas, the fetal dose can be reduced by 14%-25%. To demonstrate the applicability of the method proposed in this dissertation for modeling CT scanners, an additional MDCT scanner was modeled and validated using measured CTDI values. These results demonstrated that the
Nanothermodynamics of large iron clusters by means of a flat histogram Monte Carlo method
Basire, M.; Soudan, J.-M.; Angelié, C.
2014-09-14
The thermodynamics of iron clusters of various sizes, from 76 to 2452 atoms, typical of the catalyst particles used for carbon nanotube growth, has been explored by a flat histogram Monte Carlo (MC) algorithm (called the σ-mapping), developed by Soudan et al. [J. Chem. Phys. 135, 144109 (2011), Paper I]. This method provides the classical density of states, g_p(E_p), in the configurational space, in terms of the potential energy of the system, with good and well-controlled convergence properties, particularly in the melting phase transition zone which is of interest in this work. To describe the system, an iron potential has been implemented, called "corrected EAM" (cEAM), which approximates the MEAM potential of Lee et al. [Phys. Rev. B 64, 184102 (2001)] with an accuracy better than 3 meV/at, and a five times larger computational speed. The main simplification concerns the angular dependence of the potential, with a small impact on accuracy, while the screening coefficients S_ij are computed exactly with a fast algorithm. With this potential, ergodic explorations of the clusters can be performed efficiently in a reasonable computing time, at least in the upper half of the solid zone and above. Problems of ergodicity exist in the lower half of the solid zone, but routes to overcome them are discussed. The solid-liquid (melting) phase transition temperature T_m is plotted in terms of the cluster atom number N_at. The standard N_at^(-1/3) linear dependence (Pawlow law) is observed for N_at > 300, allowing an extrapolation up to the bulk metal at 1940 ± 50 K. For N_at < 150, a strong divergence from the Pawlow law is observed. The melting transition, which begins at the surface, is characterized by a Lindemann-Berry index and an atomic density analysis. Several new features are obtained for the thermodynamics of cEAM clusters, compared to the Rydberg pair potential clusters studied in Paper I.
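The Pawlow-law extrapolation described above amounts to a straight-line least-squares fit of T_m against N_at^(-1/3); the intercept at N_at → ∞ estimates the bulk melting temperature. A hedged sketch with an invented function name and synthetic data:

```python
def pawlow_extrapolation(n_atoms, t_melt):
    """Least-squares fit of T_m(N) = T_bulk - c * N**(-1/3).
    Returns (T_bulk, c); the intercept T_bulk is the bulk-metal estimate."""
    xs = [n ** (-1.0 / 3.0) for n in n_atoms]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(t_melt) / n
    slope = (sum(x * y for x, y in zip(xs, t_melt)) - n * mx * my) / \
            (sum(x * x for x in xs) - n * mx * mx)
    return my - slope * mx, -slope
```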
NASA Astrophysics Data System (ADS)
Medhat, M. E.; Demir, Nilgun; Akar Tarim, Urkiye; Gurler, Orhan
2014-08-01
Monte Carlo simulations, FLUKA and Geant4, were performed to study mass attenuation for various types of soil at 59.5, 356.5, 661.6, 1173.2 and 1332.5 keV photon energies. Appreciable variations are noted for all parameters when changing the photon energy and the chemical composition of the sample. The simulated results were compared with experimental data and with the XCOM program. The simulations show that the calculated mass attenuation coefficient values were closer to the experimental values than those obtained theoretically from the XCOM database for the same soil samples. The results indicate that Geant4 and FLUKA can be applied to estimate mass attenuation for various biological materials at different energies. The Monte Carlo method may be employed to make additional calculations on the photon attenuation characteristics of different soil samples collected from other places.
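Mass attenuation coefficients such as those compared above are commonly derived from narrow-beam transmission via the Beer-Lambert law, I = I0·exp(-μ_m·ρ·t). A minimal sketch of that relation (generic physics, not the FLUKA/Geant4 workflow of the paper):

```python
import math

def mass_attenuation(i0, i, density_g_cm3, thickness_cm):
    """Mass attenuation coefficient (cm^2/g) from a narrow-beam transmission
    measurement: invert I = I0 * exp(-mu_m * rho * t) for mu_m."""
    return math.log(i0 / i) / (density_g_cm3 * thickness_cm)
```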
A two-dimensional simulation of the GEC RF reference cell using a hybrid fluid-Monte Carlo method
Pak, H.; Riley, M.E.
1992-12-01
A two-dimensional fluid-Monte Carlo hybrid model is used to simulate the GEC reference cell. The 2-D model assumes azimuthal symmetry and accounts for the ground shield about the electrodes as well as the grounded chamber walls. The hybrid model consists of a Monte Carlo method for generating rates and a fluid model for transporting electrons and ions. In the fluid model, the electrons are transported using the continuity equation, and the electric fields are solved self-consistently using Poisson's equation. The Monte Carlo model transports electrons using the fluid-generated periodic electric field. The ionization rates are then obtained using the electron energy distribution function. An averaging method is used to speed the solution by transporting the ions in a time-averaged electric field with a corrected ambipolar-type diffusion. The simulation switches between the conventional and the averaging fluid model. Typically, the simulation runs from tens to hundreds of averaging fluid cycles before reentering the conventional fluid model for tens of cycles. Speed increases of a factor of 100 are possible.
Prytkova, Vera; Heyden, Matthias; Khago, Domarin; Freites, J Alfredo; Butts, Carter T; Martin, Rachel W; Tobias, Douglas J
2016-08-25
We present a novel multi-conformation Monte Carlo simulation method that enables the modeling of protein-protein interactions and aggregation in crowded protein solutions. This approach is relevant to a molecular-scale description of realistic biological environments, including the cytoplasm and the extracellular matrix, which are characterized by high concentrations of biomolecular solutes (e.g., 300-400 mg/mL for proteins and nucleic acids in the cytoplasm of Escherichia coli). Simulation of such environments necessitates the inclusion of a large number of protein molecules. Therefore, computationally inexpensive methods, such as rigid-body Brownian dynamics (BD) or Monte Carlo simulations, can be particularly useful. However, as we demonstrate herein, the rigid-body representation typically employed in simulations of many-protein systems gives rise to certain artifacts in protein-protein interactions. Our approach allows us to incorporate molecular flexibility in Monte Carlo simulations at low computational cost, thereby eliminating ambiguities arising from structure selection in rigid-body simulations. We benchmark and validate the methodology using simulations of hen egg white lysozyme in solution, a well-studied system for which extensive experimental data, including osmotic second virial coefficients, small-angle scattering structure factors, and multiple structures determined by X-ray and neutron crystallography and solution NMR, as well as rigid-body BD simulation results, are available for comparison. PMID:27063730
Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods
NASA Astrophysics Data System (ADS)
Sohn, Ilyoup
approximately 1 % was achieved with an efficiency about three times faster than the NEQAIR code. To perform accurate and efficient analyses of chemically reacting flowfield - radiation interactions, the direct simulation Monte Carlo (DSMC) and the photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield - radiation coupling from transitional to peak heating freestream conditions. The non-catalytic and fully catalytic surface conditions were modeled and good agreement of the stagnation-point convective heating between DSMC and continuum fluid dynamics (CFD) calculation under the assumption of fully catalytic surface was achieved. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC - FV-PMC simulations with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium are presented. It is found that except for the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both heavy particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC - FV-PMC coupled simulations are compared with the previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport. The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rate in the excitation/de-excitation processes and the radiative transport equation (RTE) must be coupled simultaneously to account for non-local effects. 
The QSS model is presented to predict the electronic state populations of radiating gas species taking
NASA Astrophysics Data System (ADS)
Aimi, Takeshi; Imada, Masatoshi
2007-08-01
We examine the Gaussian-basis Monte Carlo (GBMC) method introduced by Corney and Drummond. This method is based on an expansion of the density-matrix operator ρ̂ in terms of the coherent Gaussian-type operator basis Λ̂ and does not suffer from the minus-sign problem. The original method, however, often fails to reproduce the true ground state and causes systematic errors in calculated physical quantities because the samples are often trapped in metastable or symmetry-broken states. To overcome this difficulty, we combine the quantum-number projection scheme proposed by Assaad, Werner, Corboz, Gull, and Troyer with the importance sampling of the original GBMC method. This improvement allows us to carry out the importance sampling in the quantum-number-projected phase space. Comparisons with the previous quantum-number projection scheme indicate that, in our method, convergence to the ground state is accelerated, which makes it possible to extend the applicability and widen the range of tractable parameters of the GBMC method. The present scheme offers an efficient practical way of computation for strongly correlated electron systems beyond the range of system sizes, interaction strengths and lattice structures tractable by other computational methods such as the quantum Monte Carlo method.
Stochastic method for accommodation of equilibrating basins in kinetic Monte Carlo simulations
Van Siclen, Clinton D
2007-02-01
A computationally simple way to accommodate "basins" of trapping states in standard kinetic Monte Carlo simulations is presented. By assuming the system is effectively equilibrated in the basin, the residence time (time spent in the basin before escape) and the probabilities for transition to states outside the basin may be calculated. This is demonstrated for point defect diffusion over a periodic grid of sites containing a complex basin.
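The basin treatment above builds on the standard rejection-free kinetic Monte Carlo step, in which an event is selected with probability proportional to its rate and the waiting time is drawn from an exponential distribution with the total rate. A generic sketch (not the author's basin code; names are invented):

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free kinetic Monte Carlo step: choose an event with
    probability rate/total and draw an exponentially distributed waiting time."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for idx, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(rng.random()) / total  # mean waiting time is 1/total
    return idx, dt
```

The basin method replaces many such intra-basin hops with a single effective escape event whose residence time and exit probabilities are computed under the equilibration assumption.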
The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units
Hall, Clifford; Ji, Weixiao; Blaisten-Barojas, Estela
2014-02-01
We present a CPU-GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as a container for simulation data stored on the graphics card and as a floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To take full advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables, for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU-GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed-phase system of oligopyrrole chains. A benchmark shows a size-scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect several CPU-GPU duets in parallel. -- Highlights: •We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU-GPU duet. •The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU-GPU implementation. •Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles. •The testbed involves a polymeric system of oligopyrroles in the condensed phase. •The CPU-GPU parallelization includes dipole-dipole and Mie-Jones classic potentials.
3D dose distribution calculation in a voxelized human phantom by means of Monte Carlo method.
Abella, V; Miró, R; Juste, B; Verdú, G
2010-01-01
The aim of this work is to provide the reconstruction of a real human voxelized phantom by means of a MatLab program and the simulation of the irradiation of that phantom with the photon beam generated in a Theratron 780 (MDS Nordion) (60)Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. The project results in 3D dose mapping calculations inside the voxelized anthropomorphic head phantom. The program provides the voxelization by first processing the CT slices; the process follows a two-dimensional pixel and material identification algorithm on each slice and three-dimensional interpolation in order to describe the phantom geometry via small cubic cells, resulting in an MCNP input deck format output. Dose rates are calculated using the MCNP5 superimposed mesh tally FMESH, which gives the track-length estimate of the particle flux in units of particles/cm(2). Furthermore, the particle flux is converted into dose using the conversion coefficients extracted from the NIST Physical Reference Data. The voxelization using a three-dimensional interpolation technique in combination with the FMESH tool of the MCNP Monte Carlo code offers an optimal simulation which results in 3D dose mapping calculations inside anthropomorphic phantoms. This tool is very useful in radiation treatment assessments, in which voxelized phantoms are widely utilized. PMID:19892556
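The flux-to-dose conversion step described above can be sketched as a fold of the binned flux with interpolated conversion coefficients. This is a simplified linear-interpolation illustration with invented table data; it stands in for, but is not, the NIST tables or the MCNP tally machinery:

```python
import bisect

def interp_coefficient(e, table_e, table_c):
    """Linearly interpolate a flux-to-dose conversion coefficient at energy e
    from a table sorted by energy (clamped at the table ends)."""
    i = bisect.bisect_left(table_e, e)
    if i == 0:
        return table_c[0]
    if i == len(table_e):
        return table_c[-1]
    e0, e1 = table_e[i - 1], table_e[i]
    c0, c1 = table_c[i - 1], table_c[i]
    return c0 + (c1 - c0) * (e - e0) / (e1 - e0)

def dose_from_flux(bin_energies, bin_flux, table_e, table_c):
    """Fold a binned flux spectrum with interpolated conversion coefficients."""
    return sum(f * interp_coefficient(e, table_e, table_c)
               for e, f in zip(bin_energies, bin_flux))
```

Real tables are usually interpolated in log-log space; linear interpolation is used here purely for brevity.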
A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison
NASA Astrophysics Data System (ADS)
Jaboulay, J.-C.; Damian, F.; Douce, S.; Lopez, F.; Guenaut, C.; Aggery, A.; Poinot-Salanon, C.
2014-06-01
Physical analyses of the potential performance of LWRs with regard to fuel utilization require an important part of the work to be dedicated to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology give the opportunity to perform the validation of these models on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we use the Monte Carlo transport code TRIPOLI-4® to describe a whole 3D large-scale and highly heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4® in a relevant PWR core configuration. As a consequence, a 3D pin-by-pin model with a consistent number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at the equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile zones (depleted uranium). Furthermore, a tight-pitch lattice is selected (to increase the conversion of 238U into 239Pu), which leads to a harder neutron spectrum compared to a standard PWR assembly. In these conditions two main subjects are discussed: the Monte Carlo variance calculation and the assessment of the two-energy-group diffusion operator for the core calculation.
NASA Astrophysics Data System (ADS)
Jin, Shengye; Tamura, Masayuki
2013-10-01
The Monte Carlo Ray Tracing (MCRT) method is a versatile tool for simulating the radiative transfer regime of the Solar-Atmosphere-Landscape system. Moreover, it can compute the radiation distribution over a complex landscape configuration, such as a forest area. Owing to its robustness to changes in the complexity of the 3-D scene, the MCRT method is also employed to simulate the canopy radiative transfer regime as a validation source for other radiative transfer models. In MCRT modeling within vegetation, one basic step is setting up the canopy scene. 3-D scanning can represent canopy structure very accurately, but it is time consuming. A botanical growth function can model the growth of a single tree but cannot express the interaction among trees. The L-system is also a function-controlled tree growth simulation model, but it requires a large amount of computing memory; additionally, it models only the current tree pattern rather than tree growth while we simulate the radiative transfer regime. Therefore, it is much more practical to use regular solids such as ellipsoids, cones and cylinders to represent single canopies. Considering the allelopathy phenomenon seen in some open-forest optical images, each tree repels other trees within its own 'domain'. Based on this assumption, a stochastic circle packing algorithm is developed in this study to generate the 3-D canopy scene. The canopy coverage (%) and the number of trees (N) of the 3-D scene are declared first, similar to a random open-forest image. Accordingly, we randomly generate each canopy radius (rc). Then we set the circle centre coordinates on the XY-plane while keeping the circles separated from each other with the circle packing algorithm. To model each individual tree, we employ Ishikawa's tree growth regression model to set the tree parameters, including DBH (dt) and tree height (H). However, the relationship between canopy height (Hc) and trunk height (Ht) is
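The stochastic circle packing step described above can be sketched by rejection sampling: draw a candidate centre for each canopy circle and accept it only if it overlaps no previously placed circle. A minimal illustration (function name and parameters invented; the paper's algorithm may differ in detail):

```python
import math
import random

def pack_circles(n, radii, width, height, max_tries=10000, seed=0):
    """Stochastic circle packing sketch: place up to n non-overlapping circles
    of the given radii in a width x height plot by rejection sampling.
    Returns a list of (x, y, r) tuples; may be partial if space runs out."""
    rng = random.Random(seed)
    placed = []
    for r in radii[:n]:
        for _ in range(max_tries):
            x = rng.uniform(r, width - r)
            y = rng.uniform(r, height - r)
            if all(math.hypot(x - px, y - py) >= r + pr
                   for px, py, pr in placed):
                placed.append((x, y, r))
                break
        else:
            break  # could not place this circle; stop with a partial packing
    return placed
```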
Geochemical Characterization Using Geophysical Data and Markov Chain Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Chen, J.; Hubbard, S.; Rubin, Y.; Murray, C.; Roden, E.; Majer, E.
2002-12-01
if they were available from direct measurements or as variables otherwise. To estimate the geochemical parameters, we first assigned a prior model for each variable and a likelihood model for each type of data, which together define posterior probability distributions for each variable on the domain. Since the posterior probability distribution may involve hundreds of variables, we used a Markov Chain Monte Carlo (MCMC) method to explore each variable by generating and subsequently evaluating hundreds of realizations. Results from this case study showed that although geophysical attributes are not necessarily directly related to geochemical parameters, geophysical data could be very useful for providing accurate and high-resolution information about geochemical parameter distribution through their joint and indirect connections with hydrogeological properties such as lithofacies. This case study also demonstrated that MCMC methods were particularly useful for geochemical parameter estimation using geophysical data because they allow incorporation into the procedure of spatial correlation information, measurement errors, and cross correlations among different types of parameters.
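The MCMC exploration described above rests on the Metropolis-Hastings update: propose a perturbed realization, then accept or reject it based on the posterior density ratio. A minimal 1-D random-walk sketch (generic, not the authors' multivariate geochemical sampler; the function name is invented):

```python
import math
import random

def metropolis_hastings(log_post, x0, steps, step_size, seed=0):
    """Random-walk Metropolis sampler for a 1-D posterior specified by its
    log-density (known only up to an additive constant)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(steps):
        xp = x + rng.gauss(0.0, step_size)   # symmetric Gaussian proposal
        lpp = log_post(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):  # accept/reject
            x, lp = xp, lpp
        samples.append(x)
    return samples
```

In the geophysical setting each "step" perturbs a whole field of variables, but the acceptance rule is the same.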
Mizutani, Shohei; Takada, Yoshihisa; Kohno, Ryosuke; Hotta, Kenji; Tansho, Ryohei; Akimoto, Tetsuo
2016-01-01
Full Monte Carlo (FMC) calculation of dose distributions has been recognized to have superior accuracy compared with the pencil beam algorithm (PBA). However, since FMC methods require long calculation times, it is difficult to apply them to routine treatment planning at present. To improve the situation, a simplified Monte Carlo (SMC) method has been introduced to the dose kernel calculation applicable to the dose optimization procedure for proton pencil beam scanning. We have evaluated the accuracy of the SMC calculation by comparing a result of the dose kernel calculation using the SMC method with that using the FMC method in an inhomogeneous phantom. The dose distribution obtained by the SMC method was in good agreement with that obtained by the FMC method. To assess the usefulness of SMC calculation in clinical situations, we have compared results of the dose calculation using the SMC with those using the PBA method for three clinical cases of tumor treatment. The dose distributions calculated with the PBA dose kernels appear to be homogeneous in the planning target volumes (PTVs). In contrast, the dose distributions calculated with the SMC dose kernels with the spot weights optimized with the PBA method show largely inhomogeneous dose distributions in the PTVs, while those with the spot weights optimized with the SMC method have moderately homogeneous distributions in the PTVs. Calculation using the SMC method is faster than that using GEANT4 by three orders of magnitude. In addition, the graphics processing unit (GPU) boosts the calculation speed by 13 times for treatment planning using the SMC method. Hence, the SMC method will be applicable to routine clinical treatment planning for reproduction of the complex dose distribution more accurately than the PBA method in a reasonably short time by use of the GPU-based calculation engine. PMID:27074456
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
NASA Astrophysics Data System (ADS)
Sadovich, Sergey; Talamo, A.; Burnos, V.; Kiyavitskaya, H.; Fokov, Yu.
2014-06-01
In subcritical systems driven by an external neutron source, experimental methods based on a pulsed neutron source (PNS) and statistical techniques play an important role in reactivity measurement. Simulating these methods is a very time-consuming procedure, and several improvements to the neutronic calculations have been made for Monte Carlo programs. This paper introduces a new method for simulating PNS and statistical measurements. In this method, all events occurring in the detector during the simulation are stored in a file using the PTRAC feature of MCNP. Afterwards, a special post-processing code simulates the PNS and statistical methods. Additionally, different neutron pulse shapes and lengths, as well as detector dead time, can be included in the simulation. The methods described above were tested on the Yalina-Thermal subcritical assembly, located at the Joint Institute for Power and Nuclear Research SOSNY, Minsk, Belarus. Good agreement between experimental and simulated results was shown.
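Post-processing stored detector events into a PNS observable can be as simple as folding event times into the pulse period. The sketch below computes a prompt-to-delayed count ratio in the spirit of area-ratio reactivity methods; it is an illustrative stand-in with invented names, not the authors' PTRAC post-processor:

```python
def area_ratio(event_times_s, period_s, prompt_window_s):
    """Fold detector event times into one pulse period and split the counts
    into a prompt window (right after the pulse) and the remaining, delayed
    part of the period.  Returns the prompt/delayed count ratio."""
    prompt = delayed = 0
    for t in event_times_s:
        phase = t % period_s  # time since the start of the current pulse
        if phase < prompt_window_s:
            prompt += 1
        else:
            delayed += 1
    return prompt / delayed
```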
NASA Astrophysics Data System (ADS)
Nourazar, S. S.; Jahangiri, P.; Aboutalebi, A.; Ganjaei, A. A.; Nourazar, M.; Khadem, J.
2011-06-01
The effect of the new terms in the improved algorithm, the modified direct simulation Monte Carlo (MDSMC) method, is investigated by simulating a rarefied binary gas mixture flow inside a rotating cylinder. Dalton's law for the partial pressures contributed by each species of the binary gas mixture is incorporated into our simulation using both the MDSMC method and the direct simulation Monte Carlo (DSMC) method. Moreover, the effect of the exponent of the cosine of the deflection angle (α) in the inter-molecular collision models, the variable soft sphere (VSS) and the variable hard sphere (VHS), is investigated. The results of the simulation improve markedly with the MDSMC method compared with the DSMC method. Compared with the VHS model, the VSS model yields improved results for the mixture temperature at radial distances close to the cylinder wall, where the temperature reaches its maximum value.
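Dalton's law as used above states that each species contributes a partial pressure p_i = n_i·k_B·T and the mixture pressure is their sum. A minimal sketch (illustrative function, not the DSMC code):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def mixture_pressure(number_densities_m3, temperature_k):
    """Dalton's law for an ideal gas mixture: total pressure is the sum of
    the partial pressures p_i = n_i * k_B * T of each species.
    Returns (list of partial pressures in Pa, total pressure in Pa)."""
    partials = [n * K_B * temperature_k for n in number_densities_m3]
    return partials, sum(partials)
```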
NASA Astrophysics Data System (ADS)
Jehan, Musarrat
The response of a dynamic system is random. There is randomness in both the applied loads and the strength of the system. Therefore, to account for the uncertainty, the safety of the system must be quantified using its probability of survival (reliability). Monte Carlo Simulation (MCS) is a widely used method for probabilistic analysis because of its robustness. However, a challenge in reliability assessment using MCS is that the high computational cost limits the accuracy of MCS. Haftka et al. [2010] developed an improved sampling technique for reliability assessment called separable Monte Carlo (SMC) that can significantly increase the accuracy of estimation without increasing the cost of sampling. However, this method was applied to time-invariant problems involving two random variables only. This dissertation extends SMC to random vibration problems with multiple random variables. This research also develops a novel method for estimation of the standard deviation of the probability of failure of a structure under static or random vibration. The method is demonstrated on quarter car models and a wind turbine. The proposed method is validated using repeated standard MCS.
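Crude Monte Carlo reliability assessment, the baseline that separable Monte Carlo improves on, estimates the probability of failure as the fraction of samples whose limit-state function is negative. A generic sketch with an invented limit state (not the dissertation's SMC estimator):

```python
import math
import random

def mcs_failure_probability(limit_state, sampler, n, seed=0):
    """Crude Monte Carlo estimate of P(failure) = P(g(X) < 0), together with
    the binomial standard error of the estimator."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n) if limit_state(sampler(rng)) < 0)
    p = fails / n
    se = math.sqrt(p * (1 - p) / n)
    return p, se
```

The reported standard error illustrates the accuracy limitation the abstract mentions: halving it requires four times the samples.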
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Ren, Wei-Xin; Perera, Ricardo
2012-11-01
Stochastic model updating must be considered for quantifying uncertainties inherently existing in real-world engineering structures. By this means the statistical properties, instead of deterministic values, of structural parameters can be sought, indicating the parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly in the aspects of theoretical complexity and low computational efficiency. This study proposes a simple and cost-efficient method by decomposing a stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of programming simplification, fast response computation and easy inverse optimization. Monte Carlo simulation is adopted for generating samples from the assumed or measured probability distributions of responses. Each sample corresponds to an individual deterministic inverse process predicting the deterministic values of parameters. Then the parameter means and variances can be statistically estimated from all the parameter predictions obtained by running all the samples. Meanwhile, the analysis-of-variance approach is employed to evaluate the significance of parameter variability. The proposed method has been demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. It is found that, compared with existing stochastic model updating methods, the proposed method presents similar accuracy while its primary merits are its simple implementation and cost efficiency in response computation and inverse optimization.
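The decomposition described above can be miniaturized: draw response samples from their assumed distribution, invert a surrogate model for each sample, then estimate the parameter statistics from the collection of inversions. The sketch below uses a deliberately trivial linear response surface y = a + b·θ so the inverse step is closed-form (all names invented; real response surfaces are typically polynomial fits of FE results with numerical inverse optimization):

```python
import random
import statistics

def stochastic_update(a, b, response_mean, response_std, n, seed=0):
    """Decompose a stochastic updating run into deterministic inversions of a
    linear response surface y = a + b * theta: sample responses, invert each
    one, then report the estimated parameter mean and standard deviation."""
    rng = random.Random(seed)
    thetas = [(rng.gauss(response_mean, response_std) - a) / b
              for _ in range(n)]
    return statistics.mean(thetas), statistics.stdev(thetas)
```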
Calculation of images from an anthropomorphic chest phantom using Monte Carlo methods
NASA Astrophysics Data System (ADS)
Ullman, Gustaf; Malusek, Alexandr; Sandborg, Michael; Dance, David R.; Alm Carlsson, Gudrun
2006-03-01
Monte Carlo (MC) computer simulation of chest x-ray imaging systems has hitherto been performed using anthropomorphic phantoms with overly large (3 mm) voxels. The aim of this work was to develop and use a Monte Carlo computer program to compute projection x-ray images of a high-resolution anthropomorphic voxel phantom for visual clinical image quality evaluation and dose optimization. An Alderson anthropomorphic chest phantom was imaged in a CT scanner and reconstructed with isotropic voxels of 0.7 mm. The phantom was segmented and included in a Monte Carlo computer program using the collision density estimator to derive the energies imparted to the detector per unit area of each pixel by scattered photons. The image due to primary photons was calculated analytically, including a pre-calculated detector response function. Attenuation and scatter of x-rays in the phantom, grid and image detector were considered. Imaging conditions (tube voltage, anti-scatter device) were varied and the images compared to a real computed radiography (Fuji FCR 9501) image. Four imaging systems were simulated (two tube voltages, 81 kV and 141 kV, using either a grid with ratio 10 or a 30 cm air gap). The effect of scattered radiation on the visibility of thoracic vertebrae against the heart and lungs is demonstrated. The simplicity of changing the imaging conditions will allow us not only to produce images of existing imaging systems, but also of hypothetical, future imaging systems. We conclude that the calculated images of the high-resolution voxel phantom are suitable for human detection experiments of low-contrast lesions.
NASA Astrophysics Data System (ADS)
Alavirad, Hamzeh; Malekjani, Mohammad
2014-02-01
We constrain holographic dark energy (HDE) with a time-varying gravitational coupling constant in the framework of the modified Friedmann equations using cosmological data from type Ia supernovae, baryon acoustic oscillations, cosmic microwave background radiation and the X-ray gas mass fraction. Applying a Markov Chain Monte Carlo (MCMC) simulation, we obtain the best-fit values of the model and cosmological parameters within the 1σ confidence level (CL) in a flat universe as: , , and the HDE constant . Using the best-fit values, the equation of state of the dark component at the present time, w_d0, at the 1σ CL can cross the phantom boundary w = -1.
Davis, Mitchell A; Dunn, Andrew K
2015-06-29
Few methods exist that can accurately handle dynamic light scattering in the regime between single and highly multiple scattering. We demonstrate dynamic light scattering Monte Carlo (DLS-MC), a numerical method by which the electric field autocorrelation function may be calculated for arbitrary geometries if the optical properties and particle motion are known or assumed. DLS-MC requires no assumptions regarding the number of scattering events, the final form of the autocorrelation function, or the degree of correlation between scattering events. Furthermore, the method is capable of rapidly determining the effect of particle motion changes on the autocorrelation function in heterogeneous samples. We experimentally validated the method and demonstrated that the simulations match both the expected form and the experimental results. We also demonstrate the perturbation capabilities of the method by calculating the autocorrelation function of flow in a representation of mouse microvasculature and determining the sensitivity to flow changes as a function of depth. PMID:26191723
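The estimator at the heart of this approach can be sketched in a few lines. The toy version below is an illustrative reading of the DLS-MC idea, not the authors' code: each photon path scatters a random number of times, each event contributes a squared momentum transfer, and the field autocorrelation is the path average of a Brownian-motion decay factor. The Poisson path-length model and the Henyey-Greenstein anisotropy `g` are assumptions made for the sketch.

```python
import numpy as np

def g1_dlsmc(tau, k, D, mean_events=5.0, g=0.9, n_photons=50_000, seed=1):
    """Toy estimator in the spirit of DLS-MC (illustrative sketch, not the
    authors' code; assumes g != 0). Each photon path scatters
    n ~ Poisson(mean_events) times; event i transfers squared momentum
    q_i^2 = 2 k^2 (1 - cos(theta_i)), with cos(theta_i) drawn from a
    Henyey-Greenstein phase function of anisotropy g. For Brownian
    scatterers with mean-square displacement <dr^2> = 6 D tau, the field
    autocorrelation is the path average of exp(-sum_i q_i^2 <dr^2> / 6)."""
    rng = np.random.default_rng(seed)
    msd = 6.0 * D * tau
    n_events = rng.poisson(mean_events, size=n_photons)
    contrib = np.empty(n_photons)
    for i, n in enumerate(n_events):
        xi = rng.random(n)
        # invert the Henyey-Greenstein CDF for cos(theta)
        cos_t = (1 + g * g - ((1 - g * g) / (1 - g + 2 * g * xi)) ** 2) / (2 * g)
        q2 = 2.0 * k * k * (1.0 - cos_t)
        contrib[i] = np.exp(-q2.sum() * msd / 6.0)
    return float(contrib.mean())
```

As expected, the estimate equals 1 at zero lag and decays faster for longer lags, more scattering events, or faster particle motion, which is the sensitivity the paper exploits.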
Tang, Ke; Zhang, Jinfeng; Liang, Jie
2014-01-01
Loops in proteins are flexible regions connecting regular secondary structures. They are often involved in protein functions through interacting with other molecules. The irregularity and flexibility of loops make their structures difficult to determine experimentally and challenging to model computationally. Conformation sampling and energy evaluation are the two key components in loop modeling. We have developed a new method for loop conformation sampling and prediction based on a chain growth sequential Monte Carlo sampling strategy, called Distance-guided Sequential chain-Growth Monte Carlo (DiSGro). With an energy function designed specifically for loops, our method can efficiently generate high quality loop conformations with low energy that are enriched with near-native loop structures. The average minimum global backbone RMSD for 1,000 conformations of 12-residue loops is Å, with a lowest energy RMSD of Å, and an average ensemble RMSD of Å. A novel geometric criterion is applied to speed up calculations. The computational cost of generating 1,000 conformations for each of the x loops in a benchmark dataset is only about cpu minutes for 12-residue loops, compared to ca cpu minutes using the FALCm method. Test results on benchmark datasets show that DiSGro performs comparably or better than previous successful methods, while requiring far less computing time. DiSGro is especially effective in modeling longer loops (– residues). PMID:24763317
Hart, Vern P; Doyle, Timothy E
2013-09-01
A Monte Carlo method was derived from the optical scattering properties of spheroidal particles and used for modeling diffuse photon migration in biological tissue. The spheroidal scattering solution used a separation of variables approach and numerical calculation of the light intensity as a function of the scattering angle. A Monte Carlo algorithm was then developed which utilized the scattering solution to determine successive photon trajectories in a three-dimensional simulation of optical diffusion and resultant scattering intensities in virtual tissue. Monte Carlo simulations using isotropic randomization, Henyey-Greenstein phase functions, and spherical Mie scattering were additionally developed and used for comparison to the spheroidal method. Intensity profiles extracted from diffusion simulations showed that the four models differed significantly. The depth of scattering extinction varied widely among the four models, with the isotropic, spherical, spheroidal, and phase function models displaying total extinction at depths of 3.62, 2.83, 3.28, and 1.95 cm, respectively. The results suggest that advanced scattering simulations could be used as a diagnostic tool by distinguishing specific cellular structures in the diffused signal. For example, simulations could be used to detect large concentrations of deformed cell nuclei indicative of early stage cancer. The presented technique is proposed to be a more physical description of photon migration than existing phase function methods. This is attributed to the spheroidal structure of highly scattering mitochondria and elongation of the cell nucleus, which occurs in the initial phases of certain cancers. The potential applications of the model and its importance to diffusive imaging techniques are discussed. PMID:24085080
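Of the four scattering models compared above, the Henyey-Greenstein case has a simple closed-form sampling rule. A minimal sketch using the standard CDF-inversion formula (a generic illustration, not the paper's spheroidal scattering solver):

```python
import numpy as np

def sample_hg_costheta(g, n, seed=0):
    """Draw n scattering-angle cosines from the Henyey-Greenstein phase
    function with anisotropy factor g, via the standard CDF inversion."""
    rng = np.random.default_rng(seed)
    xi = rng.random(n)
    if abs(g) < 1e-12:                     # isotropic limit
        return 2.0 * xi - 1.0
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

# The sample mean of cos(theta) converges to the anisotropy factor g,
# so strongly forward-peaked tissue-like scattering uses g close to 1:
cos_t = sample_hg_costheta(0.9, 200_000)   # sample mean ≈ 0.9
```

Each draw then sets the polar deflection of the next photon step, which is how phase-function models plug into a diffusion simulation like the one described above.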
The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units
NASA Astrophysics Data System (ADS)
Hall, Clifford; Ji, Weixiao; Blaisten-Barojas, Estela
2014-02-01
We present a CPU-GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU-GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU-GPU duets.
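The MMC step that such an engine parallelizes is itself tiny. A serial toy version for a single particle in a harmonic well (a sketch only; the paper's engine handles inter-molecular interactions and orientational variables on the GPU):

```python
import numpy as np

def metropolis_harmonic(beta=1.0, k_spring=1.0, n_steps=200_000,
                        step=1.0, seed=0):
    """Minimal Metropolis Monte Carlo sketch: sample x ~ exp(-beta*U)
    for U = k x^2 / 2 and return the sampled mean of x^2, which should
    approach the exact equipartition value 1/(beta*k_spring)."""
    rng = np.random.default_rng(seed)
    x = 0.0
    u = 0.5 * k_spring * x * x
    samples = []
    for _ in range(n_steps):
        x_new = x + step * (2.0 * rng.random() - 1.0)   # trial move
        u_new = 0.5 * k_spring * x_new * x_new
        # Metropolis rule: accept downhill moves always,
        # uphill moves with probability exp(-beta * dU)
        if u_new <= u or rng.random() < np.exp(-beta * (u_new - u)):
            x, u = x_new, u_new
        samples.append(x * x)
    return float(np.mean(samples[n_steps // 10:]))      # drop 10% burn-in
```

Because each step needs only the energy change of the trial move, the acceptance test is cheap; the cost in molecular systems is the interaction evaluation, which is what the GPU memory-resident design above accelerates.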
Response of thermoluminescent dosimeters to photons simulated with the Monte Carlo method
NASA Astrophysics Data System (ADS)
Moralles, M.; Guimarães, C. C.; Okuno, E.
2005-06-01
Personal monitors composed of thermoluminescent dosimeters (TLDs) made of natural fluorite (CaF2:NaCl) and lithium fluoride (Harshaw TLD-100) were exposed to gamma and X rays of different qualities. The GEANT4 radiation transport Monte Carlo toolkit was employed to calculate the energy depth deposition profile in the TLDs. X-ray spectra of the ISO/4037-1 narrow-spectrum series, with peak voltage (kVp) values in the range 20-300 kV, were obtained by simulating a Philips MG-450 X-ray tube with the recommended filters. A realistic photon distribution of a 60Co radiotherapy source was taken from results of Monte Carlo simulations found in the literature. Comparison between simulated and experimental results revealed that the attenuation of emitted light in the readout process of the fluorite dosimeter must be taken into account, while this effect is negligible for lithium fluoride. Differences between results obtained by heating the dosimeter from the irradiated side and from the opposite side allowed the determination of the light attenuation coefficient for CaF2:NaCl (mass proportion 60:40) as 2.2 mm-1.
Xu, Feng; Davis, Anthony B; West, Robert A; Esposito, Larry W
2011-01-17
Building on the Markov chain formalism for scalar (intensity only) radiative transfer, this paper formulates the solution to polarized diffuse reflection from and transmission through a vertically inhomogeneous atmosphere. For verification, numerical results are compared to those obtained by the Monte Carlo method, showing deviations less than 1% when 90 streams are used to compute the radiation from two types of atmospheres, pure Rayleigh and Rayleigh plus aerosol, when they are divided into sublayers of optical thicknesses of less than 0.03. PMID:21263634
NASA Astrophysics Data System (ADS)
Starukhin, P. Yu.; Klinaev, Yu. V.
2011-05-01
We present the results of numerical modeling of passage of ultrashort laser pulses through an inhomogeneous layered medium with moving scatterers, based on solution of the nonsteady-state radiation transport equation by the Monte Carlo method. We consider the effects of Doppler broadening of the backscattered radiation spectrum in biological tissues. We have analyzed the dynamics of propagation of a short laser pulse within a multilayer model of human skin. We have studied the possibilities for tomography of different layers of biological tissue based on analysis of the spectrum of the scattered radiation pulse.
NASA Astrophysics Data System (ADS)
Khisamutdinov, A. I.; Velker, N. N.
2014-05-01
The talk examines a system of pairwise interacting particles that models a rarefied gas in accordance with the nonlinear Boltzmann equation, the master equations of the Markov evolution of this system, and the corresponding numerical Monte Carlo methods. The choice of an optimal method for simulating rarefied gas dynamics depends on the spatial size of the gas flow domain. For problems with a Knudsen number Kn of order unity, "imitation" (or "continuous time") Monte Carlo methods [2] are quite adequate and competitive. However, if Kn <= 0.1 (large domains), their excessive precision, namely the need to examine all pairs of particles, leads to a significant increase in computational cost (complexity). We are interested in constructing optimal methods for Boltzmann equation problems with sufficiently large spatial flow domains. By optimal we mean parallel algorithms suitable for implementation on high-performance multi-processor computers. A characteristic property of large systems is the weak dependence of their sub-parts on each other over sufficiently small time intervals. This property is exploited by approximate methods that use various splittings of the operator of the corresponding master equations. In this paper, we develop an approximate method based on splitting the operator of the master equation system "over groups of particles" [7]. The essence of the method is that the system of particles is divided into spatial sub-parts that are modeled independently over small time intervals using the exact "imitation" method. This type of splitting differs from the well-known splitting "over collisions and displacements" that characterizes Direct Simulation Monte Carlo methods; a second attribute of the latter is the grid of "interaction cells", which is entirely absent from imitation methods. The main subject of the talk is the parallelization of these imitation algorithms.
NASA Astrophysics Data System (ADS)
Mishra, S.; Schwab, Ch.; Šukys, J.
2016-05-01
We consider the very challenging problem of efficient uncertainty quantification for acoustic wave propagation in a highly heterogeneous, possibly layered, random medium, characterized by possibly anisotropic, piecewise log-exponentially distributed Gaussian random fields. A multi-level Monte Carlo finite volume method is proposed, along with a novel, bias-free upscaling technique that allows the input random fields, generated using spectral FFT methods, to be represented efficiently. Combined with a recently developed dynamic load balancing algorithm that scales to massively parallel computing architectures, the proposed method is able to robustly compute uncertainty for highly realistic random subsurface formations that can contain a very high number (millions) of sources of uncertainty. Numerical experiments, in both two and three space dimensions, illustrating the efficiency of the method are presented.
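The multi-level idea can be illustrated with a deliberately simple toy (a sketch under stated assumptions, not the paper's finite-volume scheme): approximate E[f(U)] for U ~ Uniform(0,1), where "level l" evaluates f on inputs quantized to resolution 2^-l, and the coupled level corrections reuse the same random draws so that most samples can be spent on the cheap coarse level.

```python
import numpy as np

def mlmc_estimate(f, levels=6, n0=100_000, seed=0):
    """Toy multi-level Monte Carlo estimator. Level l evaluates f on inputs
    quantized to resolution 2^-l; the telescoping sum
    E[f_L] = E[f_0] + sum_l E[f_l - f_{l-1}] is estimated with coupled
    samples (same draws on both levels of each correction), so correction
    variances shrink as the levels refine and fewer samples are needed."""
    rng = np.random.default_rng(seed)

    def f_l(u, l):
        h = 2.0 ** (-l)
        return f(np.floor(u / h) * h)        # quantized-input approximation

    total = 0.0
    for l in range(levels + 1):
        n = max(n0 >> l, 100)                # halve the samples per level
        u = rng.random(n)
        if l == 0:
            total += f_l(u, 0).mean()        # coarse baseline
        else:
            total += (f_l(u, l) - f_l(u, l - 1)).mean()   # coupled correction
    return total

est = mlmc_estimate(lambda x: x * x)         # E[U^2] = 1/3 up to level bias
```

The same telescoping structure underlies the paper's method, with the quantization levels replaced by finite-volume mesh refinements of the wave solver.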
Ma, L X; Wang, F Q; Wang, C A; Wang, C C; Tan, J Y
2015-11-20
Spectral properties of sea foam greatly affect ocean color remote sensing and aerosol optical thickness retrieval from satellite observation. This paper presents a combined Mie theory and Monte Carlo method to investigate visible and near-infrared spectral reflectance and the bidirectional reflectance distribution function (BRDF) of sea foam layers. A three-layer model of the sea foam is developed in which each layer is composed of large air bubbles coated with pure water. A pseudo-continuous model and Mie theory for coated spheres are used to determine the effective radiative properties of sea foam. The one-dimensional Cox-Munk surface roughness model is used to calculate the slope density functions of the wind-blown ocean surface. A Monte Carlo method is used to solve the radiative transfer equation. Effects of foam layer thickness, bubble size, wind speed, solar zenith angle, and wavelength on the spectral reflectance and BRDF are investigated. Comparisons between previous theoretical results and experimental data demonstrate the feasibility of our proposed method. Sea foam can significantly increase the spectral reflectance and BRDF of the sea surface. The absorption coefficient of seawater near the surface is not the only parameter that influences the spectral reflectance; the effects of bubble size, foam layer thickness, and solar zenith angle likewise cannot be neglected. PMID:26836550
NASA Astrophysics Data System (ADS)
Guo, Shi; Zhu, Minyi; Hu, Shuming; Mitas, Lubos
2013-03-01
Very recently, a quantum Monte Carlo (QMC) method was proposed for Rashba spin-orbit operators, which expands the applicability of QMC to systems with variable spins. It is based on incorporating the spin-orbit term into the Green's function and thus samples (i.e., rotates) the spinors in the antisymmetric part of the trial function [1]. Here we propose a new alternative for both variational and diffusion Monte Carlo algorithms for calculations of systems with variable spins. Specifically, we introduce a new spin representation which allows us to sample the spin configurations efficiently and without introducing additional fluctuations. We develop the corresponding Green's function, which treats the electron spin as a dynamical variable, and we use the fixed-phase approximation to eliminate the negative probabilities. The trial wave function is a Slater determinant of spinors with spin-independent Jastrow correlations. The method also has the zero variance property. We benchmark the method on the 2D electron gas with the Rashba interaction and find very good overall agreement with previously obtained results. Research supported by NSF and ARO.
A new method to calculate the response of the WENDI-II rem counter using the FLUKA Monte Carlo Code
NASA Astrophysics Data System (ADS)
Jägerhofer, Lukas; Feldbaumer, Eduard; Theis, Christian; Roesler, Stefan; Vincke, Helmut
2012-11-01
The FHT-762 WENDI-II is a commercially available wide range neutron rem counter which uses a 3He counter tube inside a polyethylene moderator. To increase the response above 10 MeV of kinetic neutron energy, a layer of tungsten powder is implemented into the moderator shell. For the purpose of the characterization of the response, a detailed model of the detector was developed and implemented for FLUKA Monte Carlo simulations. In common practice Monte Carlo simulations are used to calculate the neutron fluence inside the active volume of the detector. The resulting fluence is then folded offline with the reaction rate of the 3He(n,p)3H reaction to yield the proton-triton production rate. Consequently this approach does not consider geometrical effects like wall effects, where one or both reaction products leave the active volume of the detector without triggering a count. This work introduces a two-step simulation method which can be used to determine the detector's response, including geometrical effects, directly, using Monte Carlo simulations. A "first step" simulation identifies the 3He(n,p)3H reaction inside the active volume of the 3He counter tube and records its position. In the "second step" simulation the tritons and protons are started in accordance with the kinematics of the 3He(n,p)3H reaction from the previously recorded positions and a correction factor for geometrical effects is determined. The three dimensional Monte Carlo model of the detector as well as the two-step simulation method were evaluated and tested in the well-defined fields of an 241Am-Be(α,n) source as well as in the field of a 252Cf source. Results were compared with measurements performed by Gutermuth et al. [1] at GSI with an 241Am-Be(α,n) source as well as with measurements performed by the manufacturer in the field of a 252Cf source. Both simulation results show very good agreement with the respective measurements. After validating the method, the response values in terms of
A Markov-Chain Monte-Carlo Based Method for Flaw Detection in Beams
Glaser, R E; Lee, C L; Nitao, J J; Hickling, T L; Hanley, W G
2006-09-28
A Bayesian inference methodology using a Markov Chain Monte Carlo (MCMC) sampling procedure is presented for estimating the parameters of computational structural models. This methodology combines prior information, measured data, and forward models to produce a posterior distribution for the system parameters of structural models that is most consistent with all available data. The MCMC procedure is based upon a Metropolis-Hastings algorithm that is shown to function effectively with noisy data, incomplete data sets, and mismatched computational nodes/measurement points. A series of numerical test cases based upon a cantilever beam is presented. The results demonstrate that the algorithm is able to estimate model parameters utilizing experimental data for the nodal displacements resulting from specified forces.
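The Metropolis-Hastings core can be sketched on a deliberately simplified stand-in for the structural model. The one-parameter spring below is a hypothetical example, not the report's cantilever finite-element model: we infer a stiffness k from noisy displacement measurements d_i = F_i / k + e under a flat prior on k > 0.

```python
import numpy as np

def mh_posterior_stiffness(forces, displacements, sigma=0.05,
                           n_steps=20_000, seed=0):
    """Random-walk Metropolis-Hastings on a toy forward model
    d_i = F_i / k + e, e ~ N(0, sigma^2); returns posterior samples of k."""
    rng = np.random.default_rng(seed)

    def log_like(k):
        if k <= 0:
            return -np.inf                      # enforce k > 0 prior support
        resid = displacements - forces / k
        return -0.5 * np.sum(resid ** 2) / sigma ** 2

    k, lp = 1.0, log_like(1.0)
    samples = []
    for _ in range(n_steps):
        k_prop = k + 0.1 * rng.standard_normal()   # symmetric proposal
        lp_prop = log_like(k_prop)
        if np.log(rng.random()) < lp_prop - lp:    # MH accept/reject
            k, lp = k_prop, lp_prop
        samples.append(k)
    return np.array(samples[n_steps // 4:])        # drop burn-in

# Synthetic noisy data generated with true stiffness k = 2.0
rng = np.random.default_rng(42)
F = np.linspace(1.0, 5.0, 20)
d = F / 2.0 + 0.05 * rng.standard_normal(20)
post = mh_posterior_stiffness(F, d)
```

The posterior concentrates near the true stiffness; swapping the spring for a finite-element forward model recovers the structure of the methodology described above.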
Ground-state properties of LiH by reptation quantum Monte Carlo methods.
Ospadov, Egor; Oblinsky, Daniel G; Rothstein, Stuart M
2011-05-01
We apply reptation quantum Monte Carlo to calculate one- and two-electron properties for ground-state LiH, including all tensor components for static polarizabilities and hyperpolarizabilities to fourth-order in the field. The importance sampling is performed with a large (QZ4P) STO basis set single determinant, directly obtained from commercial software, without incurring the overhead of optimizing many-parameter Jastrow-type functions of the inter-electronic and internuclear distances. We present formulas for the electrical response properties free from the finite-field approximation, which can be problematic for the purposes of stochastic estimation. The α, γ, A and C polarizability values are reasonably consistent with recent determinations reported in the literature, where they exist. A sum rule is obeyed for components of the B tensor, but B(zz,zz) as well as β(zzz) differ from what was reported in the literature. PMID:21445452
Cosmic ray ionization and dose at Mars: Benchmarking deterministic and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Norman, R. B.; Gronoff, G.; Mertens, C. J.
2014-12-01
The ability to evaluate the cosmic ray environment at Mars is of interest for future manned exploration. To support exploration, tools must be developed to accurately assess the radiation environment in both free space and on planetary surfaces. The primary tool NASA uses to quantify radiation exposure behind shielding materials is the space radiation transport code, HZETRN. In order to build confidence in HZETRN, code benchmarking against Monte Carlo radiation transport codes is often used. This work compares the dose calculations at Mars by HZETRN and the GEANT4 application, Planetocosmics. The dose at ground and the energy deposited in the atmosphere by galactic cosmic ray protons and alpha particles have been calculated for the Curiosity landing conditions. In addition, this work has considered Solar Energetic Particle events, which allows for a better understanding of the spectral form in the comparison. The results for protons and alpha particles show very good agreement between HZETRN and Planetocosmics.
Torsional path integral Monte Carlo method for the quantum simulation of large molecules
NASA Astrophysics Data System (ADS)
Miller, Thomas F.; Clary, David C.
2002-05-01
A molecular application is introduced for calculating quantum statistical mechanical expectation values of large molecules at nonzero temperatures. The Torsional Path Integral Monte Carlo (TPIMC) technique applies an uncoupled winding number formalism to the torsional degrees of freedom in molecular systems. The internal energies of ethane, n-butane, n-octane, and enkephalin are calculated at standard temperature using the TPIMC technique and compared to the expectation values obtained using the harmonic oscillator approximation and a variational technique. All studied molecules exhibited significant quantum mechanical contributions to their internal energy expectation values according to the TPIMC technique. The harmonic oscillator approximation performs well for the molecules presented in this study but is limited by its neglect of both anharmonicity effects and the potential coupling of intramolecular torsions.
Anti-confocal versus confocal assessment of the middle ear simulated by Monte Carlo methods.
Jung, David S; Crowe, John A; Birchall, John P; Somekh, Michael G; See, Chung W
2015-10-01
The ability to monitor the inflammatory state of the middle ear mucosa would provide clinical utility. To enable spectral measurements on the mucosa whilst rejecting background signal from the eardrum, an anti-confocal system is investigated. In contrast to the central pinhole of a confocal system, the anti-confocal system uses a central stop to reject light from the in-focus plane, the eardrum, with all other light detected. Monte Carlo simulations of this system show an increase in detected signal and improved signal-to-background ratio compared to a conventional confocal set-up used to image the middle ear mucosa. System parameters are varied in the simulation and their influence on the level of background rejection is presented. PMID:26504633
Supernova Spectrum Synthesis for 3D Composition Models with the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Thomas, Rollin
2002-07-01
Relying on spherical symmetry when modelling supernova spectra is clearly at best a good approximation. Recent polarization measurements, interesting features in flux spectra, and the clumpy textures of supernova remnants suggest that supernova envelopes are rife with fine structure. To account for this fine structure and create a complete picture of supernovae, new 3D explosion models will be forthcoming. To reconcile these models with observed spectra, 3D radiative transfer will be necessary. We propose a 3D Monte Carlo radiative transfer code, Brute, and improvements that will move it toward a fully self-consistent 3D transfer code. Spectroscopic HST observations of supernovae past, present and future will definitely benefit. Other 3D transfer problems of interest to HST users like AGNs will benefit from the techniques developed.
Martin, W.R.
1993-01-01
This document describes progress on five efforts for improving effectiveness of computational methods for particle diffusion and transport problems in nuclear engineering: (1) Multigrid methods for obtaining rapidly converging solutions of nodal diffusion problems. An alternative line relaxation scheme is being implemented into a nodal diffusion code. Simplified P2 has been implemented into this code. (2) Local Exponential Transform method for variance reduction in Monte Carlo neutron transport calculations. This work yielded predictions for both 1-D and 2-D x-y geometry better than conventional Monte Carlo with splitting and Russian Roulette. (3) Asymptotic Diffusion Synthetic Acceleration methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems. New transport differencing schemes have been obtained that allow solution by the conjugate gradient method, and the convergence of this approach is rapid. (4) Quasidiffusion (QD) methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems on irregular spatial grids. A symmetrized QD method has been developed in a form that results in a system of two self-adjoint equations that are readily discretized and efficiently solved. (5) Response history method for speeding up the Monte Carlo calculation of electron transport problems. This method was implemented into the MCNP Monte Carlo code. In addition, we have developed and implemented a parallel time-dependent Monte Carlo code on two massively parallel processors.
Benchmark study of the two-dimensional Hubbard model with auxiliary-field quantum Monte Carlo method
NASA Astrophysics Data System (ADS)
Qin, Mingpu; Shi, Hao; Zhang, Shiwei
2016-08-01
Ground-state properties of the Hubbard model on a two-dimensional square lattice are studied by the auxiliary-field quantum Monte Carlo method. Accurate results for energy, double occupancy, effective hopping, magnetization, and momentum distribution are calculated for interaction strengths of U/t from 2 to 8, for a range of densities including half-filling and n = 0.3, 0.5, 0.6, 0.75, and 0.875. At half-filling, the results are numerically exact. Away from half-filling, the constrained path Monte Carlo method is employed to control the sign problem. Our results are obtained with several advances in the computational algorithm, which are described in detail. We discuss the advantages of generalized Hartree-Fock trial wave functions and their connection to pairing wave functions, as well as the interplay with different forms of Hubbard-Stratonovich decompositions. We study the use of different twist angle sets when applying the twist averaged boundary conditions. We propose the use of quasirandom sequences, which improves the convergence to the thermodynamic limit over pseudorandom and other sequences. With it and a careful finite size scaling analysis, we are able to obtain accurate values of ground-state properties in the thermodynamic limit. Detailed results for finite-sized systems up to 16 × 16 are also provided for benchmark purposes.
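The benefit of quasirandom sequences can be seen with the classic base-2 van der Corput construction (a generic low-discrepancy sequence, not necessarily the one the authors use): for a smooth integrand, the quasirandom quadrature error shrinks roughly like log(n)/n, versus n^{-1/2} for pseudorandom points.

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy
    sequence, built by radical inversion of the integer index."""
    pts = np.empty(n)
    for i in range(n):
        q, denom, k = 0.0, 1.0, i + 1
        while k > 0:                 # reflect base-b digits about the point
            denom *= base
            k, rem = divmod(k, base)
            q += rem / denom
        pts[i] = q
    return pts

# Integrate x^2 on [0, 1] (exact value 1/3) with both point sets:
n = 4096
quasi_err = abs(np.mean(van_der_corput(n) ** 2) - 1.0 / 3.0)
pseudo_err = abs(np.mean(np.random.default_rng(0).random(n) ** 2) - 1.0 / 3.0)
# quasi_err is typically one to two orders of magnitude smaller here
```

In the paper's setting, the averaged quantity is the twist-angle dependence of ground-state observables rather than a toy integrand, but the convergence argument is the same.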
Rijken, J D; Harris-Phillips, W; Lawson, J M
2015-03-01
Lithium fluoride thermoluminescent dosimeters (TLDs) exhibit a dependence on the energy of the radiation beam of interest, so they need to be carefully calibrated for different energy spectra if used for clinical radiation oncology beam dosimetry and quality assurance. TLD energy response was investigated for a specific set of TLD700:LiF(Mg,Ti) chips for a high dose rate (192)Ir brachytherapy source. A novel method of energy response calculation for (192)Ir was developed in which dose was determined through Monte Carlo modelling in Geant4. The TLD response was then measured experimentally. Results showed that TLD700 has a depth-dependent response in water, ranging from 1.170 ± 0.125 at 20 mm to 0.976 ± 0.043 at 50 mm (normalised to a nominal 6 MV beam response). The method of calibration and Monte Carlo data developed through this study could be easily applied by other Medical Physics departments seeking to use TLDs for (192)Ir patient dosimetry or treatment planning system experimental verification. PMID:25663432
Williams, Michael S; Ebel, Eric D
2014-11-18
The fitting of statistical distributions to chemical and microbial contamination data is a common application in risk assessment. These distributions are used to make inferences regarding even the most pedestrian of statistics, such as the population mean. The reason for the heavy reliance on a fitted distribution is the presence of left-, right-, and interval-censored observations in the data sets, with censored observations being the result of nondetects in an assay, the use of screening tests, and other practical limitations. Considerable effort has been expended to develop statistical distributions and fitting techniques for a wide variety of applications. Of the various fitting methods, Markov Chain Monte Carlo methods are common. An underlying assumption for many of the proposed Markov Chain Monte Carlo methods is that the data represent independent and identically distributed (iid) observations from an assumed distribution. This condition is satisfied when samples are collected using a simple random sampling design. Unfortunately, samples of food commodities are generally not collected in accordance with a strict probability design. Nevertheless, pseudosystematic sampling efforts (e.g., collection of a sample hourly or weekly) from a single location in the farm-to-table continuum are reasonable approximations of a simple random sample. The assumption that the data represent an iid sample from a single distribution is more difficult to defend if samples are collected at multiple locations in the farm-to-table continuum or risk-based sampling methods are employed to preferentially select samples that are more likely to be contaminated. This paper develops a weighted bootstrap estimation framework that is appropriate for fitting a distribution to microbiological samples that are collected with unequal probabilities of selection. An example based on microbial data, derived by the Most Probable Number technique, demonstrates the method and highlights the
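The weighting idea can be sketched in a few lines. This is a hedged illustration of the general principle, not the paper's exact estimator: units sampled with unequal selection probabilities are resampled with probability inversely proportional to their selection probability (Horvitz-Thompson-style weights), so over-sampled units are down-weighted in each bootstrap replicate.

```python
import numpy as np

def weighted_bootstrap_mean(values, selection_probs, n_boot=5_000, seed=0):
    """Weighted bootstrap for data collected with unequal selection
    probabilities: resample with weights proportional to 1/p_i, then
    return the bootstrap mean and a 95% percentile interval."""
    rng = np.random.default_rng(seed)
    w = 1.0 / np.asarray(selection_probs)
    w = w / w.sum()                                  # normalized resampling weights
    values = np.asarray(values)
    n = len(values)
    idx = rng.choice(n, size=(n_boot, n), replace=True, p=w)
    boot_means = values[idx].mean(axis=1)
    return float(boot_means.mean()), np.percentile(boot_means, [2.5, 97.5])

# Hypothetical example: highly contaminated units over-sampled 3x,
# as in a risk-based sampling scheme
vals = np.array([1.0] * 50 + [10.0] * 50)
probs = np.array([1.0] * 50 + [3.0] * 50)            # relative selection probs
est, ci = weighted_bootstrap_mean(vals, probs)
```

The naive sample mean of `vals` is 5.5, while the weighted bootstrap pulls the estimate down toward the selection-corrected mean of 3.25, illustrating why ignoring the sampling design biases inference upward here.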
Dupuis, Paul
2014-03-14
This proposal is concerned with applications of Monte Carlo to problems in physics and chemistry where rare events degrade the performance of standard Monte Carlo. One class of problems is concerned with computation of various aspects of the equilibrium behavior of some Markov process via time averages. The problem to be overcome is that rare events interfere with the efficient sampling of all relevant parts of phase space. A second class concerns sampling transitions between two or more stable attractors. Here, rare events do not interfere with the sampling of all relevant parts of phase space, but make Monte Carlo inefficient because of the very large number of samples required to obtain variance comparable to the quantity estimated. The project uses large deviation methods for the mathematical analyses of various Monte Carlo techniques, and in particular for algorithmic analysis and design. This is done in the context of relevant application areas, mainly from chemistry and biology.
Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange
Hula, Andreas; Montague, P. Read; Dayan, Peter
2015-01-01
Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent’s preference for equity with their partner, beliefs about the partner’s appetite for equity, beliefs about the partner’s model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference. PMID:26053429
Nease, Brian R; Ueki, Taro
2009-12-10
A time series approach has been applied to the nuclear fission source distribution generated by Monte Carlo (MC) particle transport in order to calculate the non-fundamental mode eigenvalues of the system. The novel aspect is the combination of the general technical principle of projection pursuit for multivariate data with the neutron multiplication eigenvalue problem in the nuclear engineering discipline. Proof is thoroughly provided that the stationary MC process is linear to first order approximation and that it transforms into one-dimensional autoregressive processes of order one (AR(1)) via the automated choice of projection vectors. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue. All modern MC codes for nuclear criticality calculate the fundamental mode eigenvalue, so the desired mode eigenvalue can be easily determined. This time series approach was tested for a variety of problems, including multi-dimensional ones. Numerical results show that the time series approach has strong potential for three-dimensional whole reactor cores. The eigenvalue ratio can be updated in an on-the-fly manner without storing the nuclear fission source distributions at all previous iteration cycles for the mean subtraction. Lastly, the effects of degenerate eigenvalues are investigated and solutions are provided.
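The central identification step reduces to estimating the lag-1 autocorrelation of an AR(1) series. A generic illustration (a synthetic AR(1), not an actual fission-source series): the fitted coefficient recovers the AR parameter, which in the paper's setting equals the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue.

```python
import numpy as np

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation coefficient of a series."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return float(np.dot(d[:-1], d[1:]) / np.dot(d, d))

rng = np.random.default_rng(0)
rho_true = 0.7                     # plays the role of the eigenvalue ratio
x = np.empty(50_000)
x[0] = 0.0
for t in range(1, x.size):
    # AR(1) recursion: x_t = rho * x_{t-1} + white noise
    x[t] = rho_true * x[t - 1] + rng.standard_normal()
rho_hat = lag1_autocorr(x)         # ≈ 0.7
```

Multiplying such an estimate by the fundamental mode eigenvalue, which every criticality code already computes, yields the desired non-fundamental mode eigenvalue.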
NASA Astrophysics Data System (ADS)
McNab, Walt W.
2001-02-01
Biotransformation of dissolved groundwater hydrocarbon plumes emanating from leaking underground fuel tanks should, in principle, result in plume length stabilization over relatively short distances, thus diminishing the environmental risk. However, because the behavior of hydrocarbon plumes is usually poorly constrained at most leaking underground fuel tank sites in terms of release history, groundwater velocity, dispersion, as well as the biotransformation rate, demonstrating such a limitation in plume length is problematic. Biotransformation signatures in the aquifer geochemistry, most notably elevated bicarbonate, may offer a means of constraining the relationship between plume length and the mean biotransformation rate. In this study, modeled plume lengths and spatial bicarbonate differences among a population of synthetic hydrocarbon plumes, generated through Monte Carlo simulation of an analytical solute transport model, are compared to field observations from six underground storage tank (UST) sites at military bases in California. Simulation results indicate that the relationship between plume length and the distribution of bicarbonate is best explained by biotransformation rates that are consistent with ranges commonly reported in the literature. This finding suggests that bicarbonate can indeed provide an independent means for evaluating limitations in hydrocarbon plume length resulting from biotransformation.
Analysis of Radiation Effects in Silicon using Kinetic Monte Carlo Methods
Hehr, Brian Douglas
2014-11-25
The transient degradation of semiconductor device performance under irradiation has long been an issue of concern. Neutron irradiation can instigate the formation of quasi-stable defect structures, thereby introducing new energy levels into the bandgap that alter carrier lifetimes and give rise to such phenomena as gain degradation in bipolar junction transistors. Normally, the initial defect formation phase is followed by a recovery phase in which defect-defect or defect-dopant interactions modify the characteristics of the damaged structure. A kinetic Monte Carlo (KMC) code has been developed to model both thermal and carrier injection annealing of initial defect structures in semiconductor materials. The code is employed to investigate annealing in electron-irradiated, p-type silicon as well as the recovery of base current in silicon transistors bombarded with neutrons at the Los Alamos Neutron Science Center (LANSCE) “Blue Room” facility. Our results reveal that KMC calculations agree well with these experiments once adjustments are made, within the appropriate uncertainty bounds, to some of the sensitive defect parameters.
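A kinetic Monte Carlo step of the kind used in such annealing codes can be sketched with the standard residence-time (Gillespie) algorithm. The defect channels and rates below are purely illustrative stand-ins, not the parameters of the paper's silicon model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical first-order annealing channels for a defect population
# (names and rates are illustrative, not the paper's fitted parameters).
rates = {"VO_dissociation": 2.0, "VP_formation": 0.5, "migration": 5.0}

def kmc_step(rates, rng):
    """One residence-time KMC step: pick an event with probability
    proportional to its rate, then advance the clock by an exponential
    waiting time with mean 1/(total rate)."""
    names = list(rates)
    k = np.array([rates[n] for n in names])
    ktot = k.sum()
    event = names[rng.choice(len(names), p=k / ktot)]
    dt = rng.exponential(1.0 / ktot)
    return event, dt

t, counts = 0.0, {n: 0 for n in rates}
for _ in range(10000):
    event, dt = kmc_step(rates, rng)
    counts[event] += 1
    t += dt
print(counts, t)
```

Event frequencies come out proportional to the rates, and the simulated clock advances by roughly (number of steps)/(total rate), which is the essential bookkeeping behind thermal-annealing KMC.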
A Monte Carlo boundary propagation method for the solution of Poisson's equation
Hiromoto, R.; Brickner, R.G.
1990-01-01
Too often, the parallelism of a computational algorithm is used (or advertised) as a desirable measure of its performance; that is, the higher the computational parallelism, the better the expected performance. With the current interest and emphasis on massively parallel computer systems, the notion of highly parallel algorithms is the subject of many conferences and funding proposals. Unfortunately, the 'revolution' that this vision promises has served to further complicate the measure of parallel performance by the introduction of such notions as scaled speedup and scalable systems. As a counterexample to the merits of highly parallel algorithms whose parallelism scales linearly with increasing problem size, we introduce a slight modification to a highly parallel Monte Carlo technique that is used to estimate the solution of Poisson's equation. This simple modification is shown to yield a much better estimate of the solution by incorporating a more efficient use of boundary data (Dirichlet boundary conditions). A by-product of this new algorithm is a much more efficient sequential algorithm, but at the expense of sacrificing parallelism.
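For context, the baseline technique being modified is the classical fixed-random-walk estimator for Dirichlet problems: walk on a lattice from the evaluation point until the boundary is hit and average the boundary values there. The sketch below shows that baseline (not the paper's boundary-propagation variant) on the unit square with boundary data g(x, y) = x, whose exact harmonic extension is u = x.

```python
import random

random.seed(42)

# Estimate the solution of Laplace's equation on the unit square with
# Dirichlet data g(x, y) = x (exact solution u = x) by averaging the
# boundary value reached by simple random walks on an interior lattice.
N = 20                      # lattice points (i, j) with 0 < i, j < N interior
g = lambda i, j: i / N      # Dirichlet boundary data

def walk(i, j):
    while 0 < i < N and 0 < j < N:
        di, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        i, j = i + di, j + dj
    return g(i, j)          # boundary value at the exit point

i0 = j0 = N // 2
est = sum(walk(i0, j0) for _ in range(4000)) / 4000
print(est)
```

Each walk uses only the single boundary value at its exit point; the modification discussed in the abstract gains efficiency by extracting more information from the boundary data per walk, at the cost of the embarrassing parallelism of the independent-walk version.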
Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange.
Hula, Andreas; Montague, P Read; Dayan, Peter
2015-06-01
Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent's preference for equity with their partner, beliefs about the partner's appetite for equity, beliefs about the partner's model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference. PMID:26053429
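The Monte-Carlo tree search machinery referenced above can be illustrated with a minimal UCT loop on a toy sequential problem; this is a generic sketch, not the paper's IPOMDP solver for the trust task. The agent picks +1 or -1 for three steps and is rewarded only if the final sum is 3, so the optimal first move is +1.

```python
import math, random

random.seed(0)

ACTIONS, DEPTH = (+1, -1), 3

class Node:
    def __init__(self):
        self.n = 0
        self.children = {}                            # action -> child Node
        self.stats = {a: [0, 0.0] for a in ACTIONS}   # visits, total reward

def rollout(s, depth):
    """Random playout to a leaf; reward 1 iff the sum reaches DEPTH."""
    while depth < DEPTH:
        s += random.choice(ACTIONS)
        depth += 1
    return 1.0 if s == DEPTH else 0.0

def uct(node, s, depth):
    if depth == DEPTH:
        return 1.0 if s == DEPTH else 0.0
    def score(a):                                     # UCB1 selection rule
        n_a, q_a = node.stats[a]
        if n_a == 0:
            return float("inf")
        return q_a / n_a + math.sqrt(2 * math.log(node.n + 1) / n_a)
    a = max(ACTIONS, key=score)
    child = node.children.setdefault(a, Node())
    if node.stats[a][0] == 0:
        r = rollout(s + a, depth + 1)    # first visit: expand, then rollout
    else:
        r = uct(child, s + a, depth + 1) # otherwise recurse down the tree
    node.n += 1
    node.stats[a][0] += 1
    node.stats[a][1] += r
    return r

root = Node()
for _ in range(2000):
    uct(root, 0, 0)
best = max(ACTIONS, key=lambda a: root.stats[a][0])
print(best)
```

In the paper, the same selection/expansion/rollout skeleton is applied to the much larger belief-state tree of the multi-round trust task, with planning depth as an explicit parameter.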
The use of Monte Carlo methods in heavy charged particle radiation therapy.
NASA Astrophysics Data System (ADS)
Paganetti, Harald
2007-03-01
This presentation will demonstrate the importance of Monte Carlo (MC) simulations in proton therapy. MC applications will be shown which aim at: (1) Modeling of the beam delivery system. MC can be used for quality assurance verification in order to understand the sensitivity of beam characteristics and how these influence the dose delivered. (2) Patient treatment dose verification. The capability of reading CT information has to be implemented into the MC code. Simulating the ionization chamber reading in the treatment head allows the dose to be specified for treatment plan verification. (3) 4D dose calculation. The patient geometry may be time dependent due to respiratory or cardiac motion. To consider this, patient-specific 4D CT data can be used in combination with MC simulations. (4) Simulating positron emission. Positron emitters are produced via nuclear interactions along the beam penetration path and can be detected after treatment. Comparison between measured and MC-simulated PET images can provide feedback on the intended dose in the patient. (5) Studies on radiation-induced cancer risk. MC calculations based on computational anthropomorphic phantoms allow the estimation of organ dose and particle energy distributions everywhere in the patient.
Specific absorbed fractions of electrons and photons for Rad-HUMAN phantom using Monte Carlo method
NASA Astrophysics Data System (ADS)
Wang, Wen; Cheng, Meng-Yun; Long, Peng-Cheng; Hu, Li-Qin
2015-07-01
The specific absorbed fractions (SAF) for self- and cross-irradiation are effective tools for the internal dose estimation of inhalation and ingestion intakes of radionuclides. A set of SAFs of photons and electrons were calculated using the Rad-HUMAN phantom, a computational voxel phantom of a Chinese adult female created from the color photographic images of the Chinese Visible Human (CVH) data set by the FDS Team. The model represents most Chinese adult female anatomical characteristics and can be taken as an individual phantom to investigate differences in internal dose with respect to Caucasians. In this study, SAFs were calculated for mono-energetic photons and electrons with energies from 10 keV to 4 MeV using the Monte Carlo particle transport code MCNP. Results were compared with the values from the ICRP reference and ORNL models. The results showed that SAFs from the Rad-HUMAN have similar trends but are larger than those from the other two models. The differences were due to racial and anatomical differences in organ mass and inter-organ distance. The SAFs based on the Rad-HUMAN phantom provide accurate and reliable data for internal radiation dose calculations for Chinese females. Supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03040000), the National Natural Science Foundation of China (910266004, 11305205, 11305203) and the National Special Program for ITER (2014GB112001)
Dose optimization in 125I permanent prostate seed implants using the Monte Carlo method
NASA Astrophysics Data System (ADS)
Reis, Juraci P.; Menezes, Artur F.; Souza, Edmilson M.; Facure, Alessandro; Medeiros, Jose A. C. C.; Silva, Ademir X.
2012-04-01
The aim of this work was to use the Monte Carlo code MCNP and computational phantoms to assess the absorbed dose distributions in the prostate due to a radiotherapy treatment using 125I radioactive seeds. The intention was to develop a tool that can serve as a complement to the treatment planning system of radiotherapy procedures, reproducing accurately the exact geometry of the sources and the composition of the media where the seeds are inserted. The activities of the simulated seeds varied from 0.27 mCi to 0.38 mCi, for hypothetical treatments employing 80, 88 or 100 125I sources, typical parameters for this technique. The prostate volumes where the seeds were virtually inserted were simulated with spherical or voxel computational phantoms. The configuration containing 88 seeds with an initial activity of 0.27 mCi resulted in a final absorbed dose near 144 Gy, in accordance with the recommendations of the American Association of Physicists in Medicine (AAPM). Based on this configuration, it was possible to obtain the absorbed dose distributions for the voxel phantom, which allowed the determination of treatment quality indicators. The obtained results are in good agreement with experimental data presented by other authors.
Kang, Ki Mun; Jeong, Bae Kwon; Choi, Hoon Sik; Song, Jin Ho; Park, Byung-Do; Lim, Young Kyung; Jeong, Hojin
2016-03-15
This study aimed to evaluate the effectiveness of the Monte Carlo (MC) method in stereotactic radiotherapy for brain tumors. The difference in doses predicted by the conventional Ray-tracing (Ray) and the advanced MC algorithms was comprehensively investigated through simulations for phantom and patient data, actual measurement of dose distribution, and a retrospective analysis of 77 brain tumor patients. These investigations consistently showed that the MC algorithm overestimated the dose relative to the Ray algorithm, and the MC overestimation generally increased with decreasing beam size and increasing number of beams delivered. These results suggest that the advanced MC algorithm may be less accurate than the conventional Ray-tracing algorithm when applied to (quasi-)homogeneous brain tumors. Thus, caution may be needed when applying the MC method to brain radiosurgery or radiotherapy. PMID:26871473
Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin
2015-01-01
The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419
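The temperature replica exchange ingredient of the schemes compared above can be sketched with parallel tempering on a one-dimensional double well; the potential, temperature ladder, and step sizes are illustrative, and this omits the Hamiltonian-exchange and well-tempered-ensemble parts of the paper's WTE-H-REMC scheme.

```python
import math, random

random.seed(3)

# Parallel tempering on U(x) = (x^2 - 1)^2: a cold walker alone tends to
# stay trapped in one well, but exchanges with hotter replicas let
# barrier-crossing configurations percolate down the temperature ladder.
U = lambda x: (x * x - 1.0) ** 2
betas = [8.0, 4.0, 2.0, 1.0]           # inverse temperatures, cold -> hot
xs = [-1.0] * len(betas)               # all replicas start in the left well

def mc_sweep(x, beta):
    for _ in range(10):                # ten Metropolis trial moves
        y = x + random.uniform(-0.5, 0.5)
        if random.random() < math.exp(min(0.0, -beta * (U(y) - U(x)))):
            x = y
    return x

visited_right = False
for sweep in range(3000):
    xs = [mc_sweep(x, b) for x, b in zip(xs, betas)]
    # attempt a swap between a random adjacent temperature pair
    i = random.randrange(len(betas) - 1)
    delta = (betas[i] - betas[i + 1]) * (U(xs[i + 1]) - U(xs[i]))
    if delta <= 0 or random.random() < math.exp(-delta):
        xs[i], xs[i + 1] = xs[i + 1], xs[i]
    if xs[0] > 0.5:                    # coldest replica reached right well
        visited_right = True
print(visited_right)
```

The swap acceptance rule exp((β_i - β_j)(E_i - E_j)) preserves the joint equilibrium distribution, which is what makes the exchanged samples usable for the cold ensemble of interest.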
NASA Astrophysics Data System (ADS)
Densmore, J. D.; Park, H.; Wollaber, A. B.; Rauenzahn, R. M.; Knoll, D. A.
2015-03-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption-emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck-Cummings algorithm.
NASA Astrophysics Data System (ADS)
Ma, C. Y.; Zhao, J. M.; Liu, L. H.; Zhang, L.; Li, X. C.; Jiang, B. C.
2016-03-01
Inverse identification of the radiative properties of participating media is usually time consuming. In this paper, a GPU-accelerated inverse identification model is presented to obtain the radiative properties of particle suspensions. The sample medium is placed in a cuvette and a narrow light beam is irradiated normally from the side. The forward three-dimensional radiative transfer problem is solved using a massively parallel Monte Carlo method implemented on a graphics processing unit (GPU), and a particle swarm optimization algorithm is applied to inversely identify the radiative properties of particle suspensions based on the measured bidirectional scattering distribution function (BSDF). The GPU-accelerated Monte Carlo simulation significantly reduces the solution time of the radiative transfer simulation and hence greatly accelerates the inverse identification process. Speedups of several hundred are achieved compared to the CPU implementation. It is demonstrated, using both simulated BSDFs and the experimentally measured BSDF of microalgae suspensions, that the radiative properties of particle suspensions can be effectively identified based on the GPU-accelerated algorithm with three-dimensional radiative transfer modelling.
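The particle swarm optimization step of such an inversion can be sketched as below. The "forward model" here is a toy analytic stand-in for the Monte Carlo radiative transfer solve, and the two parameters and their target values are hypothetical; the point is only the swarm update rule driving the mismatch with the "measured" signal toward zero.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy inverse problem: recover parameters p = (mu_s, mu_a) such that the
# forward model reproduces a measured scalar signal. The forward model
# below is an illustrative stand-in, not a radiative transfer solver.
target = np.array([1.5, 0.3])                 # hypothetical true parameters
forward = lambda p: np.exp(-p[1]) * p[0]      # toy forward model
measured = forward(target)

n_p, dims = 30, 2
pos = rng.random((n_p, dims)) * 3             # initial particle positions
vel = np.zeros((n_p, dims))
pbest = pos.copy()                            # per-particle best positions
pbest_f = np.array([abs(forward(p) - measured) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()        # swarm-wide best position

for _ in range(200):
    r1, r2 = rng.random((2, n_p, dims))
    # inertia + cognitive (pbest) + social (gbest) velocity update
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([abs(forward(p) - measured) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(abs(forward(gbest) - measured))
```

In the paper each fitness evaluation is a full GPU Monte Carlo BSDF simulation, which is why accelerating the forward solve dominates the overall inversion time.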
Zhang, Yong; Chen, Bin; Li, Dong
2016-04-01
To investigate the influence of polarization on polarized light propagation in biological tissue, a polarized geometric Monte Carlo method is developed. The Stokes-Mueller formalism is used to describe the change of light polarization during propagation events, including scattering and interface interaction. The scattering amplitudes and optical parameters of different tissue structures are obtained using Mie theory. Through simulations of polarized light (pulsed dye laser at a wavelength of 585 nm) propagation in an infinite slab tissue model and a discrete vessel tissue model, energy depositions in tissue structures are calculated and compared with those obtained through general geometric Monte Carlo simulation under the same parameters but without consideration of the polarization effect. It is found that the absorption depth of the polarized light is about one half of that determined by conventional simulations. In the discrete vessel model, low penetrability manifests in three aspects: diffuse reflection becomes the main contributor to the energy escape, the proportion of epidermal energy deposition increases significantly, and energy deposition in the blood becomes weaker and more uneven. This may indicate that, when the polarization effect is considered, the actual thermal damage to the epidermis during real-world treatment is higher and deeply buried blood vessels are insufficiently damaged, compared with the conventional prediction. PMID:27139673
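The Stokes-Mueller bookkeeping behind such a simulation is simple linear algebra: each event transforms the Stokes vector S = (I, Q, U, V) by a 4x4 Mueller matrix. The sketch below uses two standard textbook matrices (an ideal horizontal linear polarizer and a reference-frame rotation), not the Mie scattering matrices of the paper's tissue model.

```python
import numpy as np

# Mueller matrix of an ideal horizontal linear polarizer.
def polarizer_h():
    return 0.5 * np.array([[1, 1, 0, 0],
                           [1, 1, 0, 0],
                           [0, 0, 0, 0],
                           [0, 0, 0, 0.0]])

# Mueller matrix rotating the polarization reference frame by angle phi.
def rotator(phi):
    c, s = np.cos(2 * phi), np.sin(2 * phi)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1.0]])

S = np.array([1.0, 0.0, 0.0, 0.0])   # unpolarized light of unit intensity
S1 = polarizer_h() @ S               # half the intensity, fully Q-polarized
S2 = rotator(np.pi / 4) @ S1         # same beam described in a 45-deg frame
print(S1, S2)
```

In a polarized Monte Carlo code, a product of such matrices (scattering, frame rotations, Fresnel interfaces) is accumulated along each photon path, which is exactly the "shifting of light polarization during propagation events" the abstract describes.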
Lillhök, J E; Grindborg, J-E; Lindborg, L; Gudowska, I; Carlsson, G Alm; Söderberg, J; Kopeć, M; Medin, J
2007-08-21
Nanodosimetric single-event distributions or their mean values may contribute to a better understanding of how radiation induced biological damages are produced. They may also provide means for radiation quality characterization in therapy beams. Experimental nanodosimetry is however technically challenging and Monte Carlo simulations are valuable as a complementary tool for such investigations. The dose-mean lineal energy was determined in a therapeutic p(65)+Be neutron beam and in a (60)Co gamma beam using low-pressure gas detectors and the variance-covariance method. The neutron beam was simulated using the condensed history Monte Carlo codes MCNPX and SHIELD-HIT. The dose-mean lineal energy was calculated using the simulated dose and fluence spectra together with published data from track-structure simulations. A comparison between simulated and measured results revealed some systematic differences and different dependencies on the simulated object size. The results show that both experimental and theoretical approaches are needed for an accurate dosimetry in the nanometer region. In line with previously reported results, the dose-mean lineal energy determined at 10 nm was shown to be related to clinical RBE values in the neutron beam and in a simulated 175 MeV proton beam as well. PMID:17671346
NASA Astrophysics Data System (ADS)
Trinci, G.; Massari, R.; Scandellari, M.; Boccalini, S.; Costantini, S.; Di Sero, R.; Basso, A.; Sala, R.; Scopinaro, F.; Soluri, A.
2010-09-01
The aim of this work is to present a new scintigraphic device able to change automatically the length of its collimator in order to adapt the spatial resolution to the gamma source distance. This patented technique replaces the need for the collimator changes that standard gamma cameras still require. Monte Carlo simulations represent the best tool for exploring new technological solutions for such an innovative collimation structure. They also provide a valid analysis of the response of gamma camera performance as well as of the advantages and limits of this new solution. Specifically, the Monte Carlo simulations are realized with the GEANT4 (GEometry ANd Tracking) framework, and the specific simulation object is a collimation method based on separate blocks that can be brought closer together and farther apart, in order to reach and maintain specific spatial resolution values for all source-detector distances. To verify the accuracy and faithfulness of these simulations, we performed experimental measurements with an identical setup and conditions. This confirms the power of the simulation as an extremely useful tool, especially where new technological solutions need to be studied, tested and analyzed before their practical realization. The final aim of this new collimation system is the improvement of SPECT techniques, with real control of the spatial resolution value during tomographic acquisitions. This principle allowed us to simulate a tomographic acquisition of two capillaries of radioactive solution, in order to verify the possibility of clearly distinguishing them.
NASA Astrophysics Data System (ADS)
Srinivasan, P.; Priya, S.; Patel, Tarun; Gopalakrishnan, R. K.; Sharma, D. N.
2015-01-01
DD/DT fusion neutron generators are used as sources of 2.5 MeV/14.1 MeV neutrons in experimental laboratories for various applications. Detailed knowledge of the radiation dose rates around the neutron generators is essential for ensuring radiological protection of the personnel involved in their operation. This work describes the experimental and Monte Carlo studies carried out in the Purnima Neutron Generator facility of the Bhabha Atomic Research Center (BARC), Mumbai. Verification and validation of the shielding adequacy was carried out by measuring the neutron and gamma dose rates at various locations inside and outside the neutron generator hall during different operational conditions, both for 2.5-MeV and 14.1-MeV neutrons, and comparing them with theoretical simulations. The calculated and experimental dose rates were found to agree within a maximum deviation of 20% at certain locations. This study has served in benchmarking the Monte Carlo simulation methods adopted for the shield design of such facilities. It has also helped in augmenting the existing shield thickness to reduce the neutron and associated gamma dose rates for radiological protection of personnel during operation of the generators at higher source neutron yields, up to 1 × 10^10 n/s.
Dieudonne, C.; Dumonteil, E.; Malvagi, F.; Diop, C. M.
2013-07-01
For several years, Monte Carlo burnup/depletion codes have been available which couple a Monte Carlo code, to simulate the neutron transport, to a deterministic method that computes the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way makes it possible to track fine three-dimensional effects and to avoid the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the time-expensive Monte Carlo solver called at each time step. Therefore, great improvements in terms of calculation time could be expected if one could avoid the Monte Carlo transport sequences. For example, it may seem interesting to run an initial Monte Carlo simulation only once, for the first time/burnup step, and then to use the concentration perturbation capability of the Monte Carlo code to replace the other time/burnup steps (the different burnup steps are seen as perturbations of the concentrations of the initial burnup step). This paper presents some advantages and limitations of this technique and preliminary results in terms of speed-up and figure of merit. Finally, we detail different possible calculation schemes based on this method. (authors)
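The coupling scheme being optimized above is quasi-static operator splitting: an expensive transport solve yields a flux that is frozen over each burnup step, and the Bateman equation is integrated over the step with that flux. The single-nuclide sketch below uses an illustrative stand-in for the transport solve and hypothetical numbers, not the paper's data.

```python
import math

# Quasi-static burnup coupling for one absorbing nuclide:
#   dN/dt = -sigma * phi * N, with phi frozen over each step.
sigma = 2.0e-24            # cm^2, hypothetical one-group cross section
N = 1.0e22                 # atoms/cm^3, initial number density
dt = 86400.0               # one-day burnup steps

def transport_solve(N):
    """Stand-in for the Monte Carlo flux calculation: the flux rises as
    the absorber burns out (self-shielding crudely mimicked)."""
    return 1.0e14 / (1.0 + sigma * N * 10.0)

for step in range(30):
    phi = transport_solve(N)          # expensive solve, frozen over the step
    N *= math.exp(-sigma * phi * dt)  # exact one-nuclide Bateman solution

print(N)
```

The paper's proposal amounts to replacing the repeated calls to `transport_solve` after the first one with concentration-perturbation estimates from the initial Monte Carlo run.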
Assessment of a fully 3D Monte Carlo reconstruction method for preclinical PET with iodine-124
NASA Astrophysics Data System (ADS)
Moreau, M.; Buvat, I.; Ammour, L.; Chouin, N.; Kraeber-Bodéré, F.; Chérel, M.; Carlier, T.
2015-03-01
Iodine-124 is a radionuclide well suited to the labeling of intact monoclonal antibodies. Yet, accurate quantification in preclinical imaging with I-124 is challenging due to the large positron range and a complex decay scheme including high-energy gammas. The aim of this work was to assess the quantitative performance of a fully 3D Monte Carlo (MC) reconstruction for preclinical I-124 PET. The high-resolution small animal PET Inveon (Siemens) was simulated using GATE 6.1. Three system matrices (SM) of different complexity were calculated in addition to a Siddon-based ray-tracing approach for comparison purposes. Each system matrix accounted for a more or less complete description of the physics processes both in the scanned object and in the PET scanner. One homogeneous water phantom and three heterogeneous phantoms including water, lungs and bones were simulated, where hot and cold regions were used to assess activity recovery as well as the trade-off between contrast recovery and noise in different regions. The benefit of accounting for scatter, attenuation, positron range and spurious coincidences occurring in the object when calculating the system matrix used to reconstruct I-124 PET images was highlighted. We found that the use of an MC SM including a thorough modelling of the detector response and physical effects in a uniform water-equivalent phantom was sufficient to achieve reasonable quantitative accuracy in homogeneous and heterogeneous phantoms. Modelling the phantom heterogeneities in the SM did not necessarily yield the most accurate estimate of the activity distribution, due to the high variance affecting many SM elements in the most sophisticated SM.
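Once a system matrix is in hand, however it was computed, the reconstruction itself is typically the MLEM iteration x <- x / (A^T 1) * A^T (y / (A x)). The sketch below uses a small random nonnegative matrix as a stand-in for the Monte Carlo-computed system matrix and noise-free projections; it illustrates the update rule, not the paper's Inveon model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy MLEM reconstruction: A maps voxel activities to detector-bin means.
n_bins, n_vox = 40, 10
A = rng.random((n_bins, n_vox))          # stand-in for the MC system matrix
x_true = rng.random(n_vox) * 10          # hypothetical activity distribution
y = A @ x_true                           # noise-free projection data

x = np.ones(n_vox)                       # uniform initial estimate
sens = A.sum(axis=0)                     # sensitivity image A^T 1
for _ in range(2000):
    x *= (A.T @ (y / (A @ x))) / sens    # multiplicative MLEM update
print(np.linalg.norm(A @ x - y) / np.linalg.norm(y))
```

The multiplicative form keeps the estimate nonnegative at every iteration, which is one reason MLEM pairs naturally with Poisson count data in PET.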
Monte Carlo Method in optical diagnostics of skin and skin tissues
NASA Astrophysics Data System (ADS)
Meglinski, Igor V.
2003-12-01
A novel Monte Carlo (MC) technique for photon migration through 3D media with spatially varying optical properties is presented. The employed MC technique combines statistical weighting variance reduction and real photon path tracing schemes. An overview of the results of applications of the developed MC technique in optical/near-infrared reflectance spectroscopy, confocal microscopy, fluorescence spectroscopy, OCT, Doppler flowmetry and Diffusing Wave Spectroscopy (DWS) is presented. In the framework of the model, skin is represented as a complex inhomogeneous multi-layered medium, where the spatial distribution of blood and chromophores is variable within the depth. Taking into account the variability of cell structure, we represent the interfaces of skin layers as quasi-random periodic wavy surfaces. The rough boundaries between the layers of different refractive indices play a significant role in the distribution of photons within the medium. The absorption properties of skin tissues in the visible and NIR spectral region are estimated by taking into account the anatomical structure of skin as determined from histology, including the spatial distribution of blood vessels, water and melanin content. The model takes into account the spatial distribution of fluorophores following the collagen fiber packing, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. Reasonable estimations for skin blood oxygen saturation and haematocrit are also included. The model is validated against an analytic solution of the photon diffusion equation for a semi-infinite homogeneous highly scattering medium. The results demonstrate that matching the refractive index of the medium significantly improves the contrast and spatial resolution of the spatial photon sensitivity profile. It is also demonstrated that when the model is supplied with reasonable physical and structural parameters of biological tissues, the results of skin reflectance spectra simulation
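The statistical weighting mentioned above can be illustrated on the simplest possible case, attenuation through a purely absorbing slab: an analog simulation kills photons stochastically, while a weighted simulation lets every photon cross and attenuates its weight along the path. Both reproduce Beer-Lambert attenuation; the numbers are illustrative, not from the paper.

```python
import math, random

random.seed(5)

# Transmission through a purely absorbing slab of thickness L:
# the exact answer is exp(-mu_a * L).
mu_a, L, n = 2.0, 1.0, 50000

# Analog MC: a photon survives if its sampled free path exceeds L.
analog = sum(-math.log(random.random()) / mu_a > L for _ in range(n)) / n

# Weighted MC: every photon crosses, carrying weight exp(-mu_a * L);
# here this estimator is deterministic (zero variance).
weighted = sum(math.exp(-mu_a * L) for _ in range(n)) / n

exact = math.exp(-mu_a * L)
print(analog, weighted, exact)
```

In a scattering medium the weighted scheme generalizes to multiplying the photon weight by the single-scattering albedo at each event, which is what makes weighted path tracing so much less noisy than analog absorption sampling.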
Monte Carlo methods for the simulation of positron emitters in SPECT systems
Dobrzeniecki, A.B.; Selcow, E.C.; Yanch, J.C.; Belanger, M.J.; Lu, A.; Esser, P.D.
1996-12-31
Monte Carlo simulations of nuclear medical systems are being widely used to better understand the characteristics of the acquired images. Single-photon emission computed tomography (SPECT) is an imaging modality that provides a physician with a nuclear medical image of function in an organ. In SPECT systems, the patient is injected with a radiopharmaceutical that is labeled with a photon-emitting radioisotope. A collimated gamma camera revolves around the patient, providing a series of planar images, which are then reconstructed to produce a three-dimensional image of the radiotracer distribution in the patient or phantom. The usage of positron emission computed tomography (PET) systems with {sup 18}F-labeled fluorodeoxyglucose (FDG) is an important mechanism for quantitating tumor glucose metabolism. This may facilitate the detection of primary and metastatic malignancies and the distinction between tissue regions that are normal, fibrous, necrotic, or cancerous. However, PET facilities are implemented in significantly fewer hospitals in the United States than SPECT systems. To address this issue and provide similar functional information to a clinician, there is a growing interest in imaging the 511-keV photons associated with PET agents in SPECT imaging systems. Note that the clinical utility of FDG as a diagnostic radiopharmaceutical cannot be replicated by known radiotracers emitting single photons. The authors are extending their simulations of SPECT systems to higher photon energies, where at present there is more disagreement between simulations and actual data. They discuss here possible reasons for this and steps being taken to address this discrepancy in the development of the modeling.
NASA Astrophysics Data System (ADS)
Wang, Brian; Goldstein, Moshe; Xu, X. George; Sahoo, Narayan
2005-03-01
Recently, the theoretical framework of the adjoint Monte Carlo (AMC) method has been developed using a simplified patient geometry. In this study, we extended our previous work by applying the AMC framework to a 3D anatomical model called VIP-Man constructed from the Visible Human images. First, the adjoint fluxes for the prostate (PTV) and rectum and bladder (organs at risk (OARs)) were calculated on a spherical surface of 1 m radius, centred at the centre of gravity of PTV. An importance ratio, defined as the PTV dose divided by the weighted OAR doses, was calculated for each of the available beamlets to select the beam angles. Finally, the detailed doses in PTV and OAR were calculated using a forward Monte Carlo simulation to include the electron transport. The dose information was then used to generate dose volume histograms (DVHs). The Pinnacle treatment planning system was also used to generate DVHs for the 3D plans with beam angles obtained from the AMC (3D-AMC) and a standard six-field conformal radiation therapy plan (3D-CRT). Results show that the DVHs for prostate from 3D-AMC and the standard 3D-CRT are very similar, showing that both methods can deliver prescribed dose to the PTV. A substantial improvement in the DVHs for bladder and rectum was found for the 3D-AMC method in comparison to those obtained from 3D-CRT. However, the 3D-AMC plan is less conformal than the 3D-CRT plan because only bladder, rectum and PTV are considered for calculating the importance ratios. Nevertheless, this study clearly demonstrated the feasibility of the AMC in selecting the beam directions as a part of a treatment planning based on the anatomical information in a 3D and realistic patient anatomy.
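The beam-angle selection step based on the importance ratio can be sketched as follows; the organ names, weights and dose values in the test are hypothetical placeholders, not numbers from the study:

```python
def select_beam_angles(beamlets, oar_weights, n_select=6):
    """Rank candidate beam angles by the importance ratio, defined as the
    PTV dose divided by the weighted sum of organ-at-risk (OAR) doses, and
    keep the best n_select angles.

    `beamlets` maps a beam angle to adjoint-flux-derived doses per organ,
    e.g. {"ptv": ..., "rectum": ..., "bladder": ...} (illustrative keys).
    """
    def importance(doses):
        oar_dose = sum(w * doses[organ] for organ, w in oar_weights.items())
        return doses["ptv"] / oar_dose

    ranked = sorted(beamlets, key=lambda ang: importance(beamlets[ang]),
                    reverse=True)
    return ranked[:n_select]
```

The selected angles would then feed the forward Monte Carlo dose calculation.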
Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V
2006-12-31
Based on digital image analysis and the inverse Monte Carlo method, a proximate analysis method is developed, and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominant type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)
NASA Astrophysics Data System (ADS)
Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.
2014-04-01
Traditional Ensemble Kalman Filter (EnKF) data assimilation requires computationally intensive Monte Carlo (MC) sampling, which suffers from filter inbreeding unless the number of simulations is large. Recently we proposed an alternative EnKF groundwater-data assimilation method that obviates the need for sampling and is free of inbreeding issues. In our new approach, theoretical ensemble moments are approximated directly by solving a system of corresponding stochastic groundwater flow equations. Like MC-based EnKF, our moment equations (ME) approach allows Bayesian updating of system states and parameters in real-time as new data become available. Here we compare the performances and accuracies of the two approaches on two-dimensional transient groundwater flow toward a well pumping water in a synthetic, randomly heterogeneous confined aquifer subject to prescribed head and flux boundary conditions.
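For contrast with the moment-equation approach, a single MC-based EnKF analysis step for a scalar state observed directly can be sketched as below (the perturbed-observation form; all numbers in the test are illustrative, and a real groundwater application would update a head/parameter vector rather than a scalar):

```python
import random
import statistics

def enkf_update(ensemble, obs, obs_var, seed=0):
    """One perturbed-observation EnKF analysis step for a scalar state that
    is observed directly. Each member is nudged toward a noisy copy of the
    observation with Kalman gain K = P / (P + R), where P is the sample
    forecast variance and R the observation-error variance."""
    rng = random.Random(seed)
    p = statistics.variance(ensemble)   # sample forecast variance
    k = p / (p + obs_var)               # Kalman gain
    return [x + k * (obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]
```

With a small ensemble, the sampling error in `p` is exactly the kind of noise that causes filter inbreeding, which the moment-equation variant avoids.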
Ganesh, P.; Kim, Jeongnim; Park, Changwon; Yoon, Mina; Reboredo, Fernando A.; Kent, Paul R. C.
2014-11-03
Highly accurate diffusion quantum Monte Carlo (QMC) studies of the adsorption and diffusion of atomic lithium in AA-stacked graphite are compared with van der Waals-including density functional theory (DFT) calculations. Predicted QMC lattice constants for pure AA graphite agree with experiment. Pure AA-stacked graphite is shown to challenge many van der Waals methods even when they are accurate for conventional AB graphite. Moreover, the highest overall DFT accuracy, considering pure AA-stacked graphite as well as lithium binding and diffusion, is obtained by the self-consistent van der Waals functional vdW-DF2, although errors in binding energies remain. Empirical approaches based on point charges such as DFT-D are inaccurate unless the local charge transfer is assessed. Our results demonstrate that the lithium-carbon system requires a simultaneous highly accurate description of both charge transfer and van der Waals interactions, favoring self-consistent approaches.
Ganesh, P; Kim, Jeongnim; Park, Changwon; Yoon, Mina; Reboredo, Fernando A; Kent, Paul R C
2014-12-01
Highly accurate diffusion quantum Monte Carlo (QMC) studies of the adsorption and diffusion of atomic lithium in AA-stacked graphite are compared with van der Waals-including density functional theory (DFT) calculations. Predicted QMC lattice constants for pure AA graphite agree with experiment. Pure AA-stacked graphite is shown to challenge many van der Waals methods even when they are accurate for conventional AB graphite. Highest overall DFT accuracy, considering pure AA-stacked graphite as well as lithium binding and diffusion, is obtained by the self-consistent van der Waals functional vdW-DF2, although errors in binding energies remain. Empirical approaches based on point charges such as DFT-D are inaccurate unless the local charge transfer is assessed. The results demonstrate that the lithium-carbon system requires a simultaneous highly accurate description of both charge transfer and van der Waals interactions, favoring self-consistent approaches. PMID:26583215
NASA Astrophysics Data System (ADS)
Yamamoto, Toshihiro
2014-06-01
The author of this paper recently proposed a Monte Carlo calculation algorithm to solve a complex transport equation with complex-valued weights. The algorithm enables one to generate neutron leakage-corrected group constants and anisotropic diffusion coefficients for a unit fuel pin cell or assembly. The group constants are subsequently used for multi-group deterministic core calculations. The technique, however, had some limitations when applied to general problems. Several improvements are made in this paper. The reflective boundary condition is now supported. It has been found that the cumbersome weight cancellation between fission sources with positive and negative weights can be omitted in general fuel assembly geometries. A homogenization method of diffusion coefficients for a fuel assembly has also been proposed.
Ohgoe, Takahiro; Kawashima, Naoki
2011-02-15
We study the supercounterfluid (SCF) states in the two-component hard-core Bose-Hubbard model on a square lattice, using the quantum Monte Carlo method based on the worm (directed-loop) algorithm. Since the SCF state is a state of pair condensation characterized by ⟨ab†⟩ ≠ 0, ⟨a⟩ = 0, and ⟨b⟩ = 0, where a and b are the order parameters of the two components, it is important to study the behavior of the pair-correlation function ⟨a_i b_i† a_j† b_j⟩. For this purpose, we propose a choice of the worm head for calculating the pair-correlation function. From this pair correlation, we confirm the Kosterlitz-Thouless character of the SCF phase. The simulation efficiency is also improved in the SCF phase.
Quantum Monte Carlo method for pairing phenomena: Supercounterfluid of two-species Bose gases in optical lattices
NASA Astrophysics Data System (ADS)
Ohgoe, Takahiro; Kawashima, Naoki
2011-02-01
We study the supercounterfluid (SCF) states in the two-component hard-core Bose-Hubbard model on a square lattice, using the quantum Monte Carlo method based on the worm (directed-loop) algorithm. Since the SCF state is a state of pair condensation characterized by ⟨ab†⟩ ≠ 0, ⟨a⟩ = 0, and ⟨b⟩ = 0, where a and b are the order parameters of the two components, it is important to study behaviors of the pair-correlation function
Simulation of energy absorption spectrum in NaI crystal detector for multiple gamma energy using Monte Carlo method
Wirawan, Rahadi; Waris, Abdul; Djamal, Mitra; Handayani, Gunawan
2015-04-16
The spectrum of gamma energy absorbed in a NaI crystal (scintillation detector) results from the interaction of gamma photons with the NaI crystal, and it is associated with the energy of the gamma photons incident on the detector. Through a simulation approach, we can make an early observation of the gamma energy absorption spectrum in a scintillator crystal detector (NaI) before the experiment is conducted. In this paper, we present the results of a simulation model of the gamma energy absorption spectrum for energies of 100-700 keV (i.e. 297 keV, 400 keV and 662 keV). The simulation was developed based on the concept of a photon-beam point-source distribution and photon interaction cross sections, using the Monte Carlo method. Our computational code successfully predicts the multiple energy peaks of the absorption spectrum derived from multiple photon energy sources.
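A toy version of such a simulation, reproducing a photopeak plus a Compton continuum for one incident energy; the interaction probability and the uniform Compton-deposit model are crude stand-ins for real cross sections and the Klein-Nishina distribution, so the shape is only qualitative:

```python
import random

def simulate_spectrum(e_gamma=662.0, n=50000, p_photo=0.3, fwhm_frac=0.07, seed=2):
    """Toy NaI pulse-height spectrum for one incident gamma energy (keV).

    Each photon either deposits its full energy (photoelectric absorption,
    probability p_photo -- an illustrative number, not a real cross-section
    ratio) or Compton-scatters once, with the deposit drawn uniformly up to
    the Compton edge (a crude stand-in for the Klein-Nishina distribution).
    Deposits are smeared by a Gaussian detector resolution.
    """
    rng = random.Random(seed)
    edge = e_gamma - e_gamma / (1.0 + 2.0 * e_gamma / 511.0)  # Compton edge
    sigma = fwhm_frac * e_gamma / 2.355                       # FWHM -> sigma
    deposits = []
    for _ in range(n):
        e = e_gamma if rng.random() < p_photo else rng.random() * edge
        deposits.append(max(0.0, rng.gauss(e, sigma)))
    return deposits, edge
```

Histogramming `deposits` yields a smeared photopeak at `e_gamma` with a continuum below the Compton edge.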
Monte Carlo variance reduction
NASA Technical Reports Server (NTRS)
Byrn, N. R.
1980-01-01
Computer program incorporates technique that reduces variance of forward Monte Carlo method for given amount of computer time in determining radiation environment in complex organic and inorganic systems exposed to significant amounts of radiation.
NASA Technical Reports Server (NTRS)
Palmer, Grant; Prabhu, Dinesh; Cruden, Brett A.
2013-01-01
The 2013-2022 Decadal Survey for planetary exploration identified probe missions to Uranus and Saturn as high priorities. This work examines the uncertainty in determining aeroheating in such entry environments. Representative entry trajectories are constructed using the TRAJ software. Flowfields at selected points on the trajectories are then computed using the Data Parallel Line Relaxation (DPLR) computational fluid dynamics code. A Monte Carlo study is performed on the DPLR input parameters to determine the uncertainty in the predicted aeroheating, and correlation coefficients are examined to identify which input parameters have the most influence on the uncertainty. A review of present best practices for input parameters (e.g. transport coefficients and vibrational relaxation time) is also conducted. It is found that the 2(sigma) uncertainty for heating on Uranus entry is no more than 2.1%, assuming an equilibrium catalytic wall, with the uncertainty determined primarily by diffusion and the H(sub 2) recombination rate within the boundary layer. However, if the wall is assumed to be partially or non-catalytic, this uncertainty may increase to as large as 18%. The catalytic wall model can contribute over a 3x change in heat flux and a 20% variation in film coefficient. Therefore, coupled material response/fluid dynamics models are recommended for this problem. It was also found that much of this variability is artificially suppressed when a constant Schmidt number approach is implemented. Because the boundary layer is reacting, it is necessary to employ self-consistent effective binary diffusion to obtain a correct thermal transport solution. For Saturn entries, the 2(sigma) uncertainty for convective heating was less than 3.7%. The major uncertainty driver depended on shock temperature/velocity, changing from boundary-layer thermal conductivity to diffusivity and then to shock-layer ionization rate as velocity increases. While
Monte Carlo comparison of preliminary methods for ordering multiple genetic loci.
Olson, J M; Boehnke, M
1990-01-01
We carried out a simulation study to compare the power of eight methods for preliminary ordering of multiple genetic loci. Using linkage groups of six loci and a simple pedigree structure, we considered the effects on method performance of locus informativity, interlocus spacing, total distance along the chromosome, and sample size. Method performance was assessed using the mean rank of the true order, the proportion of replicates in which the true order was the best order, and the number of orders that needed to be considered for subsequent multipoint linkage analysis in order to include the true order with high probability. A new method which maximizes the sum of adjacent two-point maximum lod scores divided by the equivalent number of informative meioses and the previously described method which minimizes the sum of adjacent recombination fraction estimates were found to be the best overall locus-ordering methods for the situations considered, although several other methods also performed well. PMID:2393021
NASA Astrophysics Data System (ADS)
Vozinaki, Anthi Eirini K.; Karatzas, George P.; Sibetheros, Ioannis A.; Varouchakis, Emmanouil A.
2014-05-01
Damage curves are the most significant component of flood loss estimation models, and their development is quite complex. Two types of damage curves exist: historical and synthetic. Historical curves are developed from loss data from actual flood events. However, owing to the scarcity of historical data, synthetic damage curves can be developed instead; these rely on the analysis of expected damage under certain hypothetical flooding conditions. A synthetic approach is presented in this work for the development of damage curves, which are subsequently used as the basic input to a flood loss estimation model. A questionnaire-based survey was conducted among practicing and research agronomists in order to generate rural loss data based on the respondents' loss estimates for several flood-condition scenarios. A similar questionnaire-based survey was conducted among building experts, i.e. civil engineers and architects, in order to generate loss data for the urban sector. By answering the questionnaire, the experts were in essence expressing their opinion on how damage to various crop or building types is related to a range of values of flood inundation parameters, such as floodwater depth and velocity. However, the loss data compiled from the completed questionnaires were not sufficient for the construction of workable damage curves; to overcome this problem, a Weighted Monte Carlo method was implemented in order to generate extra synthetic datasets with statistical properties identical to those of the questionnaire-based data. The data generated by the Weighted Monte Carlo method were processed via logistic regression techniques in order to develop accurate logistic damage curves for the rural and urban sectors. A Python-based code was developed, which combines the Weighted Monte Carlo method and the logistic regression analysis into a single code (WMCLR Python code). Each WMCLR code execution
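The two stages described (weighted Monte Carlo generation of synthetic responses, then fitting a logistic damage curve) might be sketched as follows. This is not the authors' WMCLR code; the resampling scheme, learning rate and the flood-depth data in the test are invented placeholders:

```python
import math
import random

def weighted_bootstrap(samples, weights, n_out, seed=3):
    """Generate a synthetic dataset by weighted resampling of expert responses."""
    rng = random.Random(seed)
    return rng.choices(samples, weights=weights, k=n_out)

def fit_logistic(xs, ys, lr=0.5, epochs=5000):
    """Fit P(damage) = 1 / (1 + exp(-(a + b * x))) by gradient ascent on the
    log-likelihood, where x is a flood parameter such as inundation depth
    and y is a binary damage indicator."""
    a = b = 0.0
    for _ in range(epochs):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += y - p            # gradient w.r.t. intercept
            gb += (y - p) * x      # gradient w.r.t. slope
        a += lr * ga / len(xs)
        b += lr * gb / len(xs)
    return a, b
```

The fitted curve P(damage | depth) is exactly the form a logistic damage curve takes.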
NASA Technical Reports Server (NTRS)
Jensen, K. A.; Ripoll, J.-F.; Wray, A. A.; Joseph, D.; ElHafi, M.
2004-01-01
Five computational methods for solution of the radiative transfer equation in an absorbing-emitting and non-scattering gray medium were compared on a 2 m JP-8 pool fire. The temperature and absorption coefficient fields were taken from a synthetic fire due to the lack of a complete set of experimental data for fires of this size. These quantities were generated by a code that has been shown to agree well with the limited quantity of relevant data in the literature. Reference solutions to the governing equation were determined using the Monte Carlo method and a ray-tracing scheme with high angular resolution. Solutions using the discrete transfer method, the discrete ordinate method (DOM) with both S(sub 4) and LC(sub 11) quadratures, and the moment model using the M(sub 1) closure were compared to the reference solutions in both isotropic and anisotropic regions of the computational domain. DOM LC(sub 11) is shown to be more accurate than the commonly used S(sub 4) quadrature technique, especially in anisotropic regions of the fire domain. This represents the first study in which the M(sub 1) method was applied to a combustion problem occurring in a complex three-dimensional geometry. The M(sub 1) results agree well with the other solution techniques, which is encouraging for future applications to similar problems since it is computationally the least expensive solution technique. Moreover, M(sub 1) results are comparable to DOM S(sub 4).
NASA Astrophysics Data System (ADS)
Wang, Feng; Pfeiffer, W.; Kouzichine, N.; Qiu, Yuchang; Kuffel, E.; Li, Kenli
2008-02-01
The electron swarm parameters of SF6/N2 mixtures are calculated in the present study using an improved Monte Carlo collision simulation method (MCS), and some improved sampling techniques are also adopted. The simulation results show that the improved method provides more accurate results.
NASA Astrophysics Data System (ADS)
Feroz, F.; Hobson, M. P.
2008-02-01
In performing a Bayesian analysis of astronomical data, two difficult problems often emerge. First, in estimating the parameters of some model for the data, the resulting posterior distribution may be multimodal or exhibit pronounced (curving) degeneracies, which can cause problems for traditional Markov Chain Monte Carlo (MCMC) sampling methods. Secondly, in selecting between a set of competing models, calculation of the Bayesian evidence for each model is computationally expensive using existing methods such as thermodynamic integration. The nested sampling method introduced by Skilling, has greatly reduced the computational expense of calculating evidence and also produces posterior inferences as a by-product. This method has been applied successfully in cosmological applications by Mukherjee, Parkinson & Liddle, but their implementation was efficient only for unimodal distributions without pronounced degeneracies. Shaw, Bridges & Hobson recently introduced a clustered nested sampling method which is significantly more efficient in sampling from multimodal posteriors and also determines the expectation and variance of the final evidence from a single run of the algorithm, hence providing a further increase in efficiency. In this paper, we build on the work of Shaw et al. and present three new methods for sampling and evidence evaluation from distributions that may contain multiple modes and significant degeneracies in very high dimensions; we also present an even more efficient technique for estimating the uncertainty on the evaluated evidence. These methods lead to a further substantial improvement in sampling efficiency and robustness, and are applied to two toy problems to demonstrate the accuracy and economy of the evidence calculation and parameter estimation. Finally, we discuss the use of these methods in performing Bayesian object detection in astronomical data sets, and show that they significantly outperform existing MCMC techniques. An implementation
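The core nested-sampling loop (in Skilling's original rejection form, not the clustered or ellipsoidal variants discussed above) can be illustrated on a toy one-dimensional problem with a uniform prior on [0, 1]; the likelihood below is an invented example:

```python
import math
import random

def log_like(x):
    """Toy Gaussian likelihood on [0, 1], centred at 0.5 with width 0.1."""
    return -0.5 * ((x - 0.5) / 0.1) ** 2 - math.log(0.1 * math.sqrt(2.0 * math.pi))

def nested_sampling(n_live=100, n_iter=800, seed=4):
    """Estimate the evidence Z = integral of L(x) over a uniform prior.

    At each step the lowest-likelihood live point is discarded, the prior
    volume shrinks by the expected factor exp(-1/n_live), and the point is
    replaced by a fresh prior draw obeying the hard likelihood constraint
    (plain rejection sampling here; efficient implementations replace this
    step, which is precisely where the clustered methods differ).
    """
    rng = random.Random(seed)
    live = [rng.random() for _ in range(n_live)]
    z, x_prev = 0.0, 1.0
    for i in range(1, n_iter + 1):
        worst = min(live, key=log_like)
        l_min = log_like(worst)
        x_i = math.exp(-i / n_live)      # expected remaining prior volume
        z += math.exp(l_min) * (x_prev - x_i)
        x_prev = x_i
        live.remove(worst)
        while True:                      # draw a replacement with L > L_min
            cand = rng.random()
            if log_like(cand) > l_min:
                live.append(cand)
                break
    return z
```

Because this likelihood is normalized, the true evidence is approximately 1, which the accumulated sum approaches as the live set climbs the likelihood contours.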
ERIC Educational Resources Information Center
Carsey, Thomas M.; Harden, Jeffrey J.
2015-01-01
Graduate students in political science come to the discipline interested in exploring important political questions, such as "What causes war?" or "What policies promote economic growth?" However, they typically do not arrive prepared to address those questions using quantitative methods. Graduate methods instructors must…
NASA Astrophysics Data System (ADS)
Wang, Dong; Sun, Shilong; Tse, Peter W.
2015-02-01
A general sequential Monte Carlo method, in particular a general particle filter, has recently attracted much attention in prognostics because it can estimate on-line the posterior probability density functions of the states used in a state space model without making restrictive assumptions. In this paper, the general particle filter is introduced to optimize a wavelet filter for extracting bearing fault features. The major innovation of this paper is that the joint posterior probability density function of the wavelet parameters is represented by a set of random particles with associated weights, which is seldom reported. Once the joint posterior probability density function of the wavelet parameters is derived, the approximately optimal center frequency and bandwidth can be determined and used to perform optimal wavelet filtering for extracting bearing fault features. Two case studies are investigated to illustrate the effectiveness of the proposed method. The results show that the proposed method provides a Bayesian approach to extracting bearing fault features. Additionally, the proposed method can be generalized by using different wavelet functions and metrics, and can be applied more widely to any situation in which optimal wavelet filtering is required.
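One sequential importance-resampling step, the building block of such a particle filter, can be sketched as below; in the wavelet setting each particle would be a (center frequency, bandwidth) pair and the likelihood would come from a fault-feature metric, left abstract here:

```python
import random

def particle_filter_step(particles, weights, likelihood, seed=5):
    """One sequential importance-resampling (SIR) step: reweight every
    particle by the likelihood of the latest observation, normalize, and
    resample with replacement to obtain an equally weighted particle set
    approximating the posterior."""
    rng = random.Random(seed)
    w = [wi * likelihood(p) for p, wi in zip(particles, weights)]
    total = sum(w)
    w = [wi / total for wi in w]
    resampled = rng.choices(particles, weights=w, k=len(particles))
    return resampled, [1.0 / len(particles)] * len(particles)
```

After a few such steps the particle cloud concentrates where the likelihood is high, giving the approximately optimal parameter values as its mean or mode.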
Shi, C. Y.; Xu, X. George; Stabin, Michael G.
2008-07-15
Estimates of radiation absorbed doses from radionuclides internally deposited in a pregnant woman and her fetus are very important due to elevated fetal radiosensitivity. This paper reports a set of specific absorbed fractions (SAFs) for use with the dosimetry schema developed by the Society of Nuclear Medicine's Medical Internal Radiation Dose (MIRD) Committee. The calculations were based on three newly constructed pregnant female anatomic models, called RPI-P3, RPI-P6, and RPI-P9, that represent adult females at 3-, 6-, and 9-month gestational periods, respectively. Advanced Boundary REPresentation (BREP) surface-geometry modeling methods were used to create anatomically realistic geometries and organ volumes that were carefully adjusted to agree with the latest ICRP reference values. A Monte Carlo user code, EGS4-VLSI, was used to simulate internal photon emitters ranging from 10 keV to 4 MeV. SAF values were calculated and compared with previous data derived from stylized models of simplified geometries and with a model of a 7.5-month pregnant female developed previously from partial-body CT images. The results show considerable differences between these models for low energy photons, but generally good agreement at higher energies. These differences are caused mainly by different organ shapes and positions. Other factors, such as the organ mass, the source-to-target-organ centroid distance, and the Monte Carlo code used in each study, played lesser roles in the observed differences. Since the SAF values reported in this study are based on models that are anatomically more realistic than previous models, these data are recommended for future applications as standard reference values in internal dosimetry involving pregnant females.
Sharma, Diksha; Badal, Andreu; Badano, Aldo
2012-04-21
The computational modeling of medical imaging systems often requires obtaining a large number of simulated images with low statistical uncertainty which translates into prohibitive computing times. We describe a novel hybrid approach for Monte Carlo simulations that maximizes utilization of CPUs and GPUs in modern workstations. We apply the method to the modeling of indirect x-ray detectors using a new and improved version of the code MANTIS, an open source software tool used for the Monte Carlo simulations of indirect x-ray imagers. We first describe a GPU implementation of the physics and geometry models in fastDETECT2 (the optical transport model) and a serial CPU version of the same code. We discuss its new features like on-the-fly column geometry and columnar crosstalk in relation to the MANTIS code, and point out areas where our model provides more flexibility for the modeling of realistic columnar structures in large area detectors. Second, we modify PENELOPE (the open source software package that handles the x-ray and electron transport in MANTIS) to allow direct output of location and energy deposited during x-ray and electron interactions occurring within the scintillator. This information is then handled by optical transport routines in fastDETECT2. A load balancer dynamically allocates optical transport showers to the GPU and CPU computing cores. Our hybridMANTIS approach achieves a significant speed-up factor of 627 when compared to MANTIS and of 35 when compared to the same code running only in a CPU instead of a GPU. Using hybridMANTIS, we successfully hide hours of optical transport time by running it in parallel with the x-ray and electron transport, thus shifting the computational bottleneck from optical tox-ray transport. The new code requires much less memory than MANTIS and, asa result, allows us to efficiently simulate large area detectors. PMID:22469917
Biotic indices have been used to assess biological condition by dividing index scores into condition categories. Historically, the number of categories has been based on professional judgment. Alternatively, statistical methods such as power analysis can be used to determine the ...
Monte Carlo Library Least-Squares (MCLLS) Method for Multiple Radioactive Particle Tracking in PBR
NASA Astrophysics Data System (ADS)
Wang, Zhijian; Lee, Kyoung; Gardner, Robin
2010-03-01
In this work, a new method of radioactive particle tracking is proposed. Accurate detector response functions (DRFs) were developed from MCNP5 to generate a library for NaI detectors, with a significant speed-up factor of 200. This makes feasible the MCLLS method, which locates and tracks a radioactive particle in a modular Pebble Bed Reactor (PBR) by searching for minimum chi-square values. The method was tested and performed well under our laboratory conditions with an array of only six 2" x 2" NaI detectors. The method was introduced in both forward and inverse forms. A single radioactive-particle tracking system with three collimated 2" x 2" NaI detectors is used for benchmarking purposes.
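The inverse (minimum chi-square) search can be sketched as a lookup against a precomputed response library; the positions and count values in the test are invented placeholders, not data from the experiment:

```python
def locate_particle(counts, library):
    """Return the library position whose expected detector counts best match
    the measured counts in the minimum chi-square sense.

    `library` maps a candidate particle position to the expected counts in
    each detector, standing in for the MCNP-derived detector response
    functions; `counts` is the measured count vector.
    """
    def chi2(expected):
        # Pearson chi-square with the expected counts as the variance
        return sum((o - e) ** 2 / e for o, e in zip(counts, expected))

    return min(library, key=lambda pos: chi2(library[pos]))
```

Tracking is then repeated lookups over successive count vectors as the particle moves.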
Mosleh-Shirazi, Mohammad Amin; Karbasi, Sareh; Shahbazi-Gahrouei, Daryoush; Monadi, Shahram
2012-01-01
Full-buildup diodes can cause significant dose perturbation if they are used on most or all radiotherapy fractions. Given the importance of frequent in vivo measurements in complex treatments, using thin-buildup (low-perturbation) diodes instead is gathering interest. However, such diodes are strictly unsuitable for high-energy photons; therefore, their use requires evaluation and careful measurement of correction factors (CFs). There is little published data on such factors for low-perturbation diodes, and none on diode characterization for 9 MV X-rays. We report on MCNP4c Monte Carlo models of low-perturbation (EDD5) and medium-perturbation (EDP10) diodes, and a comparison of source-to-surface distance, field size, temperature, and orientation CFs for cobalt-60 and 9 MV beams. Most of the simulation results were within 4% of the measurements. The results argue against using the EDD5 at axial angles beyond ±50° and at tilt angles outside the range 0° to +50° at 9 MV. Outside these ranges, although the EDD5 can be used for accurate in vivo dosimetry at 9 MV, its CF variations were found to be 1.5-7.1 times larger than those of the EDP10 and, therefore, should be applied carefully. Finally, the MCNP diode models are sufficiently reliable tools for independent verification of potentially inaccurate measurements. PMID:23149783
Lee, Y K
2005-01-01
TRIPOLI-4.3 Monte Carlo transport code has been used to evaluate the QUADOS (Quality Assurance of Computational Tools for Dosimetry) problem P4, neutron and photon response of an albedo-type thermoluminescence personal dosemeter (TLD) located on an ISO slab phantom. Two enriched 6LiF and two 7LiF TLD chips were used and they were protected, in front or behind, with a boron-loaded dosemeter-holder. Neutron response of the four chips was determined by counting 6Li(n,t)4He events using ENDF/B-VI.4 library and photon response by estimating absorbed dose (MeV g(-1)). Ten neutron energies from thermal to 20 MeV and six photon energies from 33 keV to 1.25 MeV were used to study the energy dependence. The fraction of the neutron and photon response owing to phantom backscatter has also been investigated. Detailed TRIPOLI-4.3 solutions are presented and compared with MCNP-4C calculations. PMID:16381740
NASA Astrophysics Data System (ADS)
Pan, J.; Durand, M. T.; Vanderjagt, B. J.
2015-12-01
The Markov chain Monte Carlo (MCMC) method is a retrieval algorithm based on Bayes' rule: it starts from an initial state of snow/soil parameters and updates it through a series of new states by comparing the posterior probability of the simulated snow microwave signals before and after each random-walk step. It thereby approximates the probability of the snow/soil parameters conditioned on the measured microwave TB signals at different bands. Although this method can solve for all snow parameters, including depth, density, snow grain size and temperature, at the same time, it still needs prior information on these parameters for the posterior probability calculation. How the priors influence the SWE retrieval is a major concern. Therefore, in this paper, a sensitivity test is first carried out to study how accurate the snow emission models and how informative the snow priors need to be to keep the SWE error within a given bound. Synthetic TB simulated from the measured snow properties, plus a 2-K observation error, is used for this purpose. The aim is to provide guidance on MCMC application under different circumstances. The method is then applied to snowpits at different sites, including Sodankyla, Finland; Churchill, Canada; and Colorado, USA, using measured TB from ground-based radiometers at different bands. Building on the earlier work, the error in these practical cases is studied, and the error sources are separated and quantified.
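A random-walk Metropolis sampler for a single snow parameter gives a minimal stand-in for the full multi-parameter retrieval; the linear forward model in the test is purely illustrative (a real retrieval would use a snow emission model such as those the abstract evaluates), while the 2-K observation error matches the synthetic-TB setup:

```python
import math
import random

def metropolis(tb_obs, forward, prior_mean, prior_sd, obs_sd=2.0,
               n_steps=5000, step=5.0, seed=6):
    """Random-walk Metropolis sampling of one snow parameter (e.g. depth)
    given an observed brightness temperature TB and a forward emission
    model. Gaussian prior and observation error are assumed; both widths
    are illustrative stand-ins for real prior information."""
    rng = random.Random(seed)

    def log_post(x):
        return (-0.5 * ((tb_obs - forward(x)) / obs_sd) ** 2
                - 0.5 * ((x - prior_mean) / prior_sd) ** 2)

    x = prior_mean
    lp = log_post(x)
    samples = []
    for _ in range(n_steps):
        cand = x + rng.gauss(0.0, step)        # random-walk proposal
        lp_c = log_post(cand)
        if rng.random() < math.exp(min(0.0, lp_c - lp)):  # accept/reject
            x, lp = cand, lp_c
        samples.append(x)
    return samples
```

The spread of the post-burn-in samples directly exposes how the prior width trades off against the observation error, which is the sensitivity the paper's test probes.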
Geng, Bo; Zhou, Xiaobo; Zhu, Jinmin; Hung, Y S; Wong, Stephen T C
2008-04-01
Computational identification of missing enzymes plays a significant role in accurate and complete reconstruction of the metabolic network for both newly sequenced and well-studied organisms. For a metabolic reaction, given a set of candidate enzymes identified according to certain biological evidence, a powerful mathematical model is required to predict the actual enzyme(s) catalyzing the reaction. In this study, several plausible predictive methods are considered for the classification problem in missing enzyme identification, and comparisons are performed with an aim to identify a method with better performance than the Bayesian model used in previous work. In particular, a regression model consisting of a linear term and a nonlinear term is proposed to apply to the problem, in which the reversible-jump Markov chain Monte Carlo (MCMC) learning technique (developed in [Andrieu C, de Freitas N, Doucet A. Robust full Bayesian learning for radial basis networks. Neural Comput 2001;13:2359-407]) is adopted to estimate the model order and the parameters. We evaluated the models using known reactions in Escherichia coli, Mycobacterium tuberculosis, Vibrio cholerae and Caulobacter crescentus bacteria, as well as one eukaryotic organism, Saccharomyces cerevisiae. Although support vector regression also exhibits comparable performance in this application, it was demonstrated that the proposed model achieves favorable prediction performance, particularly sensitivity, compared with the Bayesian method. PMID:17950040
Novel phase-space Monte-Carlo method for quench dynamics in 1D and 2D spin models
NASA Astrophysics Data System (ADS)
Pikovski, Alexander; Schachenmayer, Johannes; Rey, Ana Maria
2015-05-01
An important outstanding problem is the efficient numerical computation of quench dynamics in large spin systems. We propose a semiclassical method to study many-body spin dynamics in generic spin lattice models. The method, named DTWA, is based on a novel type of discrete Monte-Carlo sampling in phase-space. We demonstrate the power of the technique by comparisons with analytical and numerically exact calculations. It is shown that DTWA captures the dynamics of one- and two-point correlations in 1D systems. We also use DTWA to study the dynamics of correlations in 2D systems with many spins and different types of long-range couplings, in regimes where other numerical methods are generally unreliable. Computing spatial and time-dependent correlations, we find a sharp change in the speed of propagation of correlations at a critical range of interactions determined by the system dimension. The investigations are relevant for a broad range of systems including solids, atom-photon systems, and ultracold gases of polar molecules, trapped ions, Rydberg atoms, and magnetic atoms. This work has been financially supported by JILA-NSF-PFC-1125844, NSF-PIF-1211914, ARO, AFOSR, AFOSR-MURI.
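The discrete phase-space sampling idea can be illustrated on a single spin, assuming a spin-1/2 initialized along +x and precessing in a field along z. This is only a toy: DTWA's value lies in interacting many-spin lattices, which are omitted here:

```python
import math
import random

def dtwa_single_spin(b_field, t, n_samples=4000, seed=1):
    """Discrete phase-space Monte-Carlo sketch for ONE spin-1/2 along +x in
    a field along z. Each trajectory starts from a discrete phase-space
    point (s_x = 1, s_y = +/-1) and evolves classically; observables are
    averaged over trajectories."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        sx, sy = 1.0, rng.choice((-1.0, 1.0))  # discrete initial sampling
        # classical precession about z through angle b_field * t
        acc += sx * math.cos(b_field * t) - sy * math.sin(b_field * t)
    return acc / n_samples

# For this linear Hamiltonian the sampled average should approach the
# exact quantum expectation <S_x(t)> = cos(b_field * t).
val = dtwa_single_spin(b_field=1.0, t=0.7)
```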
NASA Astrophysics Data System (ADS)
Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming
2011-02-01
High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to obtain reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because the MC simulation offers high accuracy and suitability for 3-D heterogeneous media with refractive-index-mismatched boundaries, the GPU-cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.
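The Jacobian-based reconstruction step can be sketched as a regularized linear solve. In the paper, the rows of J come from MC-computed Green's functions; here J, y, and the Tikhonov regularization weight are toy assumptions standing in for the actual inversion scheme:

```python
def tikhonov_reconstruct(J, y, lam=1e-6):
    """Solve the regularized normal equations (J^T J + lam * I) x = J^T y
    for an unknown distribution x, given a Jacobian J and measurements y.
    Pure-Python Gaussian elimination; fine for a toy system."""
    n = len(J[0])
    A = [[sum(J[k][i] * J[k][j] for k in range(len(J)))
          + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    b = [sum(J[k][i] * y[k] for k in range(len(J))) for i in range(n)]
    # forward elimination
    for i in range(n):
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            for c in range(i, n):
                A[j][c] -= f * A[i][c]
            b[j] -= f * b[i]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# Diagonal toy Jacobian: the exact solution is x = [1, 2].
x = tikhonov_reconstruct([[1.0, 0.0], [0.0, 2.0]], [1.0, 4.0])
```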
NASA Astrophysics Data System (ADS)
Booth, George H.; Cleland, Deidre; Thom, Alex J. W.; Alavi, Ali
2011-08-01
The full configuration interaction quantum Monte Carlo (FCIQMC) method, as well as its "initiator" extension (i-FCIQMC), is used to tackle the complex electronic structure of the carbon dimer across the entire dissociation reaction coordinate, as a prototypical example of a strongly correlated molecular system. Various basis sets of increasing size up to the large cc-pVQZ are used, spanning a fully accessible N-electron basis of over 10^12 Slater determinants, and the accuracy of the method is demonstrated in each basis set. Convergence to the FCI limit is achieved in the largest basis with only O(10^7) walkers within random error bars of a few tenths of a millihartree across the binding curve, and extensive comparisons to FCI, CCSD(T), MRCI, and CEEIS results are made where possible. A detailed exposition of the convergence properties of the FCIQMC methods is provided, considering convergence with elapsed imaginary time, number of walkers, and size of the basis. Various symmetries which can be incorporated into the stochastic dynamics, beyond the standard Abelian point-group symmetry and spin polarisation, are also described. These can significantly reduce the computational effort of the calculations, as well as enable convergence to various excited states. The results presented demonstrate a new benchmark accuracy in basis-set energies for systems of this size, significantly improving on previous state-of-the-art estimates.
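The imaginary-time projection underlying FCIQMC can be illustrated deterministically. The 2x2 "Hamiltonian" below is a made-up toy, and the stochastic walker population, spawning, and annihilation steps of the actual method are deliberately omitted:

```python
def projector_ground_state(H, dt=0.1, n_iter=500):
    """Deterministic illustration of the imaginary-time projection that
    FCIQMC performs stochastically with signed walkers: iterating
    v <- (I - dt * H) v amplifies the lowest eigenvector, and the energy
    is read off from the Rayleigh quotient."""
    n = len(H)
    v = [1.0] * n
    for _ in range(n_iter):
        w = [v[i] - dt * sum(H[i][j] * v[j] for j in range(n))
             for i in range(n)]
        norm = max(abs(x) for x in w)  # renormalize to avoid blow-up
        v = [x / norm for x in w]
    num = sum(v[i] * sum(H[i][j] * v[j] for j in range(n)) for i in range(n))
    den = sum(x * x for x in v)
    return num / den

# Toy 2x2 matrix; its exact lowest eigenvalue is (1 - sqrt(2)) / 2.
E0 = projector_ground_state([[0.0, -0.5], [-0.5, 1.0]])
```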
NASA Astrophysics Data System (ADS)
Bao, J.; Ren, H.; Hou, Z.; Ray, J.; Swiler, L.; Huang, M.
2015-12-01
We developed a novel scalable multi-chain Markov chain Monte Carlo (MCMC) method for high-dimensional inverse problems. The method is scalable in terms of the number of chains and processors, and is useful for Bayesian calibration of the computationally expensive simulators typically used for scientific and engineering calculations. In this study, we demonstrate two applications of this method to hydraulic and geological inverse problems. The first is monitoring soil moisture variations using tomographic ground penetrating radar (GPR) travel time data, where challenges exist in handling the non-uniqueness, nonlinearity, and high dimensionality of the unknowns in the inversion of GPR tomographic data. We integrated the multi-chain MCMC framework with the pilot point concept, a curved-ray GPR forward model, and a sequential Gaussian simulation (SGSIM) algorithm for estimating the dielectric permittivity at pilot point locations distributed within the tomogram, as well as its spatial correlation range, which are used to construct the whole field of dielectric permittivity using SGSIM. The second application is reservoir porosity and saturation estimation, using the multi-chain MCMC approach to jointly invert marine seismic amplitude versus angle (AVA) and controlled-source electro-magnetic (CSEM) data for a layered reservoir model, where the unknowns to be estimated include the porosity and fluid saturation in each reservoir layer and the electrical conductivity of the overburden and bedrock. The computational efficiency, accuracy, and convergence behaviors of the inversion approach are systematically evaluated.
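When several chains run in parallel, their agreement is commonly checked with the Gelman-Rubin potential scale reduction factor. The sketch below computes it for synthetic chains; the Gaussian test data and chain sizes are illustrative, not from the study:

```python
import math
import random

def rhat(chains):
    """Gelman-Rubin potential scale reduction factor for a list of
    equal-length chains: a standard convergence diagnostic when running
    multiple MCMC chains in parallel."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    B = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)   # between-chain
    W = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m               # within-chain
    var_plus = (n - 1) / n * W + B / n
    return math.sqrt(var_plus / W)

# Four chains drawing from the same Gaussian should give R-hat close to 1;
# values well above 1 would indicate the chains have not yet mixed.
rng = random.Random(2)
chains = [[rng.gauss(0.0, 1.0) for _ in range(2000)] for _ in range(4)]
r = rhat(chains)
```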
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo H.; Good, Brian; Noebe, Ronald D.; Honecy, Frank; Abel, Phillip
1999-01-01
Large-scale simulations of dynamic processes at the atomic level have developed into one of the main areas of work in computational materials science. Until recently, severe computational restrictions, as well as the lack of accurate methods for calculating the energetics, resulted in slower growth in the area than that required by current alloy design programs. The Computational Materials Group at the NASA Lewis Research Center is devoted to the development of powerful, accurate, economical tools to aid in alloy design. These include the BFS (Bozzolo, Ferrante, and Smith) method for alloys (ref. 1) and the development of dedicated software for large-scale simulations based on Metropolis Monte Carlo numerical techniques, as well as state-of-the-art visualization methods. Our previous effort linking theoretical and computational modeling resulted in the successful prediction of the microstructure of a five-element intermetallic alloy, in excellent agreement with experimental results (refs. 2 and 3). This effort also produced a complete description of the role of alloying additions in intermetallic binary, ternary, and higher order alloys (ref. 4).
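A Metropolis lattice simulation of the kind referred to above can be sketched in a few lines. The nearest-neighbour Ising energy below is a stand-in assumption for the BFS energetics, and the lattice size and temperatures are arbitrary:

```python
import math
import random

def metropolis_ising(L=16, beta=0.2, sweeps=200, seed=3):
    """Metropolis Monte Carlo on an L x L Ising lattice with periodic
    boundaries. Returns the final magnetization per site; a minimal
    stand-in for lattice alloy simulations."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]            # start fully ordered
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2.0 * s[i][j] * nb            # cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                s[i][j] = -s[i][j]             # Metropolis accept
    return abs(sum(sum(row) for row in s)) / (L * L)

m_hot = metropolis_ising(beta=0.2)   # above the critical temperature
m_cold = metropolis_ising(beta=1.0)  # well below it
```

The disordered high-temperature run gives a small magnetization, while the low-temperature run stays near full order, the qualitative behavior such lattice Monte Carlo codes are built to capture.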
Safigholi, Habib; Faghihi, Reza; Jashni, Somaye Karimi; Meigooni, Ali S.
2012-04-15
distribution is less sensitive to the shape of the conical-hemisphere anode than the hemispherical anode. However, the optimized apex angle of the conical-hemisphere anode was determined to be 60 deg. For the hemispherical targets, calculated radial dose function values at a distance of 5 cm were 0.137, 0.191, 0.247, and 0.331 for 40, 50, 60, and 80 keV electrons, respectively. The corresponding values for the conical-hemisphere targets are 0.165, 0.239, 0.305, and 0.412. Calculated 2D anisotropy function values for the hemispherical target shape were F(1 cm, 0 deg.) = 1.438 and F(1 cm, 0 deg.) = 1.465 for 30 and 80 keV electrons, respectively. The corresponding values for the conical-hemisphere targets are 1.091 and 1.241. Conclusions: A method for the characterization of MEBXS in terms of TG-43U1 dosimetric data, using the MCNP4C MC code, has been presented. The effects of target geometry, thickness, and electron source geometry have been investigated. The final choice of MEBXS design is a conical-hemisphere target shape with an apex angle of 60 deg, tungsten of optimized thickness versus electron energy, and a uniform cylindrical cathode of 0.9 mm radius, which produces optimal electron source characteristics.
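The TG-43U1 quantities reported above combine into a dose rate via the standard formalism. The sketch below uses the point-source geometry function as an assumption (the report also defines a line-source form), and sets S_K, Lambda, and F to 1 purely for illustration:

```python
def tg43_dose_rate(sk, lam, r, g_table, F_val, r0=1.0):
    """Point-source TG-43U1 sketch: dose rate = S_K * Lambda *
    [G(r)/G(r0)] * g(r) * F(r, theta), with geometry function
    G(r) = 1 / r^2 and linear interpolation in a small radial dose
    function table g_table of (r_i, g_i) pairs."""
    geo = (r0 / r) ** 2                        # geometry-function ratio
    pts = sorted(g_table)
    g = pts[0][1] if r <= pts[0][0] else pts[-1][1]
    for (ra, ga), (rb, gb) in zip(pts, pts[1:]):
        if ra <= r <= rb:
            g = ga + (gb - ga) * (r - ra) / (rb - ra)
            break
    return sk * lam * geo * g * F_val

# Using the abstract's conical-hemisphere g(5 cm) = 0.305 for 60 keV
# electrons; S_K, Lambda, and F are set to 1 for illustration only.
d5 = tg43_dose_rate(1.0, 1.0, 5.0, [(1.0, 1.0), (5.0, 0.305)], 1.0)
```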