Monte Carlo simulation of x-ray spectra in diagnostic radiology and mammography using MCNP4C
NASA Astrophysics Data System (ADS)
Ay, M. R.; Shahriari, M.; Sarkar, S.; Adib, M.; Zaidi, H.
2004-11-01
The general-purpose Monte Carlo N-Particle radiation transport code (MCNP4C) was used to simulate x-ray spectra in diagnostic radiology and mammography. Electrons were transported until they slowed down and stopped in the target, and both bremsstrahlung and characteristic x-ray production were considered. We focus on simulating various target/filter combinations to investigate the effect of tube voltage, target material and filter thickness on x-ray spectra in the diagnostic radiology and mammography energy ranges. The simulated x-ray spectra were compared with experimental measurements and with spectra calculated in IPEM report number 78. In addition, the anode heel effect and off-axis x-ray spectra were assessed for different anode angles and target materials, and the results were compared with EGS4-based Monte Carlo simulations and measured data. Differences between our Monte Carlo simulated spectra and the comparison spectra were quantified using Student's t-test. Generally, there is good agreement between the simulated and comparison spectra, although systematic differences remain, especially in the intensities of the K-characteristic x-rays. Nevertheless, no statistically significant differences were observed between the IPEM spectra and the simulated spectra. The difference between the MCNP-simulated and IPEM spectra in the low-energy range was shown to result from an overestimation of characteristic photons following the normalization procedure. The transmission curves produced by MCNP4C agree well with the IPEM report, especially for tube voltages of 50 kV and 80 kV; the systematic discrepancy at higher tube voltages results from systematic differences between the corresponding spectra.
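One plausible reading of the statistical comparison above is a paired Student's t-test on per-bin spectrum intensities. A minimal sketch, using hypothetical Poisson-distributed bin counts rather than the paper's data:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binned photon intensities for two 80 kV spectra
# (one value per energy bin); these stand in for a simulated and
# a reference spectrum.
simulated = rng.poisson(lam=1000, size=70).astype(float)
reference = rng.poisson(lam=1000, size=70).astype(float)

# Paired Student's t statistic computed from the per-bin differences.
d = simulated - reference
t_stat = d.mean() / (d.std(ddof=1) / math.sqrt(d.size))
print(f"t = {t_stat:.3f} with {d.size - 1} degrees of freedom")
```

A |t| well below the critical value at the chosen significance level corresponds to the paper's finding of no statistically significant difference.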
Development and validation of MCNP4C-based Monte Carlo simulator for fan- and cone-beam x-ray CT.
Ay, Mohammad Reza; Zaidi, Habib
2005-10-21
An x-ray computed tomography (CT) simulator based on the Monte Carlo N-particle radiation transport computer code (MCNP4C) was developed for simulation of both fan- and cone-beam CT scanners. A user-friendly interface running under Matlab 6.5.1 creates the scanner geometry at different views as MCNP4C's input file. The full simulation of x-ray tube, phantom and detectors with single-slice, multi-slice and flat detector configurations was considered. The simulator was validated through comparison with experimental measurements of different nonuniform phantoms with varying sizes on both a clinical and a small-animal CT scanner. There is good agreement between the simulated and measured projections and reconstructed images. Thereafter, the effects of bow-tie filter, phantom size and septa length on scatter distribution in fan-beam CT were studied in detail. The relative difference between detected total, primary and scatter photons for septa length varying between 0 and 95 mm is 11.2%, 1.9% and 84.1%, respectively, whereas the scatter-to-primary ratio decreases by 83.8%. The developed simulator is a powerful tool for evaluating the effect of physical, geometrical and other design parameters on scanner performance and image quality in addition to offering a versatile tool for investigating potential artefacts and correction schemes when using CT-based attenuation correction on dual-modality PET/CT units. PMID:16204878
NASA Astrophysics Data System (ADS)
Pacilio, M.; Aragno, D.; Rauco, R.; D'Onofrio, S.; Pressello, M. C.; Bianciardi, L.; Santini, E.
2007-07-01
The energy dependence of the radiochromic film (RCF) response to beta-emitting sources was studied by theoretical dose calculations employing the MCNP4C and EGSnrc/BEAMnrc Monte Carlo codes. Irradiations with virtual monochromatic electron sources, electron and photon clinical beams, a 32P intravascular brachytherapy (IVB) source and other beta-emitting radioisotopes (188Re, 90Y, 90Sr/90Y, 32P) were simulated. The MD-55-2 and HS radiochromic films were considered, in a planar or cylindrical irradiation geometry, with water or polystyrene as the surrounding medium. For the virtual monochromatic sources, the dose absorbed by the film, relative to that absorbed by the surrounding medium, decreased monotonically with energy. For the IVB 32P source and the MD-55-2 film in a cylindrical geometry, calibration with a 6 MeV electron beam would yield dose underestimations from 14 to 23% as the source-to-film radial distance increases from 1 to 6 mm. For the planar beta-emitting sources in water, calibrations with photon or electron clinical beams would yield dose underestimations between 5 and 12%. Calibrating the RCF with 90Sr/90Y, the MD-55-2 would yield dose underestimations between 3 and 5% for 32P and discrepancies within ±2% for 188Re and 90Y, whereas for the HS the dose underestimation would reach 4% with 188Re and 6% with 32P.
Calculation of the store house worker dose in a lost wax foundry using MCNP-4C.
Alegría, Natalia; Legarda, Fernando; Herranz, Margarita; Idoeta, Raquel
2005-01-01
Lost wax casting is an industrial process that reproduces models made in wax as metal. The wax model is covered with a siliceous shell of the required thickness; once the shell is built, the set is heated and the wax melted out. Liquid metal is then cast into the shell, replacing the wax, and when the metal has cooled the shell is broken away to recover the metallic piece. In this process zircon sands are used to prepare the siliceous shell. These sands contain varying concentrations of the natural radionuclides 238U, 232Th and 235U together with their progeny. The zircon sand is distributed in 50 kg bags, 30 bags to a pallet, for a total weight of 1,500 kg. The pallets with the bags measure 80 cm x 120 cm x 80 cm and constitute the radiation source in this case. The only exposure pathway for workers in the store house is external radiation: there is no dust because the bags are closed and covered with plastic; the store house is well ventilated, so radon cannot accumulate; and the workers do not handle the bags with bare hands, so skin contamination cannot occur. All situations of external irradiation of the workers have been considered: transporting the pallets from the vehicle to the store house, lifting the pallets onto the shelf, the stock resting on the shelf, taking the pallets down, and carrying the pallets to the production area. These exposure situations were simulated with MCNP-4C, assuming a source of homogeneous composition and a minimum stock of seven pallets in the store house, and taking into account the various distances between the pallets and the workers at work. The photon flux obtained with MCNP-4C is multiplied by the flux-to-air-kerma conversion factor, by the effective-dose-per-unit-kerma conversion factor, and by the number of emitted photons; these conversion factors are taken from Tables 1 and 17 of ICRP 74, respectively.
In this way a function giving the dose rate around the source is obtained. PMID:16604600
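The dose-rate chain described above (tallied flux per source photon, times flux-to-air-kerma, times effective dose per unit kerma, times photon yield and activity) can be sketched as follows. All numerical values below are illustrative placeholders, not the paper's data or the actual ICRP 74 coefficients:

```python
# Hypothetical photon lines (MeV) and yields per decay for the source.
photons_per_decay = {0.609: 0.455, 1.120: 0.149, 1.764: 0.153}

# Tallied flux at the worker position per emitted source photon
# (cm^-2), as an MCNP point or cell tally would report it.
flux_per_source_photon = 1.2e-4

# Energy-dependent conversion coefficients; the real values would be
# interpolated from ICRP 74 Tables 1 and 17.
flux_to_air_kerma = {0.609: 2.9e-12, 1.120: 5.0e-12, 1.764: 7.4e-12}  # Gy cm^2
dose_per_kerma = {0.609: 1.0, 1.120: 1.0, 1.764: 1.0}                 # Sv/Gy

activity = 5.0e6  # Bq, assumed source activity

# Effective dose rate: sum over photon lines of
# activity * yield * flux * (flux -> kerma) * (kerma -> effective dose).
dose_rate = sum(
    activity * yld * flux_per_source_photon
    * flux_to_air_kerma[e] * dose_per_kerma[e]
    for e, yld in photons_per_decay.items()
)  # Sv/s
print(f"effective dose rate ~ {dose_rate:.2e} Sv/s")
```

Repeating this for each worker position yields the dose-rate function around the source that the abstract describes.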
NASA Astrophysics Data System (ADS)
Zamani, M.; Kasesaz, Y.; Khalafi, H.; Pooya, S. M. Hosseini
Boron Neutron Capture Therapy (BNCT) is used in many medical centers to treat a range of diseases, including brain tumors. In this method, a target area (e.g., the head of the patient) is irradiated with an optimized, suitable neutron field, such as that of a research nuclear reactor. To protect the healthy tissues located in the vicinity of the irradiated tissue, and in keeping with the ALARA principle, unnecessary exposure of these vital organs must be prevented. In this study, using numerical simulation (the MCNP4C code), the absorbed dose in the target tissue and the equivalent dose in different sensitive tissues of a patient treated by BNCT are calculated. For this purpose, we used the parameters of the MIRD standard phantom. The equivalent dose in 11 sensitive organs located in the vicinity of the target, and the total equivalent dose in the whole body, have been calculated. The results show that the absorbed doses in the tumor and in normal brain tissue are 30.35 Gy and 0.19 Gy, respectively. The total equivalent dose in the 11 sensitive organs, other than the tumor and normal brain tissue, is 14 mGy. The maximum equivalent doses in organs other than the brain and tumor occur in the lungs and thyroid and are 7.35 mSv and 3.00 mSv, respectively.
Dawahra, S; Khattab, K; Saba, G
2015-10-01
A comparative study of fuel conversion from HEU to LEU in the Miniature Neutron Source Reactor (MNSR) was performed using the MCNP4C code. The neutron energy and lethargy flux spectra in the first inner and outer irradiation sites of the MNSR were investigated for the existing HEU fuel (UAl4-Al, 90% enriched) and the potential LEU fuels (U3Si2-Al, U3Si-Al, U9Mo-Al, 19.75% enriched, and UO2, 12.6% enriched). The neutron energy flux spectrum was obtained by dividing the flux in each group by the width of that energy group, and the flux spectrum per unit lethargy was obtained by multiplying the energy flux spectrum of each group by the group's average energy. The thermal neutron flux was calculated by summing the neutron fluxes from 0.0 to 0.625 eV, and the fast neutron flux by summing the fluxes from 0.5 MeV to 10 MeV, for the existing HEU and the potential LEU fuels. Good agreement was observed between the flux spectra of the potential LEU fuels and the existing HEU fuel, with maximum relative differences of less than 10% and 8% in the inner and outer irradiation sites, respectively. PMID:26142805
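The group-wise spectrum arithmetic described above can be sketched in a few lines; the group edges and tallied fluxes below are illustrative, not the MNSR values:

```python
import numpy as np

# Hypothetical group-wise tallied fluxes (as from an MCNP tally) and
# energy bin edges in MeV: three coarse groups, thermal / intermediate / fast.
edges = np.array([1e-9, 0.625e-6, 0.5, 10.0])
group_flux = np.array([3.2e12, 1.1e12, 0.8e12])  # n cm^-2 s^-1 per group

widths = np.diff(edges)                      # group widths dE
avg_energy = 0.5 * (edges[:-1] + edges[1:])  # group mid-point energies

# Energy spectrum: flux divided by the width of each energy group.
energy_spectrum = group_flux / widths
# Lethargy spectrum: energy spectrum times the group's average energy.
lethargy_spectrum = energy_spectrum * avg_energy

# Thermal flux: sum of group fluxes below 0.625 eV (first group here);
# fast flux: sum of group fluxes from 0.5 to 10 MeV (last group here).
thermal_flux = group_flux[0]
fast_flux = group_flux[2]
```

With a fine group structure the same two array operations produce the plotted spectra directly.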
Bagheri, Reza; Afarideh, Hossein; Maragheh, Mohammad Ghannadi; Shirmardi, Seyed Pezhman; Samani, Ali Bahrami
2015-05-01
Bone metastases are a major clinical concern that can cause severe problems for patients. Currently, various beta emitters are used for bone pain palliation. This study describes the prediction of the absorbed dose from selected bone surface- and volume-seeking beta-emitting radiopharmaceuticals, namely (32)P, (89)SrCl2, (90)Y-EDTMP, (153)Sm-EDTMP, (166)Ho-DOTMP, (177)Lu-EDTMP, (186)Re-HEDP, and (188)Re-HEDP, in human bone using the MCNP code. Three coaxial sub-cylinders, 5 cm in height and 1.2, 2.6, and 7.6 cm in diameter, were used to model bone marrow, bone, and muscle, respectively. The *F8 tally was employed to calculate the absorbed dose in the MCNP4C simulations. The results show that for an injection of 1 MBq of these radiopharmaceuticals into a 70 kg adult man, (32)P, (89)SrCl2, and (90)Y-EDTMP deliver the highest bone surface absorbed dose, with beta particles contributing the greatest share of the bone surface dose compared with gamma radiation. These results show moderate agreement with available experimental data. PMID:25775234
Monte Carlo N Particle code - Dose distribution of clinical electron beams in inhomogeneous phantoms
Nedaie, H. A.; Mosleh-Shirazi, M. A.; Allahverdi, M.
2013-01-01
Electron dose distributions calculated using the currently available analytical methods can be associated with large uncertainties, and the Monte Carlo method is the most accurate method for dose calculation in electron beams. Most clinical electron beam simulation studies have been performed using codes other than MCNP (Monte Carlo N-Particle). Given the differences between Monte Carlo codes, this work evaluates the accuracy of MCNP4C-simulated electron dose distributions in a homogeneous phantom and around inhomogeneities. Phantoms of varying complexity were used: a homogeneous water phantom and phantoms made of polymethyl methacrylate slabs containing different-sized, low- and high-density inserts of heterogeneous materials. Electron beams with 8 and 15 MeV nominal energy generated by an Elekta Synergy linear accelerator were investigated. Measurements were performed for a 10 cm × 10 cm applicator at a source-to-surface distance of 100 cm. Individual parts of the beam-defining system were introduced into the simulation one at a time to show their effect on depth doses. In contrast to the first scattering foil, the secondary scattering foil, the X and Y jaws and the applicator together contribute up to 5% of the dose. A 2%/2 mm agreement between MCNP and measurements was found in the homogeneous phantom; in the presence of heterogeneities the differences were in the range of 1-3%, generally within 2% of the measurements for both energies in a "complex" phantom. A full-component simulation is necessary to obtain a realistic model of the beam. The MCNP4C results agree well with the measured electron dose distributions. PMID:23533162
Zimmerman, G.B.
1997-06-24
Monte Carlo methods appropriate for simulating the transport of x-rays, neutrons, ions and electrons in inertial confinement fusion (ICF) targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but its efficiency can be improved 50-fold by angularly biasing the x-rays toward the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged-particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged-particle transport through heterogeneous materials.
Applications of Monte Carlo Methods in Calculus.
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Gordon, Florence S.
1990-01-01
Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
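The Riemann-sum application mentioned above has a direct Monte Carlo counterpart: average the integrand at random points instead of at regular partition points. A minimal sketch (the function and interval are chosen here only for illustration):

```python
import random

def mc_integral(f, a, b, n=100_000, seed=1):
    """Estimate the integral of f over [a, b] by averaging f at n
    uniformly random sample points and multiplying by the interval
    length -- the Monte Carlo analogue of a Riemann sum."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# Example: the integral of x^2 over [0, 1] is exactly 1/3.
estimate = mc_integral(lambda x: x * x, 0.0, 1.0)
print(round(estimate, 3))
```

The estimate converges at the usual 1/sqrt(n) Monte Carlo rate, independent of the dimension of the integral.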
Improved Monte Carlo Renormalization Group Method
Gupta, R.; Wilson, K.G.; Umrigar, C.
1985-01-01
An extensive program to analyse critical systems using an Improved Monte Carlo Renormalization Group (IMCRG) method, being undertaken at LANL and Cornell, is described. We first briefly review the method and then list some of the topics being investigated.
Quantum speedup of Monte Carlo methods
Montanaro, Ashley
2015-01-01
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079
Quantum Monte Carlo methods for nuclear physics
Carlson, J; Pederiva, F; Pieper, Steven C; Schiavilla, R; Schmidt, K E; Wiringa, R B
2014-01-01
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states and transition moments in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars.
Topical review: Monte Carlo methods for phase equilibria of fluids
Panagiotopoulos, Athanassios Z.
1999
Monte Carlo methods to calculate impact probabilities
NASA Astrophysics Data System (ADS)
Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.
2014-09-01
Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. 
With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward infinity, while the Hill sphere method results in a severely underestimated probability. We provide a discussion of the reasons for these differences, and we finally present the results of the MOID method in the form of probability maps for the Earth and Mars on their current orbits. These maps show a relatively flat probability distribution, except for the occurrence of two ridges found at small inclinations and for coinciding projectile/target perihelion distances. Conclusions: Our results verify the standard formulae in the general case, away from the singularities. In fact, severe shortcomings are limited to the immediate vicinity of those extreme orbits. On the other hand, the new Monte Carlo methods can be used without excessive consumption of computer time, and the MOID method avoids the problems associated with the other methods. Appendices are available in electronic form at http://www.aanda.org
An introduction to Monte Carlo methods
NASA Astrophysics Data System (ADS)
Walter, J.-C.; Barkema, G. T.
2015-01-01
Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations in order to access thermodynamical quantities without solving the system analytically or performing an exact enumeration. The main principles of Monte Carlo simulations are ergodicity and detailed balance. The Ising model, a lattice spin system with nearest-neighbor interactions, is well suited to illustrating different kinds of Monte Carlo simulation. It displays a second-order phase transition between a disordered (high-temperature) and an ordered (low-temperature) phase, which leads to different simulation strategies. The Metropolis algorithm and Glauber dynamics are efficient at high temperature; close to the critical temperature, where the spins display long-range correlations, cluster algorithms are more efficient. We introduce the rejection-free (or continuous-time) algorithm and describe in detail an interesting alternative representation of the Ising model using graphs instead of spins, the so-called worm algorithm. We conclude with a discussion of dynamical effects such as thermalization and correlation time.
Density-matrix quantum Monte Carlo method
NASA Astrophysics Data System (ADS)
Blunt, N. S.; Rogers, T. W.; Spencer, J. S.; Foulkes, W. M. C.
2014-06-01
We present a quantum Monte Carlo method capable of sampling the full density matrix of a many-particle system at finite temperature. This allows arbitrary reduced density matrix elements and expectation values of complicated nonlocal observables to be evaluated easily. The method resembles full configuration interaction quantum Monte Carlo but works in the space of many-particle operators instead of the space of many-particle wave functions. One simulation provides the density matrix at all temperatures simultaneously, from T = ∞ to T = 0, allowing the temperature dependence of expectation values to be studied. The direct sampling of the density matrix also allows the calculation of some previously inaccessible entanglement measures. We explain the theory underlying the method, describe the algorithm, and introduce an importance-sampling procedure to improve the stochastic efficiency. To demonstrate the potential of our approach, the energy and staggered magnetization of the isotropic antiferromagnetic Heisenberg model on small lattices, the concurrence of one-dimensional spin rings, and the Rényi S2 entanglement entropy of various sublattices of the 6×6 Heisenberg model are calculated. The nature of the sign problem in the method is also investigated.
Calculating Pi Using the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Williamson, Timothy
2013-11-01
During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the Center for Sustainable Energy at Notre Dame University (RET @ cSEND), working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full-power operation, a reactor may produce 10^21 antineutrinos per second, with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and with how the total antineutrino flux could be obtained from such a small sample, I read about Monte Carlo simulation. Further investigation led me to the Monte Carlo method page of Wikipedia, where I saw an example of approximating pi using this simulation. Other examples applying this method were typically computer simulations or purely mathematical. It is my belief that this method can easily be related to students by the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.
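The rice-sprinkling activity above is the physical version of counting random points that land inside a quarter circle inscribed in a square. A minimal sketch:

```python
import random

def estimate_pi(n=100_000, seed=42):
    """Approximate pi from the fraction of uniformly random points in
    the unit square that fall inside the quarter circle of radius 1:
    hits/n -> pi/4, so pi ~ 4 * hits / n."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * hits / n

print(estimate_pi())  # close to 3.14159 for large n
```

Each grain of rice plays the role of one random point; counting grains inside the arc versus the whole square gives the same ratio.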
Quantum Monte Carlo methods for nuclear physics
Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E.; Wiringa, Robert B.
2014-10-19
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Quantum Monte Carlo methods for nuclear physics
J. Carlson; S. Gandolfi; F. Pederiva; Steven C. Pieper; R. Schiavilla; K. E. Schmidt; R. B. Wiringa
2015-04-29
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Quantum Monte Carlo methods for nuclear physics
NASA Astrophysics Data System (ADS)
Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.
2015-07-01
Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Discrete range clustering using Monte Carlo methods
NASA Technical Reports Server (NTRS)
Chatterji, G. B.; Sridhar, B.
1993-01-01
For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
Methods for Monte Carlo simulations of biomacromolecules
Vitalis, Andreas; Pappu, Rohit V.
2010-01-01
The state of the art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections deal with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, and the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies. PMID:20428473
Improved method for implicit Monte Carlo
Brown, F. B.; Martin, W. R.
2001-01-01
The Implicit Monte Carlo (IMC) method has been used for over 30 years to analyze radiative transfer problems, such as those encountered in stellar atmospheres or inertial confinement fusion. Reference [2] provided an exact error analysis of IMC for 0-D problems and demonstrated that IMC can exhibit substantial errors when timesteps are large. These temporal errors are inherent in the method and are in addition to spatial discretization errors and approximations that address nonlinearities (due to variation of physical constants). In Reference [3], IMC and four other methods were analyzed in detail and compared on both theoretical grounds and the accuracy of numerical tests. As discussed there, two alternative schemes for solving the radiative transfer equations, the Carter-Forest (C-F) method and the Ahrens-Larsen (A-L) method, do not exhibit the errors found in IMC; for 0-D, both of these methods are exact for all time, while for 3-D, A-L is exact for all time and C-F is exact within a timestep. These methods can yield substantially superior results to IMC.
The Monte Carlo Method. Popular Lectures in Mathematics.
ERIC Educational Resources Information Center
Sobol', I. M.
The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…
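The booklet's central idea, solving problems by simulating random quantities, can be illustrated with a minimal example (ours, not Sobol's): approximating a definite integral by averaging a function at uniform random points.

```python
import math
import random

def mc_integral(f, n_samples, seed=0):
    """Estimate the integral of f over [0, 1] by averaging f at uniform random points."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += f(rng.random())
    return total / n_samples

# Example: the integral of exp(-x^2) over [0, 1], whose exact value is about 0.746824.
estimate = mc_integral(lambda x: math.exp(-x * x), 100_000)
```

The statistical error shrinks like 1/sqrt(N), independent of the dimension of the integral, which is why the method scales to problems where quadrature fails.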
Lodwick, Camille J; Spitz, Henry B
2008-06-01
Monte Carlo N-Particle version 4C (MCNP4C) was used to simulate photon interactions associated with in vivo x-ray fluorescence (XRF) measurement of stable lead in bone. Experimental measurements, performed using a cylindrical anthropometric phantom (i.e., surrogate) of the human leg made from tissue substitutes for muscle and bone, revealed a significant difference between the intensity of the observed and predicted coherent backscatter peak. The observed difference was due to the failure of MCNP4C to simulate photon scatter associated with greater than six inverse angstroms of momentum transfer. The MCNP4C source code, photon directory, and photon library were modified to incorporate atomic form factors up to 7.1 inverse angstroms for the high Z elements defined in the K XRF simulation. The intensity of the predicted coherent photon backscatter peak at 88 keV using the modified code increased from 3.50 × 10^-9 to 8.59 × 10^-7 (roughly two orders of magnitude) and compares favorably with the experimental measurements. PMID:18469585
Accelerated Monte Carlo Methods for Coulomb Collisions
NASA Astrophysics Data System (ADS)
Rosin, Mark; Ricketson, Lee; Dimits, Andris; Caflisch, Russel; Cohen, Bruce
2014-03-01
We present a new highly efficient multi-level Monte Carlo (MLMC) simulation algorithm for Coulomb collisions in a plasma. The scheme, initially developed and used successfully for applications in financial mathematics, is applied here to kinetic plasmas for the first time. The method is based on a Langevin treatment of the Landau-Fokker-Planck equation and has a rich history derived from the works of Einstein and Chandrasekhar. The MLMC scheme successfully reduces the computational cost of achieving an RMS error ε in the numerical solution to collisional plasma problems from O(ε^-3) (for the standard state-of-the-art Langevin and binary collision algorithms) to a theoretically optimal O(ε^-2) scaling, when used in conjunction with an underlying Milstein discretization of the Langevin equation. In the test case presented here, the method accelerates simulations by factors of up to 100. We summarize the scheme, present some tricks for improving its efficiency yet further, and discuss the method's range of applicability. Work performed for US DOE by LLNL under contract DE-AC52-07NA27344 and by UCLA under grant DE-FG02-05ER25710.
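The MLMC cost reduction rests on a telescoping sum over discretization levels, with the coarse and fine paths at each level driven by the same Brownian increments so that their difference has small variance. A minimal illustrative sketch for a scalar SDE (geometric Brownian motion with Euler-Maruyama coupling; the paper's Landau-Fokker-Planck Langevin system and Milstein scheme are not reproduced here):

```python
import math
import random

def mlmc_level(rng, level, n_paths, T=1.0, mu=0.05, sigma=0.2, x0=1.0, M=2):
    """Mean correction at one level: fine path with M**level steps minus coarse
    path with M**(level-1) steps, both driven by the same Brownian increments."""
    n_f = M ** level
    dt_f = T / n_f
    total = 0.0
    for _ in range(n_paths):
        xf = x0          # fine path
        xc = x0          # coarse path (unused at level 0)
        dw_c = 0.0       # accumulated increment for the coarse step
        for step in range(n_f):
            dw = rng.gauss(0.0, math.sqrt(dt_f))
            xf += mu * xf * dt_f + sigma * xf * dw
            dw_c += dw
            if (step + 1) % M == 0:  # advance the coarse path every M fine steps
                if level > 0:
                    xc += mu * xc * (M * dt_f) + sigma * xc * dw_c
                dw_c = 0.0
        total += xf - (xc if level > 0 else 0.0)
    return total / n_paths

def mlmc_estimate(levels=5, n0=20_000, seed=1):
    """Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}];
    fewer samples are needed at the expensive fine levels."""
    rng = random.Random(seed)
    return sum(mlmc_level(rng, l, max(n0 // (2 ** l), 100)) for l in range(levels + 1))

# For geometric Brownian motion the exact answer is E[X_T] = x0 * exp(mu * T) ≈ 1.0513.
estimate = mlmc_estimate()
```

Because the level corrections shrink as the timestep does, most samples are spent on cheap coarse paths, which is the source of the O(ε^-2) versus O(ε^-3) cost scaling quoted in the abstract.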
Monte Carlo Methods Final Project Nests and Tootsie Pops: Bayesian
Nests and Tootsie Pops: Bayesian sampling with Monte Carlo; computing the evidence or integrated likelihood Z under a model; the Nested Sampling method and its general framework.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
SECOND-ORDER CROSS TERMS IN MONTE CARLO DIFFERENTIAL OPERATOR PERTURBATION ESTIMATES
J. A. FAVORITE; D. K. PARSONS
2001-03-01
Given some initial, unperturbed problem and a desired perturbation, a second-order accurate Taylor series perturbation estimate for a Monte Carlo tally that is a function of two or more perturbed variables can be obtained using an implementation of the differential operator method that ignores cross terms, such as in MCNP4C. This requires running a base case defined to be halfway between the perturbed and unperturbed states of all of the perturbed variables and doubling the first-order estimate of the effect of perturbing from the ''midpoint'' base case to the desired perturbed case. The difference between such a midpoint perturbation estimate and the standard perturbation estimate (using the endpoints) is a second-order estimate of the sum of the second-order cross terms of the Taylor series expansion. This technique is demonstrated on an analytic fixed-source problem, a Godiva k_eff eigenvalue problem, and a concrete shielding problem. The effect of ignoring the cross terms in all three problems is significant.
An assessment of the MCNP4C weight window
Christopher N. Culbertson; John S. Hendricks
1999-12-01
A new, enhanced weight window generator suite has been developed for MCNP version 4C. The new generator correctly estimates importances in either a user-specified, geometry-independent, orthogonal grid or in MCNP geometric cells. The geometry-independent option alleviates the need to subdivide the MCNP cell geometry for variance reduction purposes. In addition, the new suite corrects several pathologies in the existing MCNP weight window generator. The new generator is applied in a set of five variance reduction problems. The improved generator is compared with the weight window generator applied in MCNP4B. The benefits of the new methodology are highlighted, along with a description of its limitations. The authors also provide recommendations for utilization of the weight window generator.
COMPARISON OF MONTE CARLO METHODS FOR NONLINEAR RADIATION TRANSPORT
W. R. MARTIN; F. B. BROWN
2001-03-01
Five Monte Carlo methods for solving the nonlinear thermal radiation transport equations are compared. The methods include the well-known Implicit Monte Carlo method (IMC) developed by Fleck and Cummings, an alternative to IMC developed by Carter and Forest, an ''exact'' method recently developed by Ahrens and Larsen, and two methods recently proposed by Martin and Brown. The five Monte Carlo methods are developed and applied to the radiation transport equation in a medium assuming local thermodynamic equilibrium. Conservation of energy is derived and used to define appropriate material energy update equations for each of the methods. Details of the Monte Carlo implementation are presented, both for the random walk simulation and the material energy update. Simulation results for all five methods are obtained for two infinite medium test problems and a 1-D test problem, all of which have analytical solutions. Conclusions regarding the relative merits of the various schemes are presented.
RADIATIVE HEAT TRANSFER WITH QUASI-MONTE CARLO METHODS
A. Kersch, W. Moroko and A. Schuster (Siemens), on the application of Quasi-Monte Carlo to radiative heat transfer. Among the problems which can be solved by such a simulation is high-accuracy modeling of the radiative heat transfer in reactors used in manufacturing.
Spectral backward Monte Carlo method for surface infrared image simulation
NASA Astrophysics Data System (ADS)
Sun, Haifeng; Xia, Xinlin; Sun, Chuang; Chen, Xue
2014-11-01
The surface infrared radiation is an important contributor to the infrared image of an airplane. The Monte Carlo method for infrared image calculation is suitable for complex target geometries such as airplanes, and the backward Monte Carlo method is preferable to the forward Monte Carlo method given the usually long distance between targets and the detector. As for non-gray absorbing media, a random number relation is developed for the radiation of a spectral surface. In the backward Monte Carlo method, a single random number that inverts the wavelength (or wave number) distribution may yield different wave numbers for the target surface elements along the track of a photon bundle. By manipulating the densities of photon bundles in arbitrarily small intervals near each wave number, all the wavelengths corresponding to one random number on the target surface elements along the track of the photon bundle are kept the same, preserving the energy balance of the photon bundle. This model, together with the energy partition model, is incorporated into the backward Monte Carlo method to form the spectral backward Monte Carlo method. The method is first used to calculate the infrared images of a simple configuration with two gray spectral bands, and its efficiency is validated by comparing its results with those of the non-spectral backward Monte Carlo method. The validated spectral backward Monte Carlo method is then used to simulate the infrared image of the SDM airplane model with a spectral surface, and the distribution of received infrared radiation flux over the detector pixels is analyzed.
Low variance methods for Monte Carlo simulation of phonon transport
Péraud, Jean-Philippe M. (Jean-Philippe Michel)
2011-01-01
Computational studies in kinetic transport are of great use in micro and nanotechnologies. In this work, we focus on Monte Carlo methods for phonon transport, intended for studies in microscale heat transfer. After reviewing ...
Monte Carlo methods and applications in nuclear physics
Carlson, J.
1990-01-01
Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon interaction, charge and magnetic form factors, the Coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.
Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen
2012-01-01
In this study, the commissioning of a dose calculation algorithm in a currently used treatment planning system (TPS) was performed, and the calculation accuracy of the two methods available in the TPS, i.e., collapsed cone convolution (CCC) and equivalent tissue-air ratio (ETAR), was verified in tissue heterogeneities. For this purpose, an inhomogeneous phantom (IMRT thorax phantom) was used, and dose curves obtained by the TPS were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed using EDR2 radiographic films within the phantom, and the dose difference (DD) between the experimental results and the two calculation methods was obtained. Results indicate a maximum difference of 12% in the lung and 3% in the bone tissue of the phantom between the two methods, and the CCC algorithm shows more accurate depth-dose curves in tissue heterogeneities. Simulation results show accurate dose estimation by MCNP4C in the soft-tissue region of the phantom and better results than the ETAR method in bone and lung tissues. PMID:22973081
Combinatorial nuclear level density by a Monte Carlo method
N. Cerf
1993-09-14
We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning the prediction of the spin and parity distributions of the excited states, and compare our results with those derived from a traditional combinatorial or a statistical method. Such a Monte Carlo technique seems very promising to determine accurate level densities in a large energy range for nuclear reaction calculations.
Study of the Transition Flow Regime using Monte Carlo Methods
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1999-01-01
This NASA Cooperative Agreement presents a study of the Transition Flow Regime Using Monte Carlo Methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.
Monte Carlo Methods for Tempo Tracking and Rhythm Quantization
Cemgil, A T; 10.1613/jair.1121
2011-01-01
We present a probabilistic generative model for timing deviations in expressive music performance. The structure of the proposed model is equivalent to a switching state space model. The switch variables correspond to discrete note locations as in a musical score. The continuous hidden variables denote the tempo. We formulate two well known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization) as filtering and maximum a posteriori (MAP) state estimation tasks. Exact computation of posterior features such as the MAP state is intractable in this model class, so we introduce Monte Carlo methods for integration and optimization. We compare Markov Chain Monte Carlo (MCMC) methods (such as Gibbs sampling, simulated annealing and iterative improvement) and sequential Monte Carlo methods (particle filters). Our simulation results suggest better results with sequential methods. The methods can be applied in both online and batch scenarios such as tempo tracking and transcr...
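The sequential Monte Carlo approach favored by the authors' results can be sketched on a toy state space model. This is a minimal bootstrap particle filter on a one-dimensional random-walk model (illustrative only; the paper's switching state space model for tempo is not reproduced):

```python
import math
import random

def bootstrap_particle_filter(observations, n_particles=2000, q=0.1, r=0.5, seed=3):
    """Minimal bootstrap (sequential Monte Carlo) filter for the toy model
    x_t = x_{t-1} + N(0, q^2),  y_t = x_t + N(0, r^2).
    Returns the filtered posterior mean of x_t at each step."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in observations:
        # propagate every particle through the transition model
        particles = [x + rng.gauss(0.0, q) for x in particles]
        # weight each particle by the observation likelihood
        weights = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
        total = sum(weights)
        means.append(sum(w * x for w, x in zip(weights, particles)) / total)
        # multinomial resampling keeps the weights from degenerating
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return means

# Synthetic data: a slowly drifting state observed in noise.
data_rng = random.Random(0)
truth, x = [], 0.0
for _ in range(50):
    x += data_rng.gauss(0.0, 0.1)
    truth.append(x)
obs = [t + data_rng.gauss(0.0, 0.5) for t in truth]
est = bootstrap_particle_filter(obs)
```

The filtered estimates track the hidden state well below the observation noise level, which is the basic advantage the paper exploits for online tempo tracking.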
Observations on variational and projector Monte Carlo methods
NASA Astrophysics Data System (ADS)
Umrigar, C. J.
2015-10-01
Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.
The Monte Carlo method in quantum field theory
Colin Morningstar
2007-02-20
This series of six lectures is an introduction to using the Monte Carlo method to carry out nonperturbative studies in quantum field theories. Path integrals in quantum field theory are reviewed, and their evaluation by the Monte Carlo method with Markov-chain based importance sampling is presented. Properties of Markov chains are discussed in detail and several proofs are presented, culminating in the fundamental limit theorem for irreducible Markov chains. The example of a real scalar field theory is used to illustrate the Metropolis-Hastings method and to demonstrate the effectiveness of an action-preserving (microcanonical) local updating algorithm in reducing autocorrelations. The goal of these lectures is to provide the beginner with the basic skills needed to start carrying out Monte Carlo studies in quantum field theories, as well as to present the underlying theoretical foundations of the method.
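The Metropolis-Hastings kernel at the heart of these lectures can be sketched on a trivial one-dimensional target (illustrative only; in a lattice field theory application `log_prob` would be replaced by minus the local action):

```python
import math
import random

def metropolis(log_prob, x0, n_steps, step_size, seed=0):
    """Metropolis algorithm with a symmetric uniform proposal:
    accept a move x -> x' with probability min(1, p(x') / p(x))."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.uniform(-step_size, step_size)
        if math.log(rng.random()) < log_prob(proposal) - log_prob(x):
            x = proposal  # accept; otherwise keep the current state
        samples.append(x)
    return samples

# Target: standard normal, log p(x) = -x^2/2 up to an additive constant.
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=200_000, step_size=1.0)
burn = chain[20_000:]  # discard early, equilibrating samples
mean = sum(burn) / len(burn)
var = sum((x - mean) ** 2 for x in burn) / len(burn)
```

Because only the ratio p(x')/p(x) enters the acceptance test, the normalizing constant (the partition function) never needs to be computed, which is what makes the method usable for path integrals.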
Zeinali-Rafsanjani, B.; Mosleh-Shirazi, M. A.; Faghihi, R.; Karbasi, S.; Mosalaei, A.
2015-01-01
To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, half-value layer (HVL), percentage depth doses (PDDs) and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for the majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduced run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for its use in treatment planning and radiobiological modeling studies of superficial therapies including chest-wall irradiation using kilovoltage beams. PMID:26170553
Neves, Lucio P; Silva, Eric A B; Perini, Ana P; Maidana, Nora L; Caldas, Linda V E
2012-07-01
The extrapolation chamber is a parallel-plate ionization chamber that allows variation of its air-cavity volume. In this work, an experimental study and MCNP-4C Monte Carlo code simulations of an ionization chamber designed and constructed at the Calibration Laboratory at IPEN to be used as a secondary dosimetry standard for low-energy X-rays are reported. The results obtained were within the international recommendations, and the simulations showed that the components of the extrapolation chamber may influence its response up to 11.0%. PMID:22182629
Multiple-time-stepping generalized hybrid Monte Carlo methods
Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved superior in sampling efficiency to its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only improve the performance of GSHMC itself but also outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo (GHMC) method. The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC additionally uses a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods; the use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and tested on a water system and a protein system. Results were compared with those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that placing the MTS approach within the hybrid Monte Carlo framework, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
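The hybrid Monte Carlo step underlying all of these variants consists of momentum refreshment, leapfrog integration of Hamiltonian dynamics, and a Metropolis accept/reject test on the change in total energy. A sketch of plain single-time-step HMC on a toy target (not the MTS-GSHMC method itself, which adds force splitting and shadow Hamiltonians):

```python
import math
import random

def leapfrog(x, p, grad_u, eps, n_steps):
    """Leapfrog integration of Hamiltonian dynamics for H = U(x) + p^2/2."""
    p -= 0.5 * eps * grad_u(x)
    for _ in range(n_steps - 1):
        x += eps * p
        p -= eps * grad_u(x)
    x += eps * p
    p -= 0.5 * eps * grad_u(x)
    return x, p

def hmc(u, grad_u, x0, n_samples, eps=0.2, n_steps=10, seed=0):
    """Plain hybrid/Hamiltonian Monte Carlo: resample the momentum, integrate
    a trajectory, then accept or reject on the change in total energy."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)
        h_old = u(x) + 0.5 * p * p
        x_new, p_new = leapfrog(x, p, grad_u, eps, n_steps)
        h_new = u(x_new) + 0.5 * p_new * p_new
        if math.log(rng.random()) < h_old - h_new:
            x = x_new  # accept; the leapfrog energy error sets the rejection rate
        samples.append(x)
    return samples

# Target: standard normal, with potential U(x) = x^2/2 and gradient x.
draws = hmc(lambda x: 0.5 * x * x, lambda x: x, 0.0, 20_000)
mean = sum(draws) / len(draws)
```

Because the leapfrog integrator is symplectic and time-reversible, the energy error stays bounded and acceptance rates remain high even for long trajectories; the shadow-Hamiltonian idea in the abstract pushes this further by filtering on a Hamiltonian the integrator conserves almost exactly.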
A new method to assess Monte Carlo convergence
Forster, R.A.; Booth, T.E.; Pederson, S.P.
1993-05-01
The central limit theorem can be applied to a Monte Carlo solution if the following two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these are satisfied, a confidence interval based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the type of Monte Carlo tally being used. The Monte Carlo practitioner has only a limited number of marginally quantifiable methods that use sampled values to assess the fulfillment of the second requirement; e.g., statistical error reduction proportional to 1/√N with error magnitude guidelines. No consideration is given to what has not yet been sampled. A new method is presented here to assess the convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores, f(x), where the random variable x is the score from one particle history and ∫_{-∞}^{+∞} f(x) dx = 1. Since f(x) is seldom known explicitly, Monte Carlo particle random walks sample f(x) implicitly. Unless there is a largest possible history score, the empirical f(x) must eventually decrease more steeply than 1/x³ for the second moment (∫_{-∞}^{+∞} x² f(x) dx) to exist.
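The 1/√N guideline mentioned above is easy to demonstrate: for any tally with a finite second moment, the estimated relative error should halve when N quadruples. A sketch with synthetic lognormal history scores (illustrative only; this shows the error-reduction check, not the PDF-shape diagnostic the authors propose):

```python
import math
import random

def history_scores(rng, n):
    """Synthetic 'history scores': a lognormal tally with finite mean and
    variance, standing in for the per-history scores sampled from f(x)."""
    return [math.exp(rng.gauss(0.0, 1.0)) for _ in range(n)]

def relative_error(scores):
    """Estimated relative error: sample standard deviation of the mean
    divided by the mean, as commonly reported by Monte Carlo codes."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    return math.sqrt(var / n) / mean

rng = random.Random(42)
r1 = relative_error(history_scores(rng, 10_000))
r2 = relative_error(history_scores(rng, 40_000))
# With a finite second moment the CLT applies, so quadrupling N should
# roughly halve the estimated relative error.
ratio = r1 / r2
```

If the underlying f(x) had an infinite second moment (a tail flatter than 1/x³), this ratio would fail to stabilize near 2, which is precisely the pathology the authors' PDF-shape analysis is designed to detect.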
Imaginary time correlations within Quantum Monte Carlo methods Markus Holzmann
van Tiggelen, Bart
Markus Holzmann, LPMMC, CNRS-UJF. Path integral calculations also give access to imaginary time correlations, which contain important information about the real time evolution of the quantum system in the linear response regime, experimentally
MCMs: Early History and The Basics Monte Carlo Methods
Mascagni, Michael
Monte Carlo Methods: Early History and The Basics. Prof. Michael Mascagni, Department of Computer Science, Department of Mathematics, Department of Scientific Computing: http://www.cs.fsu.edu/mascagni
On the Gap-Tooth direct simulation Monte Carlo method
Armour, Jessica D
2012-01-01
This thesis develops and evaluates Gap-tooth DSMC (GT-DSMC), a direct Monte Carlo simulation procedure for dilute gases combined with the Gap-tooth method of Gear, Li, and Kevrekidis. The latter was proposed as a means of ...
Monte Carlo methods for light propagation in biological tissues.
Vinckenbosch, Laura; Lacaux, Céline; Tindel, Samy; Thomassin, Magalie; Obara, Tiphaine
2015-11-01
Light propagation in turbid media is driven by the equation of radiative transfer. We give a formal probabilistic representation of its solution in the framework of biological tissues and we implement algorithms based on Monte Carlo methods in order to estimate the quantity of light that is received by a homogeneous tissue when emitted by an optic fiber. A variance reduction method is studied and implemented, as well as a Markov chain Monte Carlo method based on the Metropolis-Hastings algorithm. The resulting estimating methods are then compared to the so-called Wang-Prahl (or Wang) method. Finally, the formal representation allows one to derive a non-linear optimization algorithm, close to Levenberg-Marquardt, that is used to estimate the scattering and absorption coefficients of the tissue from measurements. PMID:26362232
Parallel Monte Carlo Synthetic Acceleration methods for discrete transport problems
NASA Astrophysics Data System (ADS)
Slattery, Stuart R.
This work researches and develops Monte Carlo Synthetic Acceleration (MCSA) methods as a new class of solution techniques for discrete neutron transport and fluid flow problems. Monte Carlo Synthetic Acceleration methods use a traditional Monte Carlo process to approximate the solution to the discrete problem as a means of accelerating traditional fixed-point methods. To apply these methods to neutronics and fluid flow, and to determine their feasibility on modern hardware, three complementary research and development exercises are performed. First, solutions to the SPN discretization of the linear Boltzmann neutron transport equation are obtained using MCSA, with a difficult criticality calculation for a light water reactor fuel assembly used as the driving problem. To enable MCSA as a solution technique, a group of modern preconditioning strategies is researched. Compared to conventional Krylov methods, MCSA demonstrated improved iterative performance over GMRES by converging in fewer iterations when using the same preconditioning. Second, solutions to the compressible Navier-Stokes equations were obtained by developing the Forward-Automated Newton-MCSA (FANM) method for nonlinear systems, based on Newton's method. Three difficult fluid benchmark problems in both convective and driven flow regimes were used to drive the research and development of the method. For 8 out of 12 benchmark cases, FANM had better iterative performance than the Newton-Krylov method, converging the nonlinear residual in fewer linear solver iterations with the same preconditioning. Third, a new domain-decomposed algorithm to parallelize MCSA, aimed at leveraging leadership-class computing facilities, was developed by utilizing parallel strategies from the radiation transport community. The new algorithm utilizes the Multiple-Set Overlapping-Domain strategy in an attempt to reduce parallel overhead and add a natural element of replication to the algorithm.
It was found that for the current implementation of MCSA, both weak and strong scaling improved on that observed for production implementations of Krylov methods.
Comparison of deterministic and Monte Carlo methods in shielding design.
Oliveira, A D; Oliveira, C
2005-01-01
In shielding calculation, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor that is extrapolated from high to low energies or used with unknown geometrical conditions, which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shields have been defined, allowing comparison between the capability of both Monte Carlo and deterministic methods in day-by-day shielding calculation using sensitivity analysis of significant parameters, such as energy and geometrical conditions. PMID:16381723
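The deterministic point-kernel calculation being benchmarked has the form phi = B · S · exp(-mu · t) / (4πr²), where the build-up factor B restores the scattered component that pure exponential attenuation ignores. A sketch with illustrative numbers (not taken from the paper):

```python
import math

def point_kernel_flux(source_strength, distance_cm, mu_cm1, thickness_cm, buildup):
    """Point-kernel flux behind a slab shield with a build-up correction:
    phi = B * S * exp(-mu * t) / (4 * pi * r^2).
    The build-up factor B >= 1 accounts for photons scattered back toward
    the detector, which the exponential attenuation term alone omits."""
    uncollided = (source_strength * math.exp(-mu_cm1 * thickness_cm)
                  / (4.0 * math.pi * distance_cm ** 2))
    return buildup * uncollided

# Illustrative numbers only: a 1e9 photon/s point source, a detector 100 cm
# away behind 5 cm of a material with mu = 0.2 cm^-1 and build-up factor 2.5.
phi = point_kernel_flux(1e9, 100.0, 0.2, 5.0, 2.5)
```

The errors the abstract describes enter through B: tabulated build-up factors extrapolated outside their validated energy range or geometry can be badly wrong, which is exactly what a Monte Carlo comparison exposes.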
Monte Carlo N-particle simulation of neutron-based sterilisation of anthrax contamination
Liu, B; Xu, J; Liu, T; Ouyang, X
2012-01-01
Objective To simulate the neutron-based sterilisation of anthrax contamination by Monte Carlo N-particle (MCNP) 4C code. Methods Neutrons are elementary particles that have no charge. They are 20 times more effective than electrons or γ-rays in killing anthrax spores on surfaces and inside closed containers. Neutrons emitted from a 252Cf neutron source are in the 100 keV to 2 MeV energy range. A 2.5 MeV D–D neutron generator can create neutrons at up to 10^13 n s^-1 with current technology. All these enable an effective and low-cost method of killing anthrax spores. Results There is no effect on neutron energy deposition on the anthrax sample when using a reflector that is thicker than its saturation thickness. Among all three reflecting materials tested in the MCNP simulation, paraffin is the best because it has the thinnest saturation thickness and is easy to machine. The MCNP radiation dose and fluence simulation calculation also showed that the MCNP-simulated neutron fluence that is needed to kill the anthrax spores agrees with previous analytical estimations very well. Conclusion The MCNP simulation indicates that a 10 min neutron irradiation from a 0.5 g 252Cf neutron source or a 1 min neutron irradiation from a 2.5 MeV D–D neutron generator may kill all anthrax spores in a sample. This is a promising result because a 2.5 MeV D–D neutron generator output >10^13 n s^-1 should be attainable in the near future. This indicates that we could use a D–D neutron generator to sterilise anthrax contamination within several minutes. PMID:22573293
Baker, R.S.; Filippone, W.F. (Dept. of Nuclear and Energy Engineering); Alcouffe, R.E.
1991-01-01
The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S{sub N}) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S{sub N} regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The fully coupled Monte Carlo/S{sub N} technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S{sub N} calculation is to be performed. The Monte Carlo and S{sub N} regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and group sources. The hybrid method provides a new way of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S{sub N} is well suited for by itself. The fully coupled Monte Carlo/S{sub N} method has been implemented in the S{sub N} code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and group sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S{sub N} code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating the S{sub N} calculations. The Monte Carlo routines have been successfully vectorized, with approximately a factor of five increase in speed over the nonvectorized version. The hybrid method is capable of solving forward, inhomogeneous source problems in X-Y and R-Z geometries. This capability now includes multigroup problems involving upscatter and fission in non-highly multiplying systems. 8 refs., 8 figs., 1 tab.
Stabilized multilevel Monte Carlo method for stiff stochastic differential equations
Abdulle, Assyr Blumenthal, Adrian
2013-10-15
A multilevel Monte Carlo (MLMC) method for mean square stable stochastic differential equations with multiple scales is proposed. For such problems, which we call stiff, the performance of MLMC methods based on classical explicit schemes deteriorates because the time step restriction needed to resolve the fastest scales prevents exploiting all the levels of the MLMC approach. We show that by switching to explicit stabilized stochastic methods and balancing the stabilization procedure simultaneously with the hierarchical sampling strategy of MLMC methods, the computational cost for stiff systems is significantly reduced, while keeping the computational algorithm fully explicit and easy to implement. Numerical experiments on linear and nonlinear stochastic differential equations and on a stochastic partial differential equation illustrate the performance of the stabilized MLMC method and corroborate our theoretical findings.
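For orientation, a minimal MLMC estimator for E[X_T] of a geometric Brownian motion with coupled Euler-Maruyama paths is sketched below; the parameters are illustrative and the explicit stabilized integrators of the paper are not reproduced here.

```python
import math
import random

def mlmc_gbm(a, b, T, x0, levels, n_samples, seed=0):
    """Multilevel Monte Carlo estimate of E[X_T] for dX = a X dt + b X dW.
    Level l uses 2^l Euler-Maruyama steps; each correction term averages
    (fine - coarse) over paths driven by the same Brownian increments,
    so the level sums telescope to the finest-level expectation."""
    rng = random.Random(seed)
    est = 0.0
    for l in range(levels + 1):
        nf = 2 ** l                      # fine steps on this level
        dt = T / nf
        acc = 0.0
        for _ in range(n_samples):
            xf = xc = x0
            dw_pair = 0.0
            for step in range(nf):
                dw = rng.gauss(0.0, math.sqrt(dt))
                xf += a * xf * dt + b * xf * dw      # fine path
                dw_pair += dw
                if l > 0 and step % 2 == 1:          # coarse path, step 2*dt
                    xc += a * xc * (2 * dt) + b * xc * dw_pair
                    dw_pair = 0.0
            acc += xf if l == 0 else xf - xc
        est += acc / n_samples
    return est
```

Each level l uses 2^l time steps; because fine and coarse paths on a level share the same Brownian increments, the correction terms have small variance and most samples can be spent on the cheap coarse levels. For a = 0.05, b = 0.2, T = 1, the estimate approaches the exact mean exp(0.05).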
Improved criticality convergence via a modified Monte Carlo iteration method
Booth, Thomas E; Gubernatis, James E
2009-01-01
Nuclear criticality calculations with Monte Carlo codes are normally done using a power iteration method to obtain the dominant eigenfunction and eigenvalue. In the last few years it has been shown that the power iteration method can be modified to obtain the first two eigenfunctions. This modified power iteration method directly subtracts out the second eigenfunction and thus only powers out the third and higher eigenfunctions. The result is that the convergence rate to the dominant eigenfunction becomes |k{sub 3}|/k{sub 1} instead of |k{sub 2}|/k{sub 1}. One difficulty is that the second eigenfunction contains particles of both positive and negative weights that must be combined somehow to maintain the second eigenfunction. Summing negative and positive weights can be done using point detector mechanics, but this sometimes can be quite slow. We show that an approximate cancellation scheme is sufficient to accelerate the convergence to the dominant eigenfunction. A second difficulty is that for some problems the Monte Carlo implementation of the modified power method has some stability problems. We also show that a simple method deals with this in an effective, albeit ad hoc, manner.
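A deterministic matrix analogue makes the convergence claim concrete: plain power iteration converges at rate |k2/k1|, while subtracting out the second eigenpair (here via Wielandt deflation with a known eigenpair, which stands in for the paper's Monte Carlo weight-cancellation machinery) leaves only the third and higher modes to power out.

```python
def power_iteration(A, v, iters):
    """Plain power iteration; the eigenvalue estimate converges
    to the dominant eigenvalue at rate |k2/k1|."""
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]
        lam = max(w, key=abs)            # signed dominant component
        v = [wi / lam for wi in w]
    return v, lam

A = [[4.0, 1.0], [2.0, 3.0]]             # eigenvalues k1 = 5, k2 = 2
# Wielandt deflation with the known second eigenpair (k2 = 2,
# right eigenvector (1, -2), left eigenvector (1, -1)):
# B = A - k2 * v2 w2^T / (w2 . v2) has eigenvalues 5 and 0.
B = [[10.0 / 3.0, 5.0 / 3.0], [10.0 / 3.0, 5.0 / 3.0]]
_, k_plain = power_iteration(A, [1.0, 0.0], 3)
_, k_defl = power_iteration(B, [1.0, 0.0], 3)
```

After three iterations the deflated matrix has already converged to k1 = 5, while plain power iteration is still off by roughly (k2/k1)^3.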
A simple eigenfunction convergence acceleration method for Monte Carlo
Booth, Thomas E
2010-11-18
Monte Carlo transport codes typically use a power iteration method to obtain the fundamental eigenfunction. The standard convergence rate for the power iteration method is the ratio of the first two eigenvalues, that is, k{sub 2}/k{sub 1}. Modifications to the power method have accelerated the convergence by explicitly calculating the subdominant eigenfunctions as well as the fundamental. Calculating the subdominant eigenfunctions requires using particles of negative and positive weights and appropriately canceling the negative and positive weight particles. Incorporating both negative weights and a {+-} weight cancellation requires a significant change to current transport codes. This paper presents an alternative convergence acceleration method that does not require modifying the transport codes to deal with the problems associated with tracking and cancelling particles of {+-} weights. Instead, only positive weights are used in the acceleration method.
A general method for debiasing a Monte Carlo estimator
McLeish, Don
2010-01-01
Consider a process, stochastic or deterministic, obtained by using a numerical integration scheme, or from Monte Carlo methods involving an approximation to an integral, or a Newton-Raphson iteration to approximate the root of an equation. We will assume that we can sample from the distribution of the process from time 0 to finite time n. We propose a scheme for unbiased estimation of the limiting value of the process, together with estimates of standard error, and apply this to examples including numerical integrals, root-finding and option pricing in a Heston stochastic volatility model. This results in unbiased estimators in place of biased ones in many potential applications.
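The flavour of such a scheme can be sketched with a randomized-truncation ("coupled sum") estimator: draw a geometric truncation level and reweight each increment by its inverse inclusion probability, so the expectation telescopes to the limit. The geometric law and the example sequence (partial sums of the series for e) are illustrative choices, not the paper's:

```python
import math
import random

def debiased_estimate(A, p=0.5, n_reps=100000, seed=7):
    """Unbiased estimator of lim_n A(n): survive to level n with
    probability p**n and score (A(n) - A(n-1)) / p**n, so each
    increment's inverse-probability weight makes the expectation
    telescope to the limit of the (biased) sequence A."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_reps):
        est, n = A(0), 1
        while rng.random() < p:          # geometric truncation level
            est += (A(n) - A(n - 1)) / p ** n
            n += 1
        total += est
    return total / n_reps

def A(n):
    """Biased approximation to e: the first n + 2 terms of its series."""
    return sum(1.0 / math.factorial(k) for k in range(n + 2))
```

Every A(n) is biased low, yet the randomized estimator averages to the limit e, with a computable standard error; this is the replace-bias-with-variance trade the abstract describes.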
Analysis of real-time networks with monte carlo methods
NASA Astrophysics Data System (ADS)
Mauclair, C.; Durrieu, G.
2013-12-01
Communication networks in embedded systems are increasingly large and complex. A better understanding of the dynamics of these networks is necessary to use them to their best advantage and to lower costs. Today's tools are able to compute upper bounds on the end-to-end delays that a packet sent through the network could suffer. However, in the case of asynchronous networks, those worst end-to-end delay (WEED) cases are rarely observed in practice or through simulations, because the situations that lead to worst-case scenarios are rare. A novel approach based on Monte Carlo methods is suggested to study the effects of asynchrony on performance.
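The gap between the analytic worst case and observed delays can be seen in a toy Monte Carlo model where each hop adds an independent, uniformly distributed queueing delay, a crude stand-in for unsynchronised frame offsets; the hop delays below are arbitrary:

```python
import random

def simulate_delays(hop_delays, n_frames=100000, seed=3):
    """Monte Carlo sketch of asynchronous end-to-end delay: each hop
    adds a queueing delay drawn uniformly from [0, d_max], modelling
    random offsets between unsynchronised cycles.  Returns the mean
    observed delay, the largest observed delay, and the analytic
    worst case (the sum of the per-hop maxima)."""
    rng = random.Random(seed)
    worst = sum(hop_delays)
    samples = [sum(rng.uniform(0.0, d) for d in hop_delays)
               for _ in range(n_frames)]
    return sum(samples) / n_frames, max(samples), worst
```

With hops bounded by 1, 2 and 3 time units the analytic worst case is 6, yet across 100 000 simulated frames the mean delay sits near 3 and the bound itself is essentially never reached, which is the phenomenon the abstract describes.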
Application of Monte Carlo methods in tomotherapy and radiation biophysics
NASA Astrophysics Data System (ADS)
Hsiao, Ya-Yun
Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The MC-calculated percentage depth dose and beam profiles computed using the source model agree within 2% with measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. This initial electron spectrum is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University).
Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published experimental and theoretical studies for 60Co gamma-rays and low-energy x-rays. The reported studies provide new information about the potential biological consequences of diagnostic x-rays and selected gamma-emitting radioisotopes used in brachytherapy for the treatment of cancer. The proposed methodology is computationally efficient and may also be useful in proton therapy, space applications or internal dosimetry.
Markov-Chain Monte Carlo Methods for Simulations of Biomolecules
Bernd A. Berg
2007-09-04
The computer revolution has been driven by a sustained increase of computational speed of approximately one order of magnitude (a factor of ten) every five years since about 1950. In the natural sciences this has led to a continuous increase in the importance of computer simulations. Major enabling techniques are Markov Chain Monte Carlo (MCMC) and Molecular Dynamics (MD) simulations. This article deals with the MCMC approach. First, basic simulation techniques, as well as methods for their statistical analysis, are reviewed. Afterwards the focus is on generalized ensembles and biased updating, two advanced techniques which are of relevance for simulations of biomolecules, or are expected to become relevant in that respect. In particular we consider the multicanonical ensemble and the replica exchange method (also known as parallel tempering or the method of multiple Markov chains).
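The basic update underlying all the techniques reviewed is the Metropolis step; a minimal one-dimensional sketch for a standard normal target (step size and chain length are illustrative):

```python
import math
import random

def metropolis(logp, x0, step, n_steps, seed=0):
    """Metropolis sampling: propose a symmetric random-walk move and
    accept it with probability min(1, p(x') / p(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        xp = x + rng.uniform(-step, step)
        dl = logp(xp) - logp(x)
        if dl >= 0 or rng.random() < math.exp(dl):
            x = xp
        samples.append(x)        # rejected moves repeat the current state
    return samples

# Standard normal target: log p(x) = -x^2/2 up to a constant.
samples = metropolis(lambda t: -0.5 * t * t, 0.0, 2.0, 200000)
```

Generalized ensembles such as the multicanonical method keep this accept/reject core but replace the target weight, and replica exchange runs several such chains at different temperatures with occasional configuration swaps.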
A Monte Carlo Method for Calculating Initiation Probability
Greenman, G M; Procassini, R J; Clouse, C J
2007-03-05
A Monte Carlo method for calculating the probability of initiating a self-sustaining neutron chain reaction has been developed. In contrast to deterministic codes which solve a non-linear, adjoint form of the Boltzmann equation to calculate initiation probability, this new method solves the forward (standard) form of the equation using a modified source calculation technique. Results from this new method are compared with results obtained from several deterministic codes for a suite of historical test problems. The level of agreement between these code predictions is quite good, considering the use of different numerical techniques and nuclear data. A set of modifications to the historical test problems has also been developed which reduces the impact of neutron source ambiguities on the calculated probabilities.
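The forward approach can be sketched as a branching-process tally. In the illustrative model below (not the paper's modified source technique), each neutron induces fission with probability p, releasing nu = 2 neutrons, and is otherwise lost; extinction theory then gives an initiation probability of (2p - 1)/p for p > 1/2, a useful check on the tally.

```python
import random

def initiation_probability(p_fission, nu=2, n_chains=5000,
                           pop_cap=200, seed=5):
    """Forward Monte Carlo estimate of the probability that a single
    source neutron starts a self-sustaining chain.  Each neutron
    induces fission (releasing nu neutrons) with probability
    p_fission, else it is lost.  A chain reaching pop_cap neutrons is
    counted as initiated, since its extinction probability is then
    negligible."""
    rng = random.Random(seed)
    initiated = 0
    for _ in range(n_chains):
        pop = 1
        while 0 < pop < pop_cap:
            # next generation: each neutron fissions independently
            pop = sum(nu for _ in range(pop) if rng.random() < p_fission)
        initiated += pop >= pop_cap
    return initiated / n_chains
```

For p = 0.6 the analytic initiation probability is (2*0.6 - 1)/0.6 = 1/3, and the forward tally reproduces it; the paper's contribution is obtaining such probabilities from the standard forward transport machinery rather than an adjoint Boltzmann solve.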
ITER Neutronics Modeling Using Hybrid Monte Carlo/Deterministic and CAD-Based Monte Carlo Methods
Ibrahim, A.; Mosher, Scott W; Evans, Thomas M; Peplow, Douglas E.; Sawan, M.; Wilson, P.; Wagner, John C; Heltemes, Thad
2011-01-01
The immense size and complex geometry of the ITER experimental fusion reactor require the development of special techniques that can accurately and efficiently perform neutronics simulations with minimal human effort. This paper shows the effect of the hybrid Monte Carlo (MC)/deterministic techniques - Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) - in enhancing the efficiency of the neutronics modeling of ITER and demonstrates the applicability of coupling these methods with computer-aided-design-based MC. Three quantities were calculated in this analysis: the total nuclear heating in the inboard leg of the toroidal field coils (TFCs), the prompt dose outside the biological shield, and the total neutron and gamma fluxes over a mesh tally covering the entire reactor. The use of FW-CADIS in estimating the nuclear heating in the inboard TFCs resulted in a factor of ~ 275 increase in the MC figure of merit (FOM) compared with analog MC and a factor of ~ 9 compared with the traditional methods of variance reduction. By providing a factor of ~ 21 000 increase in the MC FOM, the radiation dose calculation showed how the CADIS method can be effectively used in the simulation of problems that are practically impossible using analog MC. The total flux calculation demonstrated the ability of FW-CADIS to simultaneously enhance the MC statistical precision throughout the entire ITER geometry. Collectively, these calculations demonstrate the ability of the hybrid techniques to accurately model very challenging shielding problems in reasonable execution times.
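The figure-of-merit gains reported for CADIS-style biasing can be illustrated in one dimension: estimating transmission through a purely absorbing slab of ten mean free paths, analog sampling scores almost no histories, while an exponentially biased path-length distribution (an illustrative stand-in for the adjoint-based weighting) scores most of them at small weight.

```python
import math
import random

def transmission(mu, t, n, mu_bias=None, seed=9):
    """Probability that a photon crosses a purely absorbing slab of
    thickness t.  Analog MC samples free paths from Exp(mu); importance
    sampling instead samples from Exp(mu_bias) with mu_bias < mu and
    multiplies each crossing by the likelihood ratio.  Returns the
    estimate and the per-history sample variance."""
    rng = random.Random(seed)
    mb = mu_bias if mu_bias is not None else mu
    total = total2 = 0.0
    for _ in range(n):
        x = -math.log(1.0 - rng.random()) / mb   # biased free path
        if x > t:
            w = (mu / mb) * math.exp(-(mu - mb) * x)  # likelihood ratio
            total += w
            total2 += w * w
    mean = total / n
    return mean, total2 / n - mean * mean
```

Here exp(-10) is about 4.5e-5, so an analog run of 10^5 histories transmits only a handful of particles, while the biased run (mu_bias = 0.1) pushes more than a third of them through the slab and recovers the answer with far smaller variance, hence a much larger FOM.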
Kasesaz, Y; Khalafi, H; Rahmani, F
2013-12-01
Optimization of the Beam Shaping Assembly (BSA) has been performed using the MCNP4C Monte Carlo code to shape the 2.45 MeV neutrons that are produced in the D-D neutron generator. The optimal design of the BSA has been chosen by considering in-air figures of merit (FOM); it consists of 70 cm Fluental as a moderator, 30 cm Pb as a reflector, 2 mm 6Li as a thermal neutron filter and 2 mm Pb as a gamma filter. The neutron beam can be evaluated by in-phantom parameters, from which the therapeutic gain can be derived. Direct evaluation of both sets of FOMs (in-air and in-phantom) is very time consuming. In this paper a Response Matrix (RM) method has been suggested to reduce the computing time. This method is based on considering the neutron spectrum at the beam exit and calculating the contribution of the various dose components in the phantom to construct the Response Matrix. Results show good agreement between direct calculation and the RM method. PMID:23954283
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used that runs multiple MCMC chains with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling with four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
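For a conjugate toy problem the thermodynamic (path-sampling) identity log Z = integral over beta in [0, 1] of E_beta[log L] can be checked end to end. In the sketch below the prior is N(0, 1) and the likelihood is N(theta, 1) for a single datum, so every power posterior is Gaussian and is sampled exactly; a real application would run an MCMC chain at each heating coefficient. All settings are illustrative:

```python
import math
import random

def log_like(theta, y):
    """Log N(y; theta, 1) likelihood of a single datum."""
    return -0.5 * math.log(2 * math.pi) - 0.5 * (y - theta) ** 2

def thermo_log_evidence(y, n_temps=40, n_samp=4000, seed=11):
    """Thermodynamic estimate of the log marginal likelihood: average
    log L under each power posterior prior(theta) * L(theta)^beta,
    then integrate over the heating coefficient beta by trapezoid."""
    rng = random.Random(seed)
    betas = [i / (n_temps - 1) for i in range(n_temps)]
    means = []
    for beta in betas:
        prec = 1.0 + beta              # power-posterior precision
        mu = beta * y / prec           # power-posterior mean
        sd = 1.0 / math.sqrt(prec)
        acc = sum(log_like(mu + sd * rng.gauss(0.0, 1.0), y)
                  for _ in range(n_samp))
        means.append(acc / n_samp)
    return sum(0.5 * (means[i] + means[i + 1]) * (betas[i + 1] - betas[i])
               for i in range(n_temps - 1))
```

The exact answer is log N(y; 0, 2); for y = 1 this is about -1.516, and the trapezoidal estimate over the temperature ladder reproduces it closely, whereas a geometric-mean estimate from posterior samples alone converges far more slowly.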
Heavy deformed nuclei in the shell model Monte Carlo method
Alhassid, Y; Nakada, H
2007-01-01
We extend the shell model Monte Carlo approach to heavy deformed nuclei using a new proton-neutron formalism. The low excitation energies of such nuclei necessitate calculations at low temperatures for which a stabilization method is implemented in the canonical ensemble. We apply the method to study a well deformed rare-earth nucleus, 162Dy. The single-particle model space includes the 50-82 shell plus 1f_{7/2} orbital for protons and the 82-126 shell plus 0h_{11/2}, 1g_{9/2} orbitals for neutrons. We show that the spherical shell model reproduces well the rotational character of 162Dy within this model space. We also calculate the level density of 162Dy and find it to be in excellent agreement with the experimental level density, which we extract from several experiments.
Frozen core method in auxiliary-field quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Purwanto, Wirawan; Zhang, Shiwei; Krakauer, Henry
2012-02-01
We present the implementation of the frozen-core approach in the phaseless auxiliary-field quantum Monte Carlo method (AFQMC). Since AFQMC random walks take place in a many-electron Hilbert space spanned by a chosen one-particle basis, this approach can be achieved without introducing additional approximations, such as pseudopotentials. In parallel to many-body quantum chemistry methods, tightly-bound inner electrons occupy frozen canonical orbitals, which are determined from a lower level of theory, e.g. Hartree-Fock or CASSCF. This provides significant computational savings over fully correlated all-electron treatments, while retaining excellent transferability and accuracy. Results for several systems will be presented. This includes the notoriously difficult Cr2 molecule, where comparisons can be made with near-exact results in small basis sets, as well as an initial implementation in periodic systems.
Application of Monte Carlo method to laser coding detection
NASA Astrophysics Data System (ADS)
Wang, Wei; Li, Wei; Song, Xiao-tong; Yu, Tao
2015-10-01
Based on the requirements of engineering design and of improving the detection ability of laser detectors, the Monte Carlo method is adopted to analyze and compare the statistical distributions of the detection probability, the false-alarm probability, the signal-to-noise ratio and the detection-circuit threshold for a laser detector using pseudo-random code pulse and equidistant pulse systems. The simulation results show that the signal-to-noise ratio for the pseudo-random code pulse system is about three times that of the equidistant pulse system, the detection threshold for the pseudo-random code pulse system is about twice that of the equidistant pulse system, and the pseudo-random code pulse system surpasses the equidistant pulse system in the consistency of the simulation curves. It can be concluded that a laser detector using a pseudo-random code pulse system performs better than one using an equidistant pulse system.
NASA Astrophysics Data System (ADS)
Lodwick, Camille J.
This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescent (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV 109Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, and a Modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that Modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing the overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in the calcium content (by mass) of the anthropometric leg phantom's cortical bone was shown to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001).
Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate lead content of a human leg up to 4%. The patellar bone structure in which the fluorescent photons originate was found to vary dramatically with measurement angle. The relative contribution of lead signal from the patella declined from 65% to 27% when rotated 30°. However, rotation of the source-detector about the patella from 0 to 45° demonstrated no significant effect on the net K XRF response at the knee.
A wave-function Monte Carlo method for simulating conditional master equations
Kurt Jacobs
2010-01-21
Wave-function Monte Carlo methods are an important tool for simulating quantum systems, but the standard method cannot be used to simulate decoherence in continuously measured systems. Here we present a new Monte Carlo method for such systems. This was used to perform the simulations of a continuously measured nano-resonator in [Phys. Rev. Lett. 102, 057208 (2009)].
Underwater Optical Wireless Channel Modeling Using Monte-Carlo Method
NASA Astrophysics Data System (ADS)
Saini, P. Sri; Prince, Shanthi
2011-10-01
At present, there is a lot of interest in the functioning of the marine environment. Unmanned or Autonomous Underwater Vehicles (UUVs or AUVs) are used in the exploration of underwater resources, pollution monitoring, disaster prevention, etc. Underwater, where radio waves do not propagate, acoustic communication is being used. However, underwater communication is moving towards optical communication, which offers higher bandwidth than acoustic communication but a comparatively shorter range. Underwater Optical Wireless Communication (OWC) is mainly affected by the absorption and scattering of the optical signal. In coastal waters, both inherent and apparent optical properties (IOPs and AOPs) are influenced by a wide array of physical, biological and chemical processes leading to optical variability. Scattering has two effects: attenuation of the signal and Inter-Symbol Interference (ISI); the Inter-Symbol Interference is ignored in the present paper. Therefore, in order to have an efficient underwater OWC link it is necessary to model the channel efficiently. In this paper, the underwater optical channel is modeled using the Monte-Carlo method, which provides the most general and most flexible technique for numerically solving the equations of radiative transfer. The attenuation coefficient of the light signal is studied as a function of the absorption (a) and scattering (b) coefficients. It has been observed that for pure sea water and for low-chlorophyll conditions the blue wavelength is absorbed least, whereas in a chlorophyll-rich environment the red wavelength signal is absorbed less than the blue and green wavelengths.
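The ballistic (unscattered) component of such a channel model must reproduce Beer-Lambert attenuation exp(-(a + b)d), which makes a convenient check on the photon-transport core. A minimal sketch with illustrative coefficients (not measured water properties):

```python
import math
import random

def unscattered_fraction(a, b, d, n=50000, seed=13):
    """Monte Carlo estimate of the fraction of photons reaching range d
    (metres) without any interaction, for absorption coefficient a and
    scattering coefficient b (1/m).  Free paths are drawn from Exp(c)
    with beam attenuation c = a + b, so the estimate should match the
    Beer-Lambert value exp(-c * d)."""
    rng = random.Random(seed)
    c = a + b
    hits = sum(1 for _ in range(n)
               if -math.log(1.0 - rng.random()) / c > d)
    return hits / n
```

A full channel model would continue tracking scattered photons with new directions drawn from a phase function (and tallying arrival times for ISI); here only the first interaction is needed for the check.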
The simulation of the recharging method of active medical implant based on Monte Carlo method
NASA Astrophysics Data System (ADS)
Kong, Xianyue; Song, Yong; Hao, Qun; Cao, Jie; Zhang, Xiaoyu; Dai, Pantao; Li, Wansong
2014-11-01
The recharging of Active Medical Implants (AMIs) is an important issue for their future application. In this paper, a method for recharging an active medical implant using a wearable incoherent light source is proposed. Firstly, models of the recharging method are developed. Secondly, the recharging processes of the proposed method are simulated using the Monte Carlo (MC) method. Finally, some important conclusions are reached. The results indicate that the proposed method should lead to a convenient, safe and low-cost recharging method for AMIs, which will promote the application of this kind of implantable device.
MARKOV CHAIN MONTE CARLO POSTERIOR SAMPLING WITH THE HAMILTONIAN METHOD
K. HANSON
2001-02-01
The Markov Chain Monte Carlo technique provides a means for drawing random samples from a target probability density function (pdf). MCMC allows one to assess the uncertainties in a Bayesian analysis described by a numerically calculated posterior distribution. This paper describes the Hamiltonian MCMC technique in which a momentum variable is introduced for each parameter of the target pdf. In analogy to a physical system, a Hamiltonian H is defined as a kinetic energy involving the momenta plus a potential energy {var_phi}, where {var_phi} is minus the logarithm of the target pdf. Hamiltonian dynamics allows one to move along trajectories of constant H, taking large jumps in the parameter space with relatively few evaluations of {var_phi} and its gradient. The Hamiltonian algorithm alternates between picking a new momentum vector and following such trajectories. The efficiency of the Hamiltonian method for multidimensional isotropic Gaussian pdfs is shown to remain constant at around 7% for up to several hundred dimensions. The Hamiltonian method handles correlations among the variables much better than the standard Metropolis algorithm. A new test, based on the gradient of {var_phi}, is proposed to measure the convergence of the MCMC sequence.
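For a one-dimensional target, the trajectory mechanics described above reduce to the standard leapfrog-plus-Metropolis loop; this sketch samples a standard normal (phi(x) = x^2/2), with illustrative step size and trajectory length:

```python
import math
import random

def hmc_sample(grad_phi, phi, x0, eps, n_leap, n_samples, seed=17):
    """Hamiltonian MCMC: draw a fresh momentum, follow a leapfrog
    trajectory of (approximately) constant H = phi + p^2/2, and accept
    with a Metropolis test on the change in H."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        p0 = rng.gauss(0.0, 1.0)
        xn, pn = x, p0
        pn -= 0.5 * eps * grad_phi(xn)        # initial half step in momentum
        for _ in range(n_leap - 1):
            xn += eps * pn                    # full step in position
            pn -= eps * grad_phi(xn)          # full step in momentum
        xn += eps * pn
        pn -= 0.5 * eps * grad_phi(xn)        # final half step in momentum
        dH = (phi(xn) + 0.5 * pn * pn) - (phi(x) + 0.5 * p0 * p0)
        if dH <= 0 or rng.random() < math.exp(-dH):
            x = xn
        samples.append(x)
    return samples

# Standard normal: phi(x) = x^2/2, grad phi(x) = x.
draws = hmc_sample(lambda x: x, lambda x: 0.5 * x * x, 0.0, 0.2, 10, 20000)
```

Because leapfrog conserves H to second order in the step size, the acceptance rate stays near one even for long jumps through parameter space, which is the efficiency advantage over the plain Metropolis random walk.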
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
Lecture slides introducing Monte Carlo methods for uncertainty quantification, covering PDEs with uncertain inputs, applications with complex geometries (e.g. an aircraft in landing configuration), and the use of data to derive an improved a posteriori distribution.
Quantum Monte Carlo methods and lithium cluster properties
Owen, R.K.
1990-12-01
Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density-dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and the bias is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively; in good agreement with experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.
Medical Imaging Image Quality Assessment with Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Kalyvas, N. I.; Martini, Niki; Koukou, Vaia; Valais, I. G.; Kandarakis, I. S.
2015-09-01
The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model, developed with the GATE MC package; reconstructed images were obtained with the STIR software for tomographic image reconstruction, using cluster computing. The PET scanner simulated in this study was the GE DiscoveryST. A plane source, consisting of a TLC plate, was simulated as a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the Modulation Transfer Function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE)-OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed using various subsets (3 to 21) and iterations (1 to 20), as well as various beta (hyper)parameter values. MTF values were found to increase up to the 12th iteration, remaining almost constant thereafter. MTF improves with lower beta values. The simulated PET evaluation method based on the TLC plane source can also be useful in research for the further development of PET and SPECT scanners through GATE simulations.
Treatment planning aspects and Monte Carlo methods in proton therapy
NASA Astrophysics Data System (ADS)
Fix, Michael K.; Manser, Peter
2015-05-01
Over the last few years, interest in proton radiotherapy has been increasing rapidly. Protons provide superior physical properties compared with the photons used in conventional radiotherapy. These properties result in depth-dose curves with a large dose peak at the end of the proton track, and the finite proton range allows sparing of the distally located healthy tissue. These properties offer increased flexibility in proton radiotherapy, but also increase the demand for accurate dose estimations. To carry out accurate dose calculations, an accurate and detailed characterization of the physical proton beam exiting the treatment head is first necessary, for both currently available delivery techniques: scattered and scanned proton beams. Since Monte Carlo (MC) methods follow the particle track, simulating the interactions from first principles, this technique is perfectly suited to accurately model the treatment head. Nevertheless, careful validation of these MC models is necessary. While pencil beam algorithms provide the advantage of fast dose computations, they are limited in accuracy. In contrast, MC dose calculation algorithms overcome these limitations and, due to recent improvements in efficiency, are expected to improve the accuracy of the calculated dose distributions and to be introduced into clinical routine in the near future.
Monte Carlo methods for light propagation in biological tissues
2014-01-08
Liu, Chang
2009-05-15
There is much unknown about the underlying reservoir model, which has many uncertain parameters. MCMC (Markov Chain Monte Carlo) is a statistically rigorous sampling method, with a stronger theoretical basis than other methods. The performance of the MCMC...
Evaluation of measurement uncertainty and its numerical calculation by a Monte Carlo method
NASA Astrophysics Data System (ADS)
Wübbeler, Gerd; Krystek, Michael; Elster, Clemens
2008-08-01
The Guide to the Expression of Uncertainty in Measurement (GUM) is the de facto standard for the evaluation of measurement uncertainty in metrology. Recently, evaluation of measurement uncertainty has been proposed on the basis of probability density functions (PDFs) using a Monte Carlo method. The relation between this PDF approach and the standard method described in the GUM is outlined. The Monte Carlo method required for the numerical calculation of the PDF approach is described and illustrated by its application to two examples. The results obtained by the Monte Carlo method for the two examples are compared to the corresponding results when applying the GUM.
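The PDF-based Monte Carlo propagation described here can be sketched in a few lines: draw samples from the input PDFs, push them through the measurement model, and read the estimate, standard uncertainty and coverage interval off the output sample. The measurement model P = R·I² and all numerical values below are illustrative assumptions, not the examples treated in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000  # number of Monte Carlo trials

# Input quantities with assumed PDFs (illustrative values):
# resistance R ~ Normal(1000 ohm, 2 ohm), current I ~ Normal(0.1 A, 0.0005 A)
R = rng.normal(1000.0, 2.0, N)
I = rng.normal(0.1, 0.0005, N)

# Measurement model: power P = R * I**2
P = R * I**2

# Estimate, standard uncertainty, and 95% coverage interval from the output PDF
p_hat = P.mean()
u_p = P.std(ddof=1)
lo, hi = np.quantile(P, [0.025, 0.975])
print(f"P = {p_hat:.3f} W, u(P) = {u_p:.4f} W, 95% interval [{lo:.3f}, {hi:.3f}] W")
```

For this near-linear model the Monte Carlo standard uncertainty agrees closely with the GUM law-of-propagation value, which is the kind of comparison the paper carries out on its two examples.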
Perfetti, Christopher M; Rearden, Bradley T
2014-01-01
This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.
A Scalable Parallel Monte Carlo Method for Free Energy Simulations of Molecular Systems
Chan, Derek Y C
A Scalable Parallel Monte Carlo Method for Free Energy Simulations of Molecular Systems (MALEK O.). An external potential added to the system Hamiltonian is related to the free energy. In the parallel implementation ... Key words: parallel computing; high performance computing; Monte Carlo; free energy; molecular
Goddard III, William A.
The continuous configurational Boltzmann biased direct Monte Carlo method for free energy ... with 400 chains leads to an accuracy of 0.1% in the free energy, whereas simple sampling direct Monte Carlo ..., and to evaluating the mixing free energy for polymer blends. © 1997 American Institute of Physics.
Widom, Michael
2011-01-01
Physical Review E 84, 061912 (2011). Kinetic Monte Carlo method applied to nucleic acid hairpin ... (December 2011). Kinetic Monte Carlo is applied to coarse-grained systems, such as nucleic acid secondary-structure states. Secondary-structure models of nucleic acids, which record the pairings of complementary ...
A Monte Carlo method for the PDF equations of turbulent flow
Pope, S. B.
1980-01-01
A Monte Carlo method is presented which simulates the transport equations of joint probability density functions (pdf's) in turbulent flows. (Finite-difference solutions of the equations are impracticable, mainly because ...)
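The particle idea behind such PDF methods can be illustrated with a minimal sketch: an ensemble of notional particles whose velocities evolve by a simplified Langevin model, so that the ensemble's distribution relaxes to the stationary Gaussian PDF. The time scale and variance below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, dt, steps = 50_000, 0.01, 2000
T_L = 1.0      # Lagrangian time scale (assumed)
sigma2 = 1.0   # target velocity variance (assumed)

u = rng.normal(0.0, 2.0, n_particles)  # deliberately wrong initial spread

# Simplified Langevin model: du = -(u/T_L) dt + sqrt(2 sigma2 / T_L) dW
for _ in range(steps):
    u += -(u / T_L) * dt + np.sqrt(2.0 * sigma2 / T_L * dt) * rng.standard_normal(n_particles)

# The ensemble's stationary statistics approach those of N(0, sigma2)
print(u.mean(), u.var())
```

The attraction of the particle formulation is exactly the one Pope exploits: the cost grows with the number of particles, not exponentially with the dimensionality of the joint PDF, which is what makes finite-difference solution impracticable.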
Multivariate Population Balances via Moment and Monte Carlo Simulation Methods: An Important Sol... ... with a population balance equation governing evolution of the "dispersed" (suspended) particle population ... motivate a broader attack on important multivariate population balance problems, including those ...
Uncertainty Analysis in Upscaling Well Log data By Markov Chain Monte Carlo Method
Hwang, Kyubum
2010-01-16
... densities, and thicknesses of rocks through upscaling well log data. The Markov Chain Monte Carlo (MCMC) method is a potentially beneficial tool that uses randomly generated parameters within a Bayesian framework, producing posterior information...
Walsh, Jonathan A. (Jonathan Alan)
2014-01-01
This thesis presents the development and analysis of computational methods for efficiently accessing and utilizing nuclear data in Monte Carlo neutron transport code simulations. Using the OpenMC code, profiling studies ...
Frequency-domain deviational Monte Carlo method for linear oscillatory gas flows
NASA Astrophysics Data System (ADS)
Ladiges, Daniel R.; Sader, John E.
2015-10-01
Oscillatory non-continuum low Mach number gas flows are often generated by nanomechanical devices in ambient conditions. These flows can be simulated using a range of particle based Monte Carlo techniques, which in their original form operate exclusively in the time-domain. Recently, a frequency-domain weight-based Monte Carlo method was proposed [D. R. Ladiges and J. E. Sader, "Frequency-domain Monte Carlo method for linear oscillatory gas flows," J. Comput. Phys. 284, 351-366 (2015)] that exhibits superior statistical convergence when simulating oscillatory flows. This previous method used the Bhatnagar-Gross-Krook (BGK) kinetic model and contains a "virtual-time" variable to maintain the inherent time-marching nature of existing Monte Carlo algorithms. Here, we propose an alternative frequency-domain deviational Monte Carlo method that facilitates the use of a wider range of molecular models and more efficient collision/relaxation operators. We demonstrate this method with oscillatory Couette flow and the flow generated by an oscillating sphere, utilizing both the BGK kinetic model and hard sphere particles. We also discuss how oscillatory motion of arbitrary time-dependence can be simulated using computationally efficient parallelization. As in the weight-based method, this deviational frequency-domain Monte Carlo method is shown to offer improved computational speed compared to the equivalent time-domain technique.
Efficient, automated Monte Carlo methods for radiation transport
Kong Rong; Ambrose, Martin; Spanier, Jerome
2008-11-20
Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms, based upon an efficient algorithm that couples simulations of the forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k+1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. If still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed.
Monte Carlo method with heuristic adjustment for irregularly shaped food product volume measurement.
Siswantoro, Joko; Prabuwono, Anton Satria; Abdullah, Azizi; Idrus, Bahari
2014-01-01
Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes at a high computational cost, and some of the volume measurement methods based on it have low accuracy. An alternative approach measures the volume of objects with the Monte Carlo method, which performs volume measurements using random points. The Monte Carlo method only requires information on whether random points fall inside or outside an object and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of a food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provides high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method. PMID:24892069
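The hit-or-miss idea behind Monte Carlo volume measurement can be sketched as follows, replacing the paper's multi-camera inside/outside test with an analytic test for a unit sphere (an assumption made purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000

# Bounding box enclosing the object (here a unit-radius sphere, standing in
# for the binary-image inside/outside test extracted from the five cameras)
box_volume = 2.0 ** 3
pts = rng.uniform(-1.0, 1.0, size=(N, 3))

inside = (pts ** 2).sum(axis=1) <= 1.0   # inside/outside test per random point
volume = box_volume * inside.mean()      # fraction of hits times box volume
print(volume)  # close to 4/3 * pi = 4.18879
```

The standard error shrinks as 1/sqrt(N), so accuracy is traded directly against the number of random points; the paper's heuristic adjustment addresses the same trade-off.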
Enhanced physics design with hexagonal repeated structure tools using Monte Carlo methods
Carter, L L; Lan, J S; Schwarz, R A
1991-01-01
This report discusses proposed new missions for the Fast Flux Test Facility (FFTF) reactor which involve the use of target assemblies containing local hydrogenous moderation within this otherwise fast reactor. Parametric physics design studies with Monte Carlo methods are routinely utilized to analyze the rapidly changing neutron spectrum. An extensive utilization of the hexagonal lattice-within-lattice capabilities of the Monte Carlo Neutron Photon (MCNP) continuous energy Monte Carlo computer code is applied here to solving such problems. Simpler examples that use the lattice capability to describe fuel pins within a "brute force" description of the hexagonal assemblies are also given.
A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT
Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan; Burrows, Adam; Dolence, Joshua C.; Loeffler, Frank; Schnetter, Erik
2012-08-20
Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.
Digitally reconstructed radiograph generation by an adaptive Monte Carlo method
NASA Astrophysics Data System (ADS)
Li, Xiaoliang; Yang, Jie; Zhu, Yuemin
2006-06-01
Digitally reconstructed radiograph (DRR) generation is an important step in several medical imaging applications such as 2D-3D image registration, where it is a rate-limiting step. We present a novel DRR generation technique, called the adaptive Monte Carlo volume rendering (AMCVR) algorithm. It is based on the conventional Monte Carlo volume rendering (MCVR) technique, which is very efficient for rendering large medical datasets. In contrast to the MCVR, the AMCVR does not produce sample points by sampling directly in the entire volume domain. Instead, it adaptively divides the entire volume domain into sub-domains using importance separation and then performs sampling in these sub-domains. As a result, the AMCVR produces almost the same image quality as the MCVR while using only half as many samples, increasing projection speed by a factor of 2. Moreover, the AMCVR is suitable for fast memory addressing, which further improves processing speed. Independently of the size of the medical dataset, the AMCVR achieves a frame rate of about 15 Hz on a 2.8 GHz Pentium 4 PC while generating DRRs of reasonably good quality.
Ultracold atoms at unitarity within quantum Monte Carlo methods
Morris, Andrew J.; Lopez Rios, P.; Needs, R. J.
2010-03-15
Variational and diffusion quantum Monte Carlo (VMC and DMC) calculations of the properties of the zero-temperature fermionic gas at unitarity are reported. Our study differs from earlier ones mainly in that we have constructed more accurate trial wave functions and used a larger system size, we have studied the dependence of the energy on the particle density and well width, and we have achieved much smaller statistical error bars. The correct value of the universal ratio of the energy of the interacting to that of the noninteracting gas, {xi}, is still a matter of debate. We find DMC values of {xi} of 0.4244(1) with 66 particles and 0.4339(1) with 128 particles. The spherically averaged pair-correlation functions, momentum densities, and one-body density matrices are very similar in VMC and DMC, which suggests that our results for these quantities are very accurate. We find, however, some differences between the VMC and DMC results for the two-body density matrices and condensate fractions, which indicates that these quantities are more sensitive to the quality of the trial wave function. Our best estimate of the condensate fraction of 0.51 is smaller than the values from earlier quantum Monte Carlo calculations.
Time-step limits for a Monte Carlo Compton-scattering method
Densmore, Jeffery D; Warsa, James S; Lowrie, Robert B
2009-01-01
We perform a stability analysis of a Monte Carlo method for simulating the Compton scattering of photons by free electrons in high energy density applications and develop time-step limits that avoid unstable and oscillatory solutions. Implementing this Monte Carlo technique in multiphysics problems typically requires evaluating the material temperature at its beginning-of-time-step value, which can lead to this undesirable behavior. With a set of numerical examples, we demonstrate the efficacy of our time-step limits.
A Monte Carlo Synthetic-Acceleration Method for Solving the Thermal Radiation Diffusion Equation
Evans, Thomas M; Mosher, Scott W; Slattery, Stuart
2014-01-01
We present a novel synthetic-acceleration based Monte Carlo method for solving the equilibrium thermal radiation diffusion equation in three dimensions. The algorithm performance is compared against traditional solution techniques using a Marshak benchmark problem and a more complex multiple material problem. Our results show not only that our Monte Carlo method can be an effective solver for sparse matrix systems, but also that it performs competitively with deterministic methods including preconditioned Conjugate Gradient while producing numerically identical results. We also discuss various aspects of preconditioning the method and its general applicability to broader classes of problems.
A Monte Carlo synthetic-acceleration method for solving the thermal radiation diffusion equation
Evans, Thomas M.; Mosher, Scott W.; Slattery, Stuart R.; Hamilton, Steven P.
2014-02-01
We present a novel synthetic-acceleration-based Monte Carlo method for solving the equilibrium thermal radiation diffusion equation in three spatial dimensions. The algorithm performance is compared against traditional solution techniques using a Marshak benchmark problem and a more complex multiple material problem. Our results show that our Monte Carlo method is an effective solver for sparse matrix systems. For solutions converged to the same tolerance, it performs competitively with deterministic methods including preconditioned conjugate gradient and GMRES. We also discuss various aspects of preconditioning the method and its general applicability to broader classes of problems.
Conformation-family Monte Carlo: A new method for crystal structure prediction
Pillardy, Jaroslaw; Arnautova, Yelena A.; Czaplewski, Cezary; Gibson, Kenneth D.; Scheraga, Harold A.
2001-01-01
A new global optimization method, Conformation-family Monte Carlo, has been developed recently for searching the conformational space of macromolecules. In the present paper, we adapted this method for prediction of crystal structures of organic molecules without assuming any symmetry constraints except the number of molecules in the unit cell. This method maintains a database of low energy structures that are clustered into families. The structures in this database are improved iteratively by a Metropolis-type Monte Carlo procedure together with energy minimization, in which the search is biased toward the regions of the lowest energy families. The Conformation-family Monte Carlo method is applied to a set of nine rigid and flexible organic molecules by using two popular force fields, AMBER and W99. The method performed well for the rigid molecules and reasonably well for the molecules with torsional degrees of freedom. PMID:11606783
Bold diagrammatic Monte Carlo method applied to fermionized frustrated spins.
Kulagin, S A; Prokof'ev, N; Starykh, O A; Svistunov, B; Varney, C N
2013-02-15
We demonstrate, by considering the triangular lattice spin-1/2 Heisenberg model, that Monte Carlo sampling of skeleton Feynman diagrams within the fermionization framework offers a universal first-principles tool for strongly correlated lattice quantum systems. We observe the fermionic sign blessing--cancellation of higher order diagrams leading to a finite convergence radius of the series. We calculate the magnetic susceptibility of the triangular-lattice quantum antiferromagnet in the correlated paramagnet regime and reveal a surprisingly accurate microscopic correspondence with its classical counterpart at all accessible temperatures. The extrapolation of the observed relation to zero temperature suggests the absence of the magnetic order in the ground state. We critically examine the implications of this unusual scenario. PMID:25166359
Bahreyni Toossi, Mohammad Taghi; Ghorbani, Mahdi; Mowlavi, Ali Asghar; Meigooni, Ali Soleimani
2012-01-01
Background: Dosimetric characteristics of a high dose rate (HDR) GZP6 Co-60 brachytherapy source have been evaluated following the American Association of Physicists in Medicine Task Group 43U1 (AAPM TG-43U1) recommendations for clinical applications. Materials and methods: The MCNP-4C and MCNPX Monte Carlo codes were utilized to calculate the dose rate constant, two-dimensional (2D) dose distribution, radial dose function and 2D anisotropy function of the source. These parameters are compared with the available data for the Ralstron 60Co and microSelectron 192Ir sources. In addition, a superimposition method was developed to extend the results obtained for GZP6 source No. 3 to the other GZP6 sources. Results: The simulated dose rate constant for the GZP6 source was 1.104±0.03 cGy h-1 U-1. The graphical and tabulated radial dose function and 2D anisotropy function of this source are presented here. The results of these investigations show that the dosimetric parameters of the GZP6 source are comparable to those of the Ralstron source. While the dose rate constants for the two 60Co sources are similar to that of the microSelectron 192Ir source, there are differences between the radial dose functions and anisotropy functions. The radial dose function of the 192Ir source is less steep than those of both 60Co source models. In addition, the 60Co sources show a more isotropic dose distribution than the 192Ir source. Conclusions: The superimposition method is applicable to producing dose distributions for other source arrangements from the dose distribution of a single source. The calculated dosimetric quantities of this new source can be introduced as input data to the GZP6 treatment planning system (TPS) and used to validate the performance of the TPS. PMID:23077455
Perfetti, Christopher M; Martin, William R; Rearden, Bradley T; Williams, Mark L
2012-01-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.
Measuring Stellar Radial Velocity using Markov Chain Monte Carlo(MCMC) Method
NASA Astrophysics Data System (ADS)
Song, Yihan; Luo, Ali; Zhao, Yongheng
2014-01-01
Stellar radial velocity is estimated by using template fitting and Markov Chain Monte Carlo (MCMC) methods. This method works on LAMOST stellar spectra. The MCMC simulation generates a probability distribution of the RV, and the RV error can also be computed from this distribution.
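A minimal sketch of the approach: a Metropolis random walk over a Doppler-shift parameter, with a hypothetical Gaussian absorption line standing in for a real template. All names and numbers below (template shape, noise level, v_true) are illustrative assumptions, not LAMOST specifics.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: a Gaussian absorption line as "template";
# the observed spectrum is the template shifted by v_true plus noise.
x = np.linspace(-50.0, 50.0, 400)
def template(v):
    return 1.0 - 0.5 * np.exp(-0.5 * ((x - v) / 5.0) ** 2)

v_true, noise = 7.0, 0.02
obs = template(v_true) + rng.normal(0.0, noise, x.size)

def log_like(v):
    r = obs - template(v)
    return -0.5 * np.sum((r / noise) ** 2)

# Metropolis random walk over the shift parameter v
v, lp, samples = 0.0, log_like(0.0), []
for _ in range(20_000):
    v_new = v + rng.normal(0.0, 0.5)
    lp_new = log_like(v_new)
    if np.log(rng.uniform()) < lp_new - lp:   # accept/reject step
        v, lp = v_new, lp_new
    samples.append(v)

samples = np.array(samples[5000:])  # discard burn-in
print(samples.mean(), samples.std())  # posterior mean ~ v_true; std gives the RV error
```

The chain's histogram is exactly the "probability distribution of the RV" mentioned in the abstract, and its standard deviation serves as the RV error estimate.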
A Monte Carlo Study of Eight Confidence Interval Methods for Coefficient Alpha
ERIC Educational Resources Information Center
Romano, Jeanine L.; Kromrey, Jeffrey D.; Hibbard, Susan T.
2010-01-01
The purpose of this research is to examine eight of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions. In general, the differences in…
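The coverage-simulation methodology used in such studies can be sketched generically: repeatedly draw samples from a known population, build the interval, and count how often it captures the true parameter. For brevity the sketch below uses a t-interval for a normal mean rather than coefficient alpha; the simulation logic is the same.

```python
import numpy as np

rng = np.random.default_rng(3)
n_reps, n, mu, sigma = 20_000, 30, 0.0, 1.0
t_crit = 2.045  # t quantile at 0.975 with df = 29

# Simulate many samples under known, controlled population conditions
samples = rng.normal(mu, sigma, size=(n_reps, n))
means = samples.mean(axis=1)
halves = t_crit * samples.std(axis=1, ddof=1) / np.sqrt(n)

# Empirical coverage: fraction of intervals containing the true parameter
coverage = np.mean((means - halves <= mu) & (mu <= means + halves))
print(coverage)  # close to the nominal 0.95
```

Comparing such empirical coverage (and interval width) across the eight alpha methods is what lets the authors rank their accuracy and precision.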
A Monte Carlo Synthetic Acceleration Method for the Non-Linear, Time-Dependent Diffusion Equation
Evans, Thomas M; Mosher, Scott W
2009-01-01
We present a synthetic-acceleration based Monte Carlo method for solving the 1T thermal radiation diffusion equations. We show that this method can be an effective solver for sparse matrix systems. We also discuss its general applicability to broader classes of problems.
Kinetic Monte Carlo method for dislocation migration in the presence of solute Chaitanya S. Deo
Cai, Wei
Kinetic Monte Carlo method for dislocation migration in the presence of solute. A kinetic Monte Carlo method for simulating dislocation motion in alloys within the framework of the kink model, applied to the motion of a <111>-oriented screw dislocation on a {011} slip plane in body-centered-cubic Mo-based alloys. The model ...
A Monte Carlo Method Used for the Identification of the Muscle Spindle
Rigas, Alexandros
A Monte Carlo Method Used for the Identification of the Muscle Spindle (Vassiliki K. Kotti). ... the behavior of the muscle spindle is modeled by using a logistic regression model. The system receives input from ... the recovery and the summation functions. The most favorable method of estimating the parameters of the muscle ...
Comparison of the Monte Carlo adjoint-weighted and differential operator perturbation methods
Kiedrowski, Brian C; Brown, Forrest B
2010-01-01
Two perturbation theory methodologies are implemented for k-eigenvalue calculations in the continuous-energy Monte Carlo code, MCNP6. A comparison of the accuracy of these techniques, the differential operator and adjoint-weighted methods, is performed numerically and analytically. Typically, the adjoint-weighted method shows better performance over a larger range; however, there are exceptions.
Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport
McKinley, M S; Brooks III, E D; Daffin, F
2004-12-13
Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.
High-order path-integral Monte Carlo methods for solving quantum dot problems
NASA Astrophysics Data System (ADS)
Chin, Siu A.
2015-03-01
The conventional second-order path-integral Monte Carlo method is plagued with the sign problem in solving many-fermion systems. This is due to the large number of antisymmetric free-fermion propagators that are needed to extract the ground state wave function at large imaginary time. In this work we show that optimized fourth-order path-integral Monte Carlo methods, which use no more than five free-fermion propagators, can yield accurate quantum dot energies for up to 20 polarized electrons with the use of the Hamiltonian energy estimator.
Quantum-trajectory Monte Carlo method for study of electron-crystal interaction in STEM.
Ruan, Z; Zeng, R G; Ming, Y; Zhang, M; Da, B; Mao, S F; Ding, Z J
2015-07-21
In this paper, a novel quantum-trajectory Monte Carlo simulation method is developed to study electron beam interaction with a crystalline solid for application to electron microscopy and spectroscopy. The method combines the Bohmian quantum trajectory method, which treats electron elastic scattering and diffraction in a crystal, with a Monte Carlo sampling of electron inelastic scattering events along quantum trajectory paths. We study in this work the electron scattering and secondary electron generation process in crystals for a focused incident electron beam, leading to understanding of the imaging mechanism behind the atomic resolution secondary electron image that has been recently achieved in experiment with a scanning transmission electron microscope. According to this method, the Bohmian quantum trajectories have been calculated at first through a wave function obtained via a numerical solution of the time-dependent Schrödinger equation with a multislice method. The impact parameter-dependent inner-shell excitation cross section then enables the Monte Carlo sampling of ionization events produced by incident electron trajectories travelling along atom columns for excitation of high energy knock-on secondary electrons. Following cascade production, transportation and emission processes of true secondary electrons of very low energies are traced by a conventional Monte Carlo simulation method to present image signals. Comparison of the simulated image for a Si(110) crystal with the experimental image indicates that the dominant mechanism of atomic resolution of secondary electron image is the inner-shell ionization events generated by a high-energy electron beam. PMID:26082190
A quasi-Monte Carlo method for multicriteria optimization
Athan, T.
1994-12-31
Multicriteria optimization treats design problems with multiple non-commensurable design criteria. These methods are unwieldy when the number of criteria is large and when multiple design configurations are available. The Quasi-Random Weighted Criteria method uses quasi-random sequences to direct a series of optimizations. The method generates for each configuration a set of candidate solutions that are representative of the range of available solutions. This can aid in the selection between configurations. The explicit determination of parameter values to represent decision maker preferences, an intermediate step in most methods, is eliminated. Instead, a preferred solution can be selected directly. Iterative use finds additional solutions near the selected point.
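A minimal sketch of the idea, assuming a toy biobjective problem with a closed-form scalarized minimizer: a van der Corput (quasi-random) sequence supplies the weights that direct the series of optimizations, spreading the candidate solutions evenly over the range of available solutions.

```python
import numpy as np

def van_der_corput(n, base=2):
    """Quasi-random low-discrepancy sequence in [0, 1)."""
    seq = []
    for i in range(1, n + 1):
        q, denom, x = i, 1.0, 0.0
        while q:
            q, r = divmod(q, base)
            denom *= base
            x += r / denom
        seq.append(x)
    return np.array(seq)

# Two convex criteria whose Pareto-optimal set is x in [-1, 1] (toy problem)
def f1(x): return (x - 1.0) ** 2
def f2(x): return (x + 1.0) ** 2

# Each quasi-random weight w directs one scalarized optimization:
# minimize w*f1 + (1-w)*f2, whose minimizer here is x = 2w - 1 in closed form
weights = van_der_corput(16)
candidates = 2.0 * weights - 1.0

# The candidate set is spread evenly across the range of available solutions
print(np.sort(candidates))
```

In a real design problem each weight vector would seed a numerical optimization rather than a closed-form solve; the low-discrepancy spread of the weights is what removes the need to elicit explicit preference parameters up front.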
Markov Chain Monte Carlo solution of the BK equation through the Newton-Kantorovich method
Krzysztof Bozek; Krzysztof Kutak; Wieslaw Placzek
2013-06-28
We propose a new method for the Monte Carlo solution of non-linear integral equations by combining the Newton-Kantorovich method for solving non-linear equations with the Markov Chain Monte Carlo (MCMC) method for solving linear equations. The Newton-Kantorovich method allows one to express the non-linear equation as a system of linear equations, which can then be treated by the MCMC (random walk) algorithm. We apply this method to the Balitsky-Kovchegov (BK) equation describing the evolution of the gluon density at low x. Results of numerical computations show that the MCMC method is both precise and efficient. The presented algorithm may be particularly suited to solving more complicated and higher-dimensional non-linear integral equations, for which traditional methods become infeasible.
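The linear-solve half of such a scheme can be sketched with a textbook random-walk (Neumann series) Monte Carlo solver for x = Hx + b, which is the role the MCMC algorithm plays after the Newton-Kantorovich linearization. The 3x3 system, absorption probability and walk count below are illustrative assumptions, not the BK kernel.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy linear system x = H x + b with spectral radius of H below 1,
# standing in for one Newton-Kantorovich linearization step
H = np.array([[0.1, 0.3, 0.1],
              [0.2, 0.1, 0.2],
              [0.1, 0.2, 0.2]])
b = np.array([1.0, 2.0, 0.5])

def mc_solve(H, b, n_walks=20_000, p_absorb=0.3):
    """Estimate x_i = sum_k (H^k b)_i by random walks over the index set."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        total = 0.0
        for _ in range(n_walks):
            state, weight = i, 1.0
            while True:
                total += weight * b[state]      # score b at each visited state
                if rng.uniform() < p_absorb:    # absorb: terminate the walk
                    break
                nxt = rng.integers(n)           # uniform transition probability
                weight *= H[state, nxt] / ((1.0 - p_absorb) / n)
                state = nxt
        x[i] = total / n_walks
    return x

x_mc = mc_solve(H, b)
x_exact = np.linalg.solve(np.eye(3) - H, b)
print(x_mc, x_exact)  # the two agree to within the statistical error
```

Because each walk is independent, the estimator parallelizes trivially, which is part of the appeal of MCMC solvers for the high-dimensional linearized systems mentioned in the abstract.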
A multi-group Monte Carlo core analysis method and its application in SCWR design
Zhang, P.; Wang, K.; Yu, G.
2012-07-01
Complex geometries and spectra characterize many newly developed nuclear energy systems, so the suitability and precision of traditional deterministic codes are questionable when applied to simulate these systems. In contrast, the Monte Carlo method has inherent advantages in dealing with complex geometry and spectra. Its main disadvantage is that it takes a long time to obtain reliable results, so the efficiency is too low for ordinary core design work. A new Monte Carlo core analysis scheme has been developed to increase the calculation efficiency. It proceeds in two steps: first, an assembly-level simulation is performed by the continuous-energy Monte Carlo method, which is suitable for any geometry and spectrum configuration, and the assembly multi-group constants are tallied at the same time; second, the core-level calculation is performed by the multi-group Monte Carlo method, using the assembly group constants generated in the first step. Compared with heterogeneous Monte Carlo calculations of the whole core, this two-step scheme is more efficient, and its precision is acceptable for the preliminary analysis of novel nuclear systems. Using this core analysis scheme, an SCWR core was designed based on a new SCWR assembly design. The core output is about 1,100 MWe, and a cycle length of about 550 EFPDs can be achieved with a 3-batch refueling pattern. The average and maximum discharge burn-ups are about 53.5 and 60.9 MWD/kgU, respectively. (authors)
Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.
2008-10-31
Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.
Sequential Monte Carlo Methods With Applications To Communication Channels
Boddikurapati, Sirish
2010-07-14
to achieve this by incorporating noisy observations as they become available with prior knowledge of the system model. Bayesian methods provide a general framework for dynamic state estimation problems. The central idea behind this recursive Bayesian...
ERIC Educational Resources Information Center
Kim, Jee-Seon; Bolt, Daniel M.
2007-01-01
The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…
Investigating the Limits of Monte Carlo Tree Search Methods in Computer Go
Müller, Martin
Shih-Chieh Huang. Despite much progress in Computer Go, program performance is uneven, and several conjectures are offered regarding the behavior of MCTS-based Go programs in specific types of Go positions.
A mean field theory of sequential Monte Carlo methods P. Del Moral
Del Moral , Pierre
P. Del Moral, INRIA Centre Bordeaux-Sud Ouest, France. Lecture slides (32 pages); outline: foundations of sequential Monte Carlo methods.
Quantum Monte Carlo Methods for First Principles Simulation of Liquid Water
ERIC Educational Resources Information Center
Gergely, John Robert
2009-01-01
Obtaining an accurate microscopic description of water structure and dynamics is of great interest to molecular biology researchers and in the physics and quantum chemistry simulation communities. This dissertation describes efforts to apply quantum Monte Carlo methods to this problem with the goal of making progress toward a fully "ab initio"…
Waltham, Chris
Robust Signal Extraction Methods and Monte Carlo Sensitivity Studies for the Sudbury Neutrino Observatory (SNO+). An important part of the SNO+ physics program will be a search for neutrinoless double beta decay.
Variance reduction methods applied to deep-penetration Monte Carlo problems
Cramer, S.N.; Tang, J.S.
1986-01-01
A review of standard variance reduction methods for deep-penetration Monte Carlo calculations is presented. Comparisons and contrasts are made with methods for nonpenetration and reactor core problems. Difficulties and limitations of the Monte Carlo method for deep-penetration calculations are discussed in terms of transport theory, statistical uncertainty and computing technology. Each aspect of a Monte Carlo code calculation is detailed, including the natural and biased forms of (1) the source description, (2) the transport process, (3) the collision process, and (4) the estimation process. General aspects of cross-section data use and geometry specification are also discussed. Adjoint calculations are examined in the context of both complete calculations and approximate calculations for use as importance functions for forward calculations. The idea of importance and the realization of the importance function are covered in both general and mathematical terms. Various methods of adjoint importance generation and its implementation are covered, including the simultaneous generation of both forward and adjoint fluxes in one calculation. A review of the current literature on mathematical aspects of variance reduction and statistical uncertainty is given. Three widely used Monte Carlo codes, MCNP, MORSE, and TRIPOLI, are compared and contrasted in connection with many of the specific items discussed throughout the presentation. 75 refs., 16 figs.
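The core idea behind deep-penetration biasing can be shown on a toy one-dimensional model, not any of the codes reviewed: estimating the transmission probability P(X > 10) for unit-mean exponential free paths. An analog run with 10^5 histories tallies only a handful of transmissions, whereas sampling from a stretched path-length distribution and carrying the likelihood-ratio weight gives a low-variance estimate of the same expectation.

```python
import math
import random

def transmission_prob_is(t, n=100000, lam=0.1, seed=42):
    """Importance-sampled estimate of P(X > t) for X ~ Exp(1).

    Paths are drawn from the 'stretched' biased density lam * exp(-lam * x)
    with lam < 1, so deep penetrations are common; each tally is weighted by
    the likelihood ratio exp(-x) / (lam * exp(-lam * x)), which keeps the
    estimator unbiased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(lam)                     # biased: longer free paths
        if x > t:
            total += math.exp(-(1.0 - lam) * x) / lam  # likelihood-ratio weight
    return total / n

est = transmission_prob_is(10.0)   # exact answer is exp(-10), about 4.54e-5
```

Choosing lam near 1/t roughly minimizes the variance for this problem; the same exponential-tilting idea underlies the path-length biasing schemes discussed in the review.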
Sequential Monte Carlo Methods for Normalized Random Measure with Independent Increments
Del Moral , Pierre
Keywords: Dirichlet process; normalized generalized Gamma process; clustering time series; slice sampling; Dirichlet process mixtures (Escobar and West, 1995).
Monte Carlo Methods for Equilibrium and Nonequilibrium Problems in Interfacial Electrochemistry
Gregory Brown; Per Arne Rikvold; S. J. Mitchell; M. A. Novotny
1998-05-11
We present a tutorial discussion of Monte Carlo methods for equilibrium and nonequilibrium problems in interfacial electrochemistry. The discussion is illustrated with results from simulations of three specific systems: bromine adsorption on silver (100), underpotential deposition of copper on gold (111), and electrodeposition of urea on platinum (100).
Power Analysis for Complex Mediational Designs Using Monte Carlo Methods
ERIC Educational Resources Information Center
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2010-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex…
NASA Astrophysics Data System (ADS)
Dixon, D. A.; Prinja, A. K.; Franke, B. C.
2015-09-01
This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method including: systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering DCS model by extending previous work on the MP method beyond analytical DCSs to the partial-wave (PW) elastic tabulated DCS data.
A Hamiltonian Monte-Carlo method for Bayesian inference of supermassive black hole binaries
NASA Astrophysics Data System (ADS)
Porter, Edward K.; Carré, Jérôme
2014-07-01
We investigate the use of a Hamiltonian Monte-Carlo to map out the posterior density function for supermassive black hole binaries. While previous Markov Chain Monte-Carlo (MCMC) methods, such as Metropolis-Hastings MCMC, have been successfully employed for a number of different gravitational wave sources, these methods are essentially random walk algorithms. The Hamiltonian Monte-Carlo treats the inverse likelihood surface as a 'gravitational potential' and, by introducing canonical positions and momenta, dynamically evolves the Markov chain by solving Hamilton's equations of motion. This method is not as widely used as other MCMC algorithms due to the necessity of calculating gradients of the log-likelihood, which for most applications results in a bottleneck that makes the algorithm computationally prohibitive. We circumvent this problem by using accepted initial phase-space trajectory points to analytically fit for each of the individual gradients. Eliminating the waveform generation needed for the numerical derivatives reduces the total number of required templates for a $10^6$ iteration chain from $\sim 10^9$ to $\sim 10^6$. The result is an implementation of the Hamiltonian Monte-Carlo that is faster, and more efficient by a factor of approximately the dimension of the parameter space, than a Hessian MCMC.
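The basic algorithm can be sketched for a generic one-dimensional target; this is a minimal illustration of leapfrog-integrated Hamiltonian Monte-Carlo, not the gravitational-wave pipeline, and it computes gradients directly rather than fitting them from accepted trajectory points as the paper does.

```python
import numpy as np

def hmc_sample(logp, grad, x0, n_samples=3000, eps=0.2, n_leap=10, seed=1):
    """Basic 1-D Hamiltonian Monte Carlo: leapfrog integration of Hamilton's
    equations for H(x, p) = -log p(x) + p^2 / 2, with a Metropolis
    accept/reject step to correct for integration error."""
    rng = np.random.default_rng(seed)
    x = x0
    out = np.empty(n_samples)
    for k in range(n_samples):
        p = rng.normal()                      # resample momentum
        xn, pn = x, p
        pn += 0.5 * eps * grad(xn)            # leapfrog: half momentum step
        for _ in range(n_leap - 1):
            xn += eps * pn                    # full position step
            pn += eps * grad(xn)              # full momentum step
        xn += eps * pn                        # final position step
        pn += 0.5 * eps * grad(xn)            # final half momentum step
        dH = (-logp(xn) + 0.5 * pn * pn) - (-logp(x) + 0.5 * p * p)
        if np.log(rng.random()) < -dH:        # accept with probability exp(-dH)
            x = xn
        out[k] = x
    return out

# target: standard normal, log p(x) = -x^2/2, gradient -x
samples = hmc_sample(lambda x: -0.5 * x * x, lambda x: -x, 0.0)
```

The long deterministic trajectories are what distinguish this from a random-walk sampler: successive samples are far apart in parameter space, at the cost of one gradient evaluation per leapfrog step.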
NASA Astrophysics Data System (ADS)
Kim, Minho; Lee, Hyounggun; Kim, Hyosim; Park, Hongmin; Lee, Wonho; Park, Sungho
2014-03-01
This study evaluated the Monte Carlo method for dose calculation in fluoroscopy by using a realistic human phantom. The dose was calculated using Monte Carlo N-particle extended (MCNPX) in simulations and was measured using the Korean Typical Man-2 (KTMAN-2) phantom in the experiments. MCNPX is a widely used simulation tool based on the Monte Carlo method and uses random sampling. KTMAN-2 is a virtual phantom written in the MCNPX language and is based on the typical Korean man. This study was divided into two parts: simulations and experiments. In the former, the spectrum generation program SRS-78 was used to obtain the output energy spectrum for fluoroscopy; then, the dose to each target organ was calculated using KTMAN-2 with MCNPX. In the latter part, the output of the fluoroscope was first calibrated, and TLDs (thermoluminescent dosimeters) were inserted in the ART (Alderson Radiation Therapy) phantom at the same places as in the simulation. The phantom was then exposed to radiation, and the simulated and the experimental doses were compared. In order to convert the simulation unit to the dose unit, we set a normalization factor (NF). Comparing the simulated with the experimental results, we found most of the values to be similar, which demonstrates the effectiveness of the Monte Carlo method in fluoroscopic dose evaluation. The equipment used in this study included a TLD, a TLD reader, an ART phantom, an ionization chamber and a fluoroscope.
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I
2014-06-15
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest, such as patient organ doses and scatter-to-primary ratios in radiographic projections, in just a few seconds using affordable computational resources. Programmable graphics processing units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation.
Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. 
To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided.
Driscoll, Toby
Searching for Rare Growth Factors Using Multicanonical Monte Carlo Methods. Tobin A. Driscoll, Kara L. Maki. Society for Industrial and Applied Mathematics, Vol. 49, No. 4, pp. 673-692. Abstract: The growth factor of a matrix quantifies the amount
Monte-Carlo methods make Dempster-Shafer formalism feasible
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik YA.; Bernat, Andrew; Borrett, Walter; Mariscal, Yvonne; Villa, Elsa
1991-01-01
One of the main obstacles to the application of the Dempster-Shafer formalism is its computational complexity. If we combine m different pieces of knowledge, then in the general case we have to perform up to 2^m computational steps, which for large m is infeasible. For several important cases, algorithms with smaller running time have been proposed. We prove, however, that if we want to compute the belief bel(Q) in any given query Q, then exponential time is inevitable. It is still inevitable if we want to compute bel(Q) with a given precision epsilon. This restriction corresponds to the natural idea that since the initial masses are known only approximately, there is no sense in trying to compute bel(Q) precisely. A further idea is that there is always some doubt in the whole knowledge, so there is always a probability p_0 that the expert's knowledge is wrong. In view of that, it is sufficient to have an algorithm that gives a correct answer with probability greater than 1 - p_0. If we use the original Dempster's combination rule, this relaxation diminishes the running time but still leaves the problem infeasible in the general case. We show that for the alternative combination rules proposed by Smets and Yager, feasible methods exist. We also show how these methods can be parallelized, and what parallelization model fits this problem best.
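The combination operation whose cost is analyzed here can be written down directly for two sources. The sketch below implements Dempster's rule exactly and, alongside it, a simple Monte Carlo estimator that samples one focal element per source and discards conflicting draws; this illustrates the randomized approach in the simplest possible setting and is not the paper's parallel algorithm.

```python
import random
from itertools import product

def dempster_exact(m1, m2):
    """Dempster's rule: conflict-normalized combination of two mass
    functions whose focal elements are frozensets."""
    raw, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to the empty set
    return {s: v / (1.0 - conflict) for s, v in raw.items()}

def dempster_mc(m1, m2, n=50000, seed=0):
    """Monte Carlo estimate of the same combination: draw one focal element
    per source (proportionally to its mass), discard conflicting (empty)
    intersections, and use the surviving frequencies as mass estimates."""
    rng = random.Random(seed)
    f1, w1 = list(m1), list(m1.values())
    f2, w2 = list(m2), list(m2.values())
    counts, kept = {}, 0
    for _ in range(n):
        inter = rng.choices(f1, w1)[0] & rng.choices(f2, w2)[0]
        if inter:
            counts[inter] = counts.get(inter, 0) + 1
            kept += 1
    return {s: c / kept for s, c in counts.items()}

A, B = frozenset({"a"}), frozenset({"b"})
m1 = {A: 0.6, A | B: 0.4}
m2 = {B: 0.5, A | B: 0.5}
```

With m sources the sampling loop draws one focal element from each, so its cost grows linearly in m per sample rather than exponentially, which is the feasibility argument made in the abstract.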
In silico prediction of the β-cyclodextrin complexation based on Monte Carlo method.
Veselinović, Aleksandar M; Veselinović, Jovana B; Toropov, Andrey A; Toropova, Alla P; Nikolić, Goran M
2015-11-10
In this study, QSPR models were developed to predict the complexation of structurally diverse compounds with β-cyclodextrin based on SMILES-notation optimal descriptors using the Monte Carlo method. The predictive potential of the applied approach was tested with three random splits into the sub-training, calibration, test and validation sets and with different statistical methods. The obtained results demonstrate that Monte Carlo method based modeling is a very promising computational method in QSPR studies for predicting the complexation of structurally diverse compounds with β-cyclodextrin. The SMILES attributes (structural features, both local and global), defined as molecular fragments, which are promoters of the increase/decrease of molecular binding constants were identified. These structural features were correlated to the complexation process, and their identification helped to improve the understanding of the complexation mechanisms of the host molecules. PMID:26320546
Perfetti, C.; Martin, W.; Rearden, B.; Williams, M.
2012-07-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the Shift Monte Carlo code within the SCALE code package. The methods were used for two small-scale test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods. (authors)
Frequency-domain Monte Carlo method for linear oscillatory gas flows
NASA Astrophysics Data System (ADS)
Ladiges, Daniel R.; Sader, John E.
2015-03-01
Gas flows generated by resonating nanoscale devices inherently occur in the non-continuum, low Mach number regime. Numerical simulations of such flows using the standard direct simulation Monte Carlo (DSMC) method are hindered by high statistical noise, which has motivated the development of several alternate Monte Carlo methods for low Mach number flows. Here, we present a frequency-domain low Mach number Monte Carlo method based on the Boltzmann-BGK equation, for the simulation of oscillatory gas flows. This circumvents the need for temporal simulations, as is currently required, and provides direct access to both amplitude and phase information using a pseudo-steady algorithm. The proposed method is validated for oscillatory Couette flow and the flow generated by an oscillating sphere. Good agreement is found with an existing time-domain method and accurate numerical solutions of the Boltzmann-BGK equation. Analysis of these simulations using a rigorous statistical approach shows that the frequency-domain method provides a significant improvement in computational speed.
NASA Astrophysics Data System (ADS)
Nakano, Shinya; Suzuki, Kazue; Kawamura, Kenji; Parrenin, Frederic; Higuchi, Tomoyuki
2015-04-01
A technique for estimating the age-depth relationship and its uncertainty in ice cores has been developed. The age-depth relationship is mainly determined by the accumulation of snow at the site of the ice core and the thinning process due to the horizontal stretching and vertical compression of an ice layer. However, both the accumulation process and the thinning process are not fully known. In order to appropriately estimate the age as a function of depth, it is crucial to incorporate observational information into a model describing the accumulation and thinning processes. In the proposed technique, the age as a function of depth is estimated from age markers and time series of δ18O data. The estimation is achieved using a method combining a sequential Monte Carlo method and the Markov chain Monte Carlo method, as proposed by Andrieu et al. (2010). In this hybrid method, the posterior distributions for the parameters in the models for the accumulation and thinning processes are computed using the standard Metropolis-Hastings method. Meanwhile, sampling from the posterior distribution for the age-depth relationship is achieved by using a sequential Monte Carlo approach at each iteration of the Metropolis-Hastings method. A sequential Monte Carlo method normally suffers from the degeneracy problem, especially in cases where the number of steps is large. However, when it is combined with the Metropolis-Hastings method, the degeneracy problem can be overcome by collecting a large number of samples obtained by many iterations of the Metropolis-Hastings method. We demonstrate the result obtained by applying the proposed technique to the ice core data from Dome Fuji in Antarctica.
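The sequential Monte Carlo component of such a hybrid scheme can be illustrated by a bootstrap particle filter for a toy random-walk state-space model; the noise levels and model below are illustrative, not the ice-core model of the abstract. The per-step resampling shown here is also the source of the degeneracy problem the authors mention for long chains.

```python
import numpy as np

def bootstrap_filter(ys, n_part=500, q=0.05, r=0.2, seed=0):
    """Bootstrap (sequential importance resampling) particle filter for the
    model x_t = x_{t-1} + N(0, q^2), y_t = x_t + N(0, r^2).

    Each step propagates the particles through the state model, weights them
    by the Gaussian observation likelihood, and resamples proportionally to
    the weights."""
    rng = np.random.default_rng(seed)
    parts = rng.normal(0.0, 1.0, n_part)             # diffuse prior
    means = np.empty(len(ys))
    for k, y in enumerate(ys):
        parts = parts + rng.normal(0.0, q, n_part)   # propagate
        w = np.exp(-0.5 * ((y - parts) / r) ** 2)    # likelihood weights
        w /= w.sum()
        parts = rng.choice(parts, size=n_part, p=w)  # multinomial resampling
        means[k] = parts.mean()                      # filtered mean estimate
    return means

# synthetic data: true state held at 1.0, observed with noise
rng = np.random.default_rng(7)
ys = 1.0 + rng.normal(0.0, 0.2, 50)
m = bootstrap_filter(ys)
```

In the particle-MCMC construction of Andrieu et al., a filter like this is run inside every Metropolis-Hastings iteration, and its likelihood estimate drives the accept/reject decision on the model parameters.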
A Monte Carlo method for solving the one-dimensional telegraph equations with boundary conditions
NASA Astrophysics Data System (ADS)
Acebrón, Juan A.; Ribeiro, Marco A.
2016-01-01
A Monte Carlo algorithm is derived to solve the one-dimensional telegraph equations in a bounded domain subject to resistive and non-resistive boundary conditions. The proposed numerical scheme is more efficient than the classical Kac's theory because it does not require the discretization of time. The algorithm has been validated by comparing the results obtained with theory and the Finite-difference time domain (FDTD) method for a typical two-wire transmission line terminated at both ends with general boundary conditions. We have also tested transmission line heterogeneities to account for wave propagation in multiple media. The algorithm is inherently parallel, since it is based on Monte Carlo simulations, and does not suffer from the numerical dispersion and dissipation issues that arise in finite difference-based numerical schemes on a lossy medium. This allowed us to develop an efficient numerical method, capable of outperforming the classical FDTD method for large scale problems and high frequency signals.
Kennedy, R R; Baker, A B
1993-09-01
We have developed three models which describe the relationship between cardiac output and the uptake of volatile anaesthetic agents, based on the Fick equation, and determined whether these models could provide useful methods of cardiac output measurement. Because many variables are involved in the calculation of cardiac output using these methods, a "Monte Carlo" simulation was performed to investigate the combined effect of uncertainties in several variables on the resultant cardiac output estimate. We found that the single-breath model was most accurate when the inspired concentration was large, while the rebreathing model was better with smaller inspired concentrations. The three-breath model was the least accurate under all conditions studied. The volatile anaesthetics were generally more accurate than nitrous oxide, with both enflurane and halothane more accurate than isoflurane. The "Monte Carlo" technique provides a valuable tool for the analysis of errors in measurement methods. PMID:8398524
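A "Monte Carlo" error analysis of this kind amounts to sampling each measured quantity from its error distribution and propagating the samples through the Fick relation Q = VO2 / (CaO2 - CvO2). The sketch below uses illustrative nominal values and standard deviations, not the paper's data or its breath-by-breath models.

```python
import numpy as np

def fick_mc(n=100000, seed=0):
    """Monte Carlo propagation of measurement uncertainty through the Fick
    equation Q = VO2 / (CaO2 - CvO2).  All nominal values and standard
    deviations are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    vo2 = rng.normal(250.0, 10.0, n)   # O2 uptake, mL/min
    ca = rng.normal(200.0, 4.0, n)     # arterial O2 content, mL O2 / L blood
    cv = rng.normal(150.0, 4.0, n)     # mixed venous O2 content, mL O2 / L blood
    q = vo2 / (ca - cv)                # cardiac output, L/min
    return q.mean(), q.std()

q_mean, q_sd = fick_mc()   # nominal Q = 250 / 50 = 5 L/min
```

The spread of the resulting distribution directly quantifies how input uncertainties combine, which is exactly the comparison the paper uses to rank the single-breath, rebreathing and three-breath models.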
Monte Carlo method of radiative transfer applied to a turbulent flame modeling with LES
NASA Astrophysics Data System (ADS)
Zhang, Jin; Gicquel, Olivier; Veynante, Denis; Taine, Jean
2009-06-01
Radiative transfer plays an important role in the numerical simulation of turbulent combustion. However, because combustion and radiation are characterized by different time scales and different spatial and chemical treatments, radiation effects are often neglected or only roughly modelled. The coupling of a large eddy simulation combustion solver and a radiation solver through a dedicated language, CORBA, is investigated. Two formulations of the Monte Carlo method (Forward Method and Emission Reciprocity Method) employed to solve the RTE have been compared in a one-dimensional flame test case using three-dimensional calculation grids with absorbing and emitting media, in order to validate the Monte Carlo radiative solver and to choose the most efficient model for coupling. Then the results obtained using two different RTE solvers (Reciprocity Monte Carlo method and Discrete Ordinate Method) applied to a three-dimensional flame holder set-up with a correlated-k distribution model describing the real gas medium spectral radiative properties are compared not only in terms of the physical behavior of the flame, but also in computational performance (storage requirement, CPU time and parallelization efficiency). To cite this article: J. Zhang et al., C. R. Mecanique 337 (2009).
Application of the subgroup method to multigroup Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Martin, Nicolas
This thesis is dedicated to the development of a Monte Carlo neutron transport solver based on the subgroup (or multiband) method. In this formalism, cross sections for resonant isotopes are represented in the form of probability tables over the whole energy spectrum. This study is intended to test and validate this approach in lattice physics and criticality-safety applications. The probability table method seems promising since it introduces an alternative computational path between the legacy continuous-energy representation and the multigroup method. In the first case, the amount of data invoked in continuous-energy Monte Carlo calculations can be very large and tends to slow down the overall computation; on the other hand, this model preserves the quality of the physical laws present in the ENDF format. Due to its low computational cost, multigroup Monte Carlo is usually the basis of production codes in criticality-safety studies. However, the use of a multigroup representation of the cross sections implies a preliminary calculation to take into account self-shielding effects for resonant isotopes. This is generally performed by deterministic lattice codes relying on the collision probability method. Using cross-section probability tables over the whole energy range makes it possible to take self-shielding effects into account directly, and can be employed in both lattice physics and criticality-safety calculations. Several aspects have been thoroughly studied: (1) The consistent computation of probability tables with an energy grid comprising only 295 or 361 groups. The CALENDF moment approach led to probability tables suitable for a Monte Carlo code. (2) The combination of probability table sampling for the energy variable with the delta-tracking rejection technique for the space variable, and its impact on the overall efficiency of the proposed Monte Carlo algorithm.
(3) The derivation of a model for taking into account anisotropic effects of the scattering reaction consistent with the subgroup method. In this study, we generalize the Discrete Angle Technique, already proposed for homogeneous multigroup cross sections, to isotopic cross sections in the form of probability tables. In this technique, the angular density is discretized into probability tables. Similarly to the cross-section case, a moment approach is used to compute the probability tables for the scattering cosine. (4) The introduction of a leakage model based on the B1 fundamental mode approximation. Unlike deterministic lattice packages, most Monte Carlo-based lattice physics codes do not include leakage models, yet the generation of homogenized and condensed group constants (cross sections, diffusion coefficients) requires the critical flux. This project has involved the development of a program within the DRAGON framework, written in Fortran 2003 and wrapped with a driver in C, the GANLIB 5. Choosing Fortran 2003 has permitted the use of some modern features, such as the definition of objects and methods, data encapsulation and polymorphism. The validation of the proposed code has been performed by comparison with other numerical methods: (1) the continuous-energy Monte Carlo method of the SERPENT code; (2) the collision probability (CP) method and the discrete ordinates (SN) method of the DRAGON lattice code; (3) the multigroup Monte Carlo code MORET, coupled with the DRAGON code. Benchmarks used in this work are representative of industrial configurations encountered in reactor and criticality-safety calculations: (1) Pressurized Water Reactor (PWR) cells and assemblies; (2) Canada-Deuterium Uranium Reactor (CANDU-6) clusters; (3) critical experiments from the ICSBEP handbook (International Criticality Safety Benchmark Evaluation Program).
Computing the principal eigenelements of some linear operators using a branching Monte Carlo method
Lejay, Antoine; Maire, Sylvain
2008-12-01
In earlier work, we developed a Monte Carlo method to compute the principal eigenvalue of linear operators, based on the simulation of exit times. In this paper, we generalize this approach by showing how to use a branching method to improve the efficiency of simulating large exit times for the purpose of computing eigenvalues. Furthermore, we show that this new method provides a natural estimation of the first eigenfunction of the adjoint operator. Numerical examples of this method are given for the Laplace operator and a homogeneous neutron transport operator.
Isospin-projected nuclear level densities by the shell model Monte Carlo method
H. Nakada; Y. Alhassid
2008-09-24
We have developed an efficient isospin projection method in the shell model Monte Carlo approach for isospin-conserving Hamiltonians. For isoscalar observables this projection method has the advantage of being exact sample by sample. The isospin projection method allows us to take into account the proper isospin dependence of the nuclear interaction, thus avoiding a sign problem that such an interaction introduces in unprojected calculations. We apply our method in the calculation of the isospin dependence of level densities in the complete $pf+g_{9/2}$ shell. We find that isospin-dependent corrections to the total level density are particularly important for $N \sim Z$ nuclei.
NASA Astrophysics Data System (ADS)
Jing, Hui; Li, Cong; Kuang, Bing; Huang, Meifa; Zhong, Yanru
2012-09-01
Measuring the bow height (sagitta) and the chord length is a common way to determine the diameter of a large workpiece. In computing the diameter of a large workpiece, measurement uncertainty is an important parameter and is always employed to evaluate the reliability of the measurement results. Therefore, it is essential to present reliable methods to evaluate the measurement uncertainty, especially in precise measurement. Because of the low convergence rate and unstable results of the Monte-Carlo (MC) method, the quasi-Monte-Carlo (QMC) method is used to estimate the measurement uncertainty. The QMC method is an improvement of the ordinary MC method that employs highly uniform quasi-random numbers in place of MC's pseudo-random numbers. In the evaluation process, first, highly homogeneous random numbers (quasi-random numbers) are generated from Halton's sequence. These numbers are then transformed into random numbers of the desired distribution, which are used to simulate the measurement errors. By computing the simulation results, the measurement uncertainty can be obtained. An experiment of cylinder diameter measurement and its uncertainty evaluation are given. In the experiment, the Guide to the Expression of Uncertainty in Measurement (GUM) method, the MC method, and the QMC method are validated. The results show that the QMC method has a higher convergence rate and more stable evaluation results than the MC method. Therefore, the QMC method can be applied effectively to evaluate the measurement uncertainty.
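The workflow described in the abstract above, Halton points mapped through the inverse normal CDF and propagated through the diameter formula, can be sketched in Python. The sagitta relation D = h + c²/(4h) is standard chord/bow-height geometry, and every numerical value below is an illustrative assumption, not data from the paper:

```python
from statistics import NormalDist, mean, stdev

ND = NormalDist()  # standard normal, used to turn uniforms into errors

def halton(i, base):
    """Radical-inverse (van der Corput) value of index i >= 1 in `base`."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_diameter_uncertainty(h0, c0, sigma_h, sigma_c, n=4096):
    """Propagate Gaussian errors in bow height h and chord length c
    through the sagitta relation D = h + c^2 / (4 h), using Halton
    points in bases 2 and 3 as the quasi-random input stream."""
    diameters = []
    for i in range(1, n + 1):
        h = h0 + sigma_h * ND.inv_cdf(halton(i, 2))
        c = c0 + sigma_c * ND.inv_cdf(halton(i, 3))
        diameters.append(h + c * c / (4.0 * h))
    return mean(diameters), stdev(diameters)

# Illustrative numbers (mm): a 50 mm bow height and 400 mm chord
# correspond to a nominal diameter of 850 mm.
d_mean, d_std = qmc_diameter_uncertainty(50.0, 400.0, 0.02, 0.05)
```

Because Halton points fill the unit square far more evenly than pseudo-random pairs, the spread estimate stabilizes with fewer samples than a plain MC run, which is the convergence advantage the abstract reports.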
Monte Carlo method for photon heating using temperature-dependent optical properties.
Slade, Adam Broadbent; Aguilar, Guillermo
2015-02-01
The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to error in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems where the temperature will vary greatly, such as in the case of laser-thawing of frozen tissues. A numerical simulation was developed that utilizes the Monte Carlo method for photon transport to simulate the thermal response of a system with temperature-dependent optical and thermal properties. This was done by combining traditional Monte Carlo photon transport with a heat transfer simulation to provide a feedback loop that selects local properties based on current temperatures, for each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogeneous (but not isothermal) material. Validation of the simulation was done by comparison with established Monte Carlo simulations using constant properties, and by comparison with the Beer-Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties vary with temperature. The difference in results between the variable-property and constant-property methods for the representative system of laser-heated silicon can become larger than 100 K. This simulation returns more accurate results of optical irradiation absorption in a material which undergoes a large change in temperature. This increased accuracy leads to better thermal predictions in living tissues and can provide enhanced planning and improved experimental and procedural outcomes. PMID:25488656
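A minimal way to see why constant optical properties bias heating predictions is to compare Beer-Lambert attenuation with a fixed absorption coefficient against one that tracks a temperature profile. The linear μ(T) model and all numbers below are hypothetical, not the paper's silicon data:

```python
import math

def transmitted_fraction(mu_of_T, temps, dz):
    """Beer-Lambert attenuation through a stack of thin slabs whose
    absorption coefficient follows the local temperature."""
    tau = sum(mu_of_T(T) * dz for T in temps)  # total optical depth
    return math.exp(-tau)

# Hypothetical linear temperature dependence of absorption (1/mm):
mu = lambda T: 0.1 * (1.0 + 0.002 * (T - 300.0))

temps = [300.0 + 2.0 * i for i in range(50)]   # ramped temperature profile
f_variable = transmitted_fraction(mu, temps, dz=0.1)
f_constant = math.exp(-mu(300.0) * 0.1 * 50)   # properties frozen at 300 K
# The constant-property model overestimates transmission here, because
# the heated slabs absorb more strongly than the 300 K baseline.
```

The paper's feedback loop generalizes this one-line comparison: at each time step the heat-transfer solver updates the temperature field, and the photon-transport step re-reads the local coefficients before computing path lengths.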
NASA Astrophysics Data System (ADS)
Nakatsuka, Yutaka; Nakajima, Takahito
2012-10-01
A diffusion Monte Carlo (DMC) method for the relativistic zeroth-order regular approximation (ZORA) is proposed. In this scheme, a novel approximate Green's function is derived for the spin-free ZORA Hamiltonian. Several numerical tests on atoms and small molecules showed that by combining with the relativistic cusp-correction scheme, the present approach can include both relativistic and electron-correlation effects simultaneously. The correlation energies recovered by the ZORA-DMC method are comparable with the nonrelativistic DMC results and superior to the coupled cluster singles and doubles with perturbative triples correction results when the correlation-consistent polarized valence triple-zeta Douglas-Kroll basis set is used. For the heavier CuH molecule, the ZORA-DMC estimation of its dissociation energy agrees with the experimental value within the error bar.
Molecular simulation of shocked materials using the reactive Monte Carlo method.
Brennan, John K; Rice, Betsy M
2002-08-01
We demonstrate the applicability of the reactive Monte Carlo (RxMC) simulation method [J. K. Johnson, A. Z. Panagiotopoulos, and K. E. Gubbins, Mol. Phys. 81, 717 (1994); W. R. Smith and B. Tríska, J. Chem. Phys. 100, 3019 (1994)] for calculating the shock Hugoniot properties of a material. The method does not require interaction potentials that simulate bond breaking or bond formation; it requires only the intermolecular potentials and the ideal-gas partition functions for the reactive species that are present. By performing Monte Carlo sampling of forward and reverse reaction steps, the RxMC method provides information on the chemical equilibria states of the shocked material, including the density of the reactive mixture and the mole fractions of the reactive species. We illustrate the methodology for two simple systems (shocked liquid NO and shocked liquid N2), where we find excellent agreement with experimental measurements. The results show that the RxMC methodology provides an important simulation tool capable of testing models used in current detonation theory predictions. Further applications and extensions of the reactive Monte Carlo method are discussed. PMID:12241148
High-order Path Integral Monte Carlo methods for solving strongly correlated fermion problems
NASA Astrophysics Data System (ADS)
Chin, Siu A.
2015-03-01
In solving for the ground state of a strongly correlated many-fermion system, the conventional second-order Path Integral Monte Carlo method is plagued by the sign problem. This is due to the large number of anti-symmetric free-fermion propagators that are needed to extract the square of the ground state wave function at large imaginary time. In this work, I show that optimized fourth-order Path Integral Monte Carlo methods, which use no more than five free-fermion propagators, in conjunction with the Hamiltonian energy estimator, can yield accurate ground state energies for quantum dots with up to 20 polarized electrons. The correlations are directly built in and no explicit wave functions are needed. This work is supported by the Qatar National Research Fund NPRP GRANT #5-674-1-114.
Methods for Monte Carlo simulation of the exospheres of the moon and Mercury
NASA Technical Reports Server (NTRS)
Hodges, R. R., Jr.
1980-01-01
A general form of the integral equation of exospheric transport on moon-like bodies is derived in a form that permits arbitrary specification of time-varying physical processes affecting atom creation and annihilation, atom-regolith collisions, adsorption and desorption, and nonplanetocentric acceleration. Because these processes usually defy analytic representation, the Monte Carlo method of solution of the transport equation, the only viable alternative, is described in detail, with separate discussions of the methods of specification of physical processes as probabilistic functions. Proof of the validity of the Monte Carlo exosphere simulation method is provided in the form of a comparison of analytic and Monte Carlo solutions to three classical, and analytically tractable, exosphere problems. One of the key phenomena in moonlike exosphere simulations, the distribution of velocities of the atoms leaving a regolith, depends mainly on the nature of collisions of free atoms with rocks. It is shown that on the moon and Mercury, elastic collisions of helium atoms with a Maxwellian distribution of vibrating, bound atoms produce a nearly Maxwellian distribution of helium velocities, despite the absence of speeds in excess of escape in the impinging helium velocity distribution.
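The Maxwellian launch velocities discussed above can be sampled with the standard construction of taking the norm of three Gaussian velocity components; the 400 K surface temperature below is an assumed illustrative value, not one taken from the paper:

```python
import math
import random

K_B = 1.380649e-23    # Boltzmann constant, J/K
M_HE = 6.6464731e-27  # mass of helium-4, kg

def maxwell_speed(T, m, rng=random):
    """Speed drawn from a Maxwellian: norm of three Gaussian components."""
    s = math.sqrt(K_B * T / m)  # 1-D thermal velocity spread
    return math.sqrt(sum(rng.gauss(0.0, s) ** 2 for _ in range(3)))

random.seed(1)
T = 400.0  # assumed regolith temperature, K
speeds = [maxwell_speed(T, M_HE) for _ in range(20000)]
v_mean = sum(speeds) / len(speeds)
# Analytic mean speed of a Maxwellian, sqrt(8 k T / (pi m)):
v_mean_analytic = math.sqrt(8.0 * K_B * T / (math.pi * M_HE))
```

Checking the sampled mean speed against the analytic value is the kind of closed-form comparison the paper uses to validate its Monte Carlo machinery on analytically tractable cases.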
Quantum Monte-Carlo method applied to Non-Markovian barrier transmission
G. Hupin; D. Lacroix
2010-01-05
In nuclear fusion and fission, fluctuation and dissipation arise from the coupling of collective degrees of freedom with internal excitations. Close to the barrier, quantum, statistical and non-Markovian effects are all expected to be important. In this work, a new approach to this problem based on quantum Monte-Carlo is presented. The exact dynamics of a system coupled to an environment is replaced by a set of stochastic evolutions of the system density. The quantum Monte-Carlo method is applied to systems with quadratic potentials. Over the whole range of temperatures and couplings, the stochastic method matches the exact evolution, showing that non-Markovian effects can be simulated accurately. A comparison with other theories, such as the Nakajima-Zwanzig and time-convolutionless approaches, shows that only the latter can be competitive, and only if the expansion in the coupling constant is carried at least to fourth order. A systematic study of the inverted-parabola case is made at different temperatures and coupling constants. The asymptotic passing probability is estimated with different approaches, including the Markovian limit. Large differences with the exact result are seen in the latter case, or when only second order in the coupling strength is considered, as is generally assumed in nuclear transport models. In contrast, if fourth order in the coupling or the quantum Monte-Carlo method is used, perfect agreement is obtained.
Level Densities of Heavy Nuclei by the Shell Model Monte Carlo Method
Y. Alhassid; C. Özen; H. Nakada
2013-05-24
The microscopic calculation of nuclear level densities in the presence of correlations is a difficult many-body problem. The shell model Monte Carlo method provides a powerful technique to carry out such calculations using the framework of the configuration-interaction shell model in spaces that are many orders of magnitude larger than spaces that can be treated by conventional methods. We present recent applications of the method to the calculation of level densities and their collective enhancement factors in heavy nuclei. The calculated level densities are in close agreement with experimental data.
NASA Astrophysics Data System (ADS)
Zhong, Zhaopeng; Talamo, Alberto; Gohar, Yousry
2013-07-01
The effective delayed neutron fraction βeff plays an important role in kinetics and static analysis of reactor physics experiments. It is used as a reactivity unit referred to as the "dollar". Usually it is obtained by computer simulation because of the difficulty of measuring it experimentally. In 1965, Keepin proposed a method, widely used in the literature, for the calculation of the effective delayed neutron fraction βeff. This method requires the calculation of the adjoint neutron flux as a weighting function of the phase-space inner products and is easy to implement in deterministic codes. With Monte Carlo codes, the solution of the adjoint neutron transport equation is much more difficult because of the continuous-energy treatment of nuclear data. Consequently, alternative methods, which do not require the explicit calculation of the adjoint neutron flux, have been proposed. In 1997, Bretscher introduced the k-ratio method for calculating the effective delayed neutron fraction; this method is based on calculating the multiplication factor of a nuclear reactor core with and without the contribution of delayed neutrons. The multiplication factor set by the delayed neutrons (the delayed multiplication factor) is obtained as the difference between the total and the prompt multiplication factors. Using Monte Carlo calculations, Bretscher evaluated βeff as the ratio between the delayed and total multiplication factors (hence the method is often referred to as the k-ratio method). In the present work, the k-ratio method is applied with Monte Carlo (MCNPX) and deterministic (PARTISN) codes. In the latter case, the ENDF/B nuclear data library of the fuel isotopes (235U and 238U) has been processed by the NJOY code with and without the delayed neutron data to prepare multi-group WIMSD neutron libraries for the lattice physics code DRAGON, which was used to generate the PARTISN macroscopic cross sections.
More recently, Meulekamp and van der Marck (2006) and Nauchi and Kameyama (2005) proposed new methods that obtain the effective delayed neutron fraction from a single Monte Carlo computer simulation, whereas the k-ratio method requires two criticality calculations. In this paper, the Meulekamp/van der Marck and Nauchi/Kameyama methods are applied for the first time with the MCNPX computer code, and the results obtained by all the different methods are compared.
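Bretscher's k-ratio estimate reduces to simple arithmetic once the two criticality runs are done; the eigenvalues below are hypothetical placeholders, not results from the paper:

```python
def beta_eff_k_ratio(k_total, k_prompt):
    """k-ratio estimate: the delayed multiplication factor is
    k_total - k_prompt, and beta_eff is its ratio to k_total."""
    return (k_total - k_prompt) / k_total

# Hypothetical eigenvalues from runs with and without delayed neutron data:
beta = beta_eff_k_ratio(k_total=1.00000, k_prompt=0.99325)
```

For these eigenvalues the estimate is 675 pcm. Because the two k values come from independent Monte Carlo runs, their statistical uncertainties compound in the difference, which is one motivation for the single-run estimators mentioned above.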
Isospin-projected nuclear level densities by the shell model Monte Carlo method
Nakada, H.; Alhassid, Y.
2008-11-15
We have developed an efficient isospin projection method in the shell model Monte Carlo approach for isospin-conserving Hamiltonians. For isoscalar observables this method has the advantage of being exact sample by sample. It allows us to take into account the proper isospin dependence of the nuclear interaction, thus avoiding a sign problem that such an interaction introduces in unprojected calculations. We apply the method to calculate the isospin dependence of level densities in the complete pf+g_{9/2} shell. We find that isospin-dependent corrections to the total level density are particularly important for N ≈ Z nuclei.
Monte Carlo Methods in Materials Science Based on FLUKA and ROOT
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor
2003-01-01
A comprehensive understanding of mitigation measures for space radiation protection necessarily involves the relevant fields of nuclear physics and particle transport modeling. One method of modeling the interaction of radiation traversing matter is Monte Carlo analysis, a subject that has been evolving since the very advent of nuclear reactors and particle accelerators in experimental physics. Countermeasures for radiation protection from neutrons near nuclear reactors, for example, were an early application, and Monte Carlo methods were quickly adapted to this general field of investigation. The project discussed here is concerned with taking the latest tools and technology in Monte Carlo analysis and adapting them to space applications such as radiation shielding design for spacecraft, as well as investigating how next-generation Monte Carlos can complement the existing analytical methods currently used by NASA. We have chosen to employ the Monte Carlo program known as FLUKA (a legacy acronym based on the German for FLUctuating KAscade) to simulate all of the particle transport, and the CERN-developed graphical-interface object-oriented analysis software called ROOT. One aspect of space radiation analysis for which Monte Carlos are particularly suited is the study of secondary radiation produced as albedos in the vicinity of the structural geometry involved. The broad goal of simulating space radiation transport through the relevant materials with the FLUKA code necessarily requires the addition of the capability to simulate all heavy-ion interactions from 10 MeV/A up to the highest conceivable energies. For all energies above 3 GeV/A the Dual Parton Model (DPM) is currently used, although a possible improvement of the DPMJET event generator for energies of 3-30 GeV/A is being considered. One of the major tasks still facing us is the provision for heavy-ion interactions below 3 GeV/A.
The ROOT interface is being developed in conjunction with the CERN ALICE (A Large Ion Collider Experiment) software team through an adaptation of their existing AliROOT (ALICE Using ROOT) architecture. In order to check our progress against actual data, we have chosen to simulate the ATIC14 (Advanced Thin Ionization Calorimeter) cosmic-ray astrophysics balloon payload as well as neutron fluences in the Mir spacecraft. This paper contains a summary of the status of this project and a roadmap to its successful completion.
Visual improvement for bad handwriting based on Monte-Carlo method
NASA Astrophysics Data System (ADS)
Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua
2014-03-01
A visual improvement algorithm based on Monte Carlo simulation is proposed in this paper to enhance the visual quality of bad handwriting. The improvement process uses a well-designed typeface to optimize the bad handwriting image. A series of linear operators for image transformation is defined to transform the typeface image so that it approaches the handwriting image, and the specific parameters of these linear operators are estimated by the Monte Carlo method. Visual improvement experiments illustrate that the proposed algorithm can effectively enhance the visual quality of a handwriting image while maintaining the original handwriting features, such as tilt, stroke order and drawing direction. The proposed visual improvement algorithm has great potential for application in tablet computers and the mobile Internet to improve the user experience of handwriting.
Quasiclassical-trajectory Monte Carlo methods for collisions with two-electron atoms
Cohen, J.S.
1996-07-01
A quasiclassical-trajectory Monte Carlo (QTMC-EB) model is proposed to extend the classical-trajectory Monte Carlo (CTMC) method to targets having more than one electron. Quasiclassical stability is achieved via constraining potentials that enforce lower energy bounds on the one-electron dynamics. Cross sections for all possible electronic rearrangements (single and double electron transfer, single and double ionization, and transfer ionization) in H⁺ + H, H⁺ + He, He²⁺ + He, and Li³⁺ + He collisions are calculated with this model and with the previously proposed model (QTMC-KW) of Kirschbaum and Wilets [Phys. Rev. A 21, 834 (1980)]. The results are compared with accurate experimental data. The regime of validity for the two-electron targets is found to be similar to that of the usual CTMC model for one-electron targets. © 1996 The American Physical Society.
A study of potential energy curves from the model space quantum Monte Carlo method.
Ohtsuka, Yuhki; Ten-No, Seiichiro
2015-12-01
We report on the first application of the model space quantum Monte Carlo (MSQMC) method to potential energy curves (PECs) for the excited states of C2, N2, and O2 to validate the applicability of the method. A parallel MSQMC code is implemented with the initiator approximation to enable efficient sampling. The PECs of MSQMC for various excited and ionized states are compared with those from the Rydberg-Klein-Rees and full configuration interaction methods. The results indicate the usefulness of MSQMC for obtaining precise PECs over a wide range, obviating problems concerning quasi-degeneracy. PMID:26646869
Inverse trishear modeling of bedding dip data using Markov chain Monte Carlo methods
NASA Astrophysics Data System (ADS)
Oakley, David O. S.; Fisher, Donald M.
2015-11-01
We present a method for fitting trishear models to surface profile data, by restoring bedding dip data and inverting for model parameters using a Markov chain Monte Carlo method. Trishear is a widely-used kinematic model for fault-propagation folds. It lacks an analytic solution, but a variety of data inversion techniques can be used to fit trishear models to data. Where the geometry of an entire folded bed is known, models can be tested by restoring the bed to its pre-folding orientation. When data include bedding attitudes, however, previous approaches have relied on computationally-intensive forward modeling. This paper presents an equation for the rate of change of dip in the trishear zone, which can be used to restore dips directly to their pre-folding values. The resulting error can be used to calculate a probability for each model, which allows solution by Markov chain Monte Carlo methods and inversion of datasets that combine dips and contact locations. These methods are tested using synthetic and real datasets. Results are used to approximate multimodal probability density functions and to estimate uncertainty in model parameters. The relative value of dips and contacts in constraining parameters and the effects of uncertainty in the data are investigated.
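A random-walk Metropolis sampler of the kind used for such inversions can be sketched in a few lines. The toy misfit below (dips scattered about a single ramp angle with known noise) merely stands in for the trishear restoration error and is entirely hypothetical:

```python
import math
import random

def metropolis(log_post, x0, step, n, rng=random):
    """Random-walk Metropolis sampler for a single model parameter."""
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        # Accept uphill moves always, downhill with probability e^(lpp-lp).
        if lpp > lp or rng.random() < math.exp(lpp - lp):
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy stand-in for the restoration misfit: dips scatter about one
# hypothetical ramp angle with Gaussian noise.
random.seed(0)
true_angle, sigma = 25.0, 2.0
dips = [true_angle + random.gauss(0.0, sigma) for _ in range(40)]

def log_post(angle):  # flat prior + Gaussian likelihood
    return -sum((d - angle) ** 2 for d in dips) / (2.0 * sigma ** 2)

chain = metropolis(log_post, x0=10.0, step=1.0, n=5000)
est = sum(chain[1000:]) / len(chain[1000:])  # posterior mean after burn-in
```

In the paper's setting the scalar `angle` becomes the vector of trishear parameters and the misfit comes from restoring dips and contacts, but the accept/reject machinery, burn-in, and posterior summaries work identically, including for the multimodal posteriors the authors report.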
Parsons, Tom
2008-01-01
Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques [e.g., Ellsworth et al., 1999]. In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means [e.g., NIST/SEMATECH, 2006]. For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
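The repeated-draws idea can be sketched as random samples over wide parameter ranges, each scored by the likelihood of a short interval series and then ranked. The lognormal recurrence model, the parameter ranges, and the interval data below are illustrative assumptions only:

```python
import math
import random

def lognorm_logpdf(x, mu, sigma):
    """Log of the lognormal density."""
    return (-math.log(x * sigma * math.sqrt(2.0 * math.pi))
            - (math.log(x) - mu) ** 2 / (2.0 * sigma ** 2))

def rank_recurrence_models(intervals, n_draws=20000, rng=random):
    """Draw recurrence-model parameters from wide ranges and rank them
    by the likelihood of the observed inter-event times."""
    scored = []
    for _ in range(n_draws):
        mean_ri = rng.uniform(50.0, 500.0)  # mean recurrence (years)
        cov = rng.uniform(0.1, 1.0)         # coefficient of variation
        # Convert (mean, cov) to lognormal (mu, sigma):
        sigma = math.sqrt(math.log(1.0 + cov ** 2))
        mu = math.log(mean_ri) - sigma ** 2 / 2.0
        loglike = sum(lognorm_logpdf(t, mu, sigma) for t in intervals)
        scored.append((loglike, mean_ri, cov))
    scored.sort(reverse=True)  # best-fitting parameter draws first
    return scored

random.seed(2)
# Hypothetical short paleoseismic series: six inter-event times in years.
intervals = [140.0, 210.0, 95.0, 180.0, 250.0, 160.0]
best_ll, best_mean, best_cov = rank_recurrence_models(intervals)[0]
```

The full ranked list, not just the top entry, is what feeds the uncertainty assessment: hazard calculations can be repeated over the highest-likelihood draws to see how strongly the short record constrains them.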
Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy
NASA Astrophysics Data System (ADS)
Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui
2014-06-01
The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation times. In MC dose calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to enhance the calculation speed of the MC method for electron-photon transport with high precision and, ultimately, to reduce the accurate radiotherapy dose calculation time on a normal computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for electron-photon coupled transport was presented with focus on two aspects: firstly, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed was increased with a slight reduction of calculation accuracy; secondly, by using a variety of MC calculation acceleration methods, for example, making use of information obtained in previous calculations to avoid repeat simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the MC convergence rate. The fast MC method was tested on many simple physical models and clinical cases, including nasopharyngeal carcinoma, peripheral lung tumor and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System (ARTS) as an MC dose verification module.
A steady-state convergence detection method for Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Karchani, Abolfazl; Ejtehadi, Omid; Myong, Rho Shin
2014-12-01
In the direct simulation Monte Carlo (DSMC) method, excluding microscopic data sampled in the unsteady phase can accelerate convergence and lead to more accurate results in steady-state problems. In this study, a new method for detecting the onset of the steady state, called Probabilistic Automatic Reset Sampling (PARS), is introduced. The new method detects the steady state automatically and resets the sample once statistics-based reset criteria are satisfied. The method is simple and does not need any user-specified inputs. The simulation results show that the proposed strategy works well even when the number of particles inside the domain is constant, which was the main drawback of previous methods.
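The PARS reset criteria themselves are not reproduced in the abstract; as a generic stand-in, the sketch below shows the underlying idea of monitoring a sampled signal and declaring steady state once consecutive window means agree, with the window size and tolerance chosen arbitrarily:

```python
import math
import random
from statistics import mean

def detect_steady_state(signal, window=50, tol=0.05):
    """Return the first index at which consecutive window means agree
    to within a relative tolerance, or None if no onset is found."""
    for i in range(2 * window, len(signal) + 1, window):
        prev = mean(signal[i - 2 * window:i - window])
        curr = mean(signal[i - window:i])
        if prev != 0.0 and abs(curr - prev) / abs(prev) < tol:
            return i
    return None

# Synthetic monitor: exponential transient relaxing onto a steady mean of 1.
random.seed(3)
signal = [1.0 - math.exp(-n / 100.0) + random.gauss(0.0, 0.01)
          for n in range(2000)]
onset = detect_steady_state(signal)
steady_mean = mean(signal[onset:])  # sample only after the detected onset
```

Resetting the sample at `onset` discards the transient that would otherwise bias the steady-state average, which is the payoff the abstract describes.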
Figueira, C; Di Maria, S; Baptista, M; Mendes, M; Madeira, P; Vaz, P
2015-07-01
Computed tomography (CT) is one of the most used techniques in medical diagnosis, and its use has become one of the main sources of exposure of the population to ionising radiation. This work concentrates on paediatric patients, since children exhibit higher radiosensitivity than adults. Nowadays, patient doses are estimated through two standard CT dose index (CTDI) phantoms used as a reference to calculate CTDI volume (CTDIvol) values. This study aims to improve the knowledge about the radiation exposure of children and to better assess the accuracy of the CTDIvol method. The effectiveness of the CTDIvol method for patient dose estimation was investigated through a sensitivity study, taking into account the doses obtained by three methods: CTDIvol measured, CTDIvol simulated with the Monte Carlo (MC) code MCNPX, and the recently proposed Size-Specific Dose Estimate (SSDE) method. In order to assess organ doses, MC simulations were executed with paediatric voxel phantoms. PMID:25883302
Hunter, J. L.; Sutton, T. M.
2013-07-01
In Monte Carlo iterated-fission-source calculations relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially-uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)
Green, P. L.; Worden, K.
2015-01-01
In this paper, the authors outline the general principles behind an approach to Bayesian system identification and highlight the benefits of adopting a Bayesian framework when attempting to identify models of nonlinear dynamical systems in the presence of uncertainty. It is then described how, through a summary of some key algorithms, many of the potential difficulties associated with a Bayesian approach can be overcome through the use of Markov chain Monte Carlo (MCMC) methods. The paper concludes with a case study, where an MCMC algorithm is used to facilitate the Bayesian system identification of a nonlinear dynamical system from experimentally observed acceleration time histories. PMID:26303916
Simulation of light-field camera imaging based on ray splitting Monte Carlo method
NASA Astrophysics Data System (ADS)
Liu, Bin; Yuan, Yuan; Li, Sai; Shuai, Yong; Tan, He-Ping
2015-11-01
As microlens technology matures, studies of structural design and reconstruction algorithm optimization for light-field cameras are increasing. However, few of these studies address numerical physical simulation of the camera, and forward simulation by ray tracing is hampered by its low efficiency. In this paper, we develop a Monte Carlo method (MCM) based on ray splitting and build a physical model of a light-field camera with a microlens array to simulate its imaging and refocusing processes. The model enables simulation of different imaging modalities and will be useful for camera structural design and error analysis system construction.
A Bayesian Monte Carlo Markov Chain Method for the Analysis of GPS Position Time Series
NASA Astrophysics Data System (ADS)
Olivares, German; Teferle, Norman
2013-04-01
Position time series from continuous GPS are an essential tool in many areas of the geosciences and are, for example, used to quantify long-term movements due to processes such as plate tectonics or glacial isostatic adjustment. It is now widely established that the stochastic properties of the time series do not follow a random behavior and this affects parameter estimates and associated uncertainties. Consequently, a comprehensive knowledge of the stochastic character of the position time series is crucial in order to obtain realistic error bounds and for this a number of methods have already been applied successfully. We present a new Bayesian Monte Carlo Markov Chain (MCMC) method to simultaneously estimate the model and the stochastic parameters of the noise in GPS position time series. This method provides a sample of the likelihood function and thereby, using Monte Carlo integration, all parameters and their uncertainties are estimated simultaneously. One advantage of the MCMC method is that the computational time increases linearly with the number of parameters, hence being very suitable for dealing with a high number of parameters. A second advantage is that the properties of the estimator used in this method do not depend on the stationarity of the time series. At least on a theoretical level, no other estimator has been shown to have this feature. Furthermore, the MCMC method provides a means to detect multi-modality of the parameter estimates. We present an evaluation of the new MCMC method through comparison with widely used optimization and empirical methods for the analysis of GPS position time series.
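To illustrate the kind of sampling the abstract describes, the following is a minimal random-walk Metropolis sketch for estimating the trend of a synthetic position time series with white noise. It is an illustrative toy, not the authors' implementation: the linear model, white-noise assumption, step size, and (implicitly flat) priors are all assumptions made here for the example.

```python
import math
import random

random.seed(1)

# Synthetic "position time series": linear trend plus white noise.
true_rate, true_offset, sigma = 2.0, 5.0, 1.0
t = [i / 10.0 for i in range(100)]
y = [true_offset + true_rate * ti + random.gauss(0.0, sigma) for ti in t]

def log_likelihood(rate, offset):
    """Gaussian log-likelihood with known noise level (constant dropped)."""
    return -sum((yi - offset - rate * ti) ** 2
                for ti, yi in zip(t, y)) / (2.0 * sigma ** 2)

def metropolis(n_steps, step=0.05):
    rate, offset = 0.0, y[0]      # crude starting guess
    ll = log_likelihood(rate, offset)
    chain = []
    for _ in range(n_steps):
        cand_rate = rate + random.gauss(0.0, step)
        cand_offset = offset + random.gauss(0.0, step)
        cand_ll = log_likelihood(cand_rate, cand_offset)
        # Accept with probability min(1, exp(cand_ll - ll)); flat priors.
        if random.random() < math.exp(min(0.0, cand_ll - ll)):
            rate, offset, ll = cand_rate, cand_offset, cand_ll
        chain.append((rate, offset))
    return chain

chain = metropolis(20000)
burned = chain[10000:]            # discard burn-in
rate_mean = sum(c[0] for c in burned) / len(burned)
offset_mean = sum(c[1] for c in burned) / len(burned)
```

The posterior sample (here, the post-burn-in chain) also yields uncertainties and exposes multi-modality directly, which is the advantage the abstract emphasizes over point-estimate methods.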
Dynamic Load Balancing for Petascale Quantum Monte Carlo Applications: The Alias Method
Sudheer, C. D.; Krishnan, S.; Srinivasan, Ashok; Kent, Paul R
2013-01-01
Diffusion Monte Carlo is the most accurate widely used Quantum Monte Carlo method for the electronic structure of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency and avoid accumulation of systematic errors on parallel machines. The load balancing step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near-identical computational tasks that require load balancing.
Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method
NASA Astrophysics Data System (ADS)
Sudheer, C. D.; Krishnan, S.; Srinivasan, A.; Kent, P. R. C.
2013-02-01
Diffusion Monte Carlo is a highly accurate Quantum Monte Carlo method for electronic structure calculations of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency on parallel machines. This step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm: a simple renumbering of the MPI ranks based on proximity and a space filling curve significantly improves the MPI Allgather performance. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near identical computational tasks that require load balancing.
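The load balancing algorithm above takes its name from Walker's classic alias-table construction for O(1) sampling from a discrete distribution. As background, here is a minimal sketch of that underlying table construction (Vose's variant); it is illustrative only and unrelated to the authors' MPI load-balancing implementation.

```python
import random

random.seed(0)

def build_alias(probs):
    """Vose's alias method: O(n) table build, O(1) sampling per draw."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]         # donate mass to the small column
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                  # numerical leftovers
        prob[i] = 1.0
    return prob, alias

def draw(prob, alias):
    i = random.randrange(len(prob))          # pick a column uniformly
    return i if random.random() < prob[i] else alias[i]

weights = [0.5, 0.3, 0.1, 0.1]
prob, alias = build_alias(weights)
counts = [0] * len(weights)
for _ in range(100000):
    counts[draw(prob, alias)] += 1
freqs = [c / 100000 for c in counts]
```

Each draw touches exactly one table column, which is the analogue of the "at most one message per process" property highlighted in the abstract.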
NASA Astrophysics Data System (ADS)
Noh, S. J.; Tachikawa, Y.; Shiiba, M.; Kim, S.
2011-10-01
Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on the Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals, depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.
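For readers unfamiliar with the SIR baseline that the LRPF is compared against, the following sketches a sequential importance resampling particle filter on a toy one-dimensional linear-Gaussian state-space model. The model and noise levels are invented for illustration and have nothing to do with the WEP hydrologic model.

```python
import math
import random

random.seed(2)

# Toy 1-D linear-Gaussian state-space model:
#   x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q^2)   (dynamics)
#   y_t = x_t + v_t,          v_t ~ N(0, r^2)   (observation)
a, q, r = 0.8, 0.5, 0.5
T, N = 50, 1000

x, x_true, ys = 0.0, [], []
for _ in range(T):
    x = a * x + random.gauss(0.0, q)
    x_true.append(x)
    ys.append(x + random.gauss(0.0, r))

def sir_filter(ys):
    particles = [random.gauss(0.0, 1.0) for _ in range(N)]
    estimates = []
    for y in ys:
        # Predict: propagate each particle through the dynamics.
        particles = [a * p + random.gauss(0.0, q) for p in particles]
        # Weight by the Gaussian observation likelihood.
        w = [math.exp(-(y - p) ** 2 / (2.0 * r * r)) for p in particles]
        total = sum(w)
        w = [wi / total for wi in w]
        estimates.append(sum(wi * p for wi, p in zip(w, particles)))
        # Resample to counter weight degeneracy (multinomial resampling).
        particles = random.choices(particles, weights=w, k=N)
    return estimates

est = sir_filter(ys)
rmse = math.sqrt(sum((e - xt) ** 2 for e, xt in zip(est, x_true)) / T)
```

The repeated resampling step is exactly where the sample impoverishment discussed in the abstract arises: after many steps the particle set can collapse onto a few distinct values, which the LRPF's regularization move is designed to counter.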
The applicability of certain Monte Carlo methods to the analysis of interacting polymers
Krapp, D.M. Jr.
1998-05-01
The authors consider polymers, modeled as self-avoiding walks with interactions on a hexagonal lattice, and examine the applicability of certain Monte Carlo methods for estimating their mean properties at equilibrium. Specifically, the authors use the pivoting algorithm of Madras and Sokal and Metropolis rejection to locate the phase transition, which is known to occur at β_crit ≈ 0.99, and to recalculate the known value of the critical exponent ν ≈ 0.58 of the system for β = β_crit. Although the pivoting-Metropolis algorithm works well for short walks (N < 300), for larger N the Metropolis criterion combined with the self-avoidance constraint leads to an unacceptably small acceptance fraction. In addition, the algorithm becomes effectively non-ergodic, getting trapped in valleys whose centers are local energy minima in phase space, leading to convergence towards different values of ν. The authors use a variety of tools, e.g. entropy estimation and histograms, to improve the results for large N, but they are only of limited effectiveness. Their estimate of β_crit using smaller values of N is 1.01 ± 0.01, and the estimate for ν at this value of β is 0.59 ± 0.005. They conclude that even a seemingly simple system and a Monte Carlo algorithm which satisfies, in principle, ergodicity and detailed balance conditions, can in practice fail to sample phase space accurately and thus not allow accurate estimations of thermal averages. This should serve as a warning to people who use Monte Carlo methods in complicated polymer folding calculations. The structure of the phase space combined with the algorithm itself can lead to surprising behavior, and simply increasing the number of samples in the calculation does not necessarily lead to more accurate results.
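A minimal sketch of the pivot move itself may help; here it is shown for the athermal (β = 0) case on a square lattice rather than the interacting hexagonal-lattice system studied by the author, so the Metropolis test reduces to the self-avoidance check.

```python
import random

random.seed(5)

# Lattice rotations about the origin (90, 180, 270 degrees).
ROTATIONS = [
    lambda x, y: (y, -x),
    lambda x, y: (-x, -y),
    lambda x, y: (-y, x),
]

def pivot(walk):
    """One pivot attempt; returns the new walk, or the old one if rejected.
    At beta = 0 the Metropolis test is just the self-avoidance check."""
    k = random.randrange(1, len(walk) - 1)   # pivot site (not an endpoint)
    px, py = walk[k]
    rot = random.choice(ROTATIONS)
    new_tail = [(px + rx, py + ry)
                for rx, ry in (rot(x - px, y - py) for x, y in walk[k + 1:])]
    candidate = walk[:k + 1] + new_tail
    if len(set(candidate)) == len(candidate):  # still self-avoiding?
        return candidate
    return walk

# Equilibrate a 30-step SAW from a straight rod, then measure <R^2>.
N = 30
walk = [(i, 0) for i in range(N + 1)]
r2 = []
for sweep in range(20000):
    walk = pivot(walk)
    if sweep >= 2000:
        dx = walk[-1][0] - walk[0][0]
        dy = walk[-1][1] - walk[0][1]
        r2.append(dx * dx + dy * dy)
mean_r2 = sum(r2) / len(r2)
```

With interactions switched on (β > 0), the Metropolis factor exp(−β ΔE) enters the acceptance test, and it is the combination of that factor with the self-avoidance rejection that produces the small acceptance fractions and trapping the abstract warns about.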
Simulating rotationally inelastic collisions using a Direct Simulation Monte Carlo method
Schullian, O; Vaeck, N; van der Avoird, A; Heazlewood, B R; Rennick, C J; Softley, T P
2015-01-01
A new approach to simulating rotational cooling using a direct simulation Monte Carlo (DSMC) method is described and applied to the rotational cooling of ammonia seeded into a helium supersonic jet. The method makes use of ab initio rotational state changing cross sections calculated as a function of collision energy. Each particle in the DSMC simulations is labelled with a vector of rotational populations that evolves with time. Transfer of energy into translation is calculated from the mean energy transfer for this population at the specified collision energy. The simulations are compared with a continuum model for the on-axis density, temperature and velocity; rotational temperature as a function of distance from the nozzle is in accord with expectations from experimental measurements. The method could be applied to other types of gas mixture dynamics under non-uniform conditions, such as buffer gas cooling of NH$_3$ by He.
An asymptotic preserving Monte Carlo method for the multispecies Boltzmann equation
NASA Astrophysics Data System (ADS)
Zhang, Bin; Liu, Hong; Jin, Shi
2016-01-01
An asymptotic preserving (AP) scheme is efficient in solving multiscale kinetic equations with a wide range of the Knudsen number. In this paper, we generalize the asymptotic preserving Monte Carlo method (AP-DSMC) developed in [25] to the multispecies Boltzmann equation. This method is based on the successive penalty method [26] originated from the BGK-penalization-based AP scheme developed in [7]. For the multispecies Boltzmann equation, the penalizing Maxwellian should use the unified Maxwellian as suggested in [12]. We give the details of AP-DSMC for the multispecies Boltzmann equation, show its AP property, and verify through several numerical examples that the scheme can allow a time step much larger than the mean free time, thus making it much more efficient for flows with possibly small Knudsen numbers than the classical DSMC.
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters have been selected implementing the F-test evaluation and design of experiments, and then the incomplete fourth-order polynomial response surface model (RSM) has been developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational burden and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of parameters are estimated synchronously by minimizing the weighted objective function through a hybrid of particle-swarm and Nelder-Mead simplex optimization methods, thus achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
Microscopic nuclear level densities by the shell model Monte Carlo method
Y. Alhassid; G. F. Bertsch; C. N. Gilbreth; H. Nakada; C. Özen
2016-01-01
The configuration-interaction shell model approach provides an attractive framework for the calculation of nuclear level densities in the presence of correlations, but the large dimensionality of the model space has hindered its application in mid-mass and heavy nuclei. The shell model Monte Carlo (SMMC) method permits calculations in model spaces that are many orders of magnitude larger than spaces that can be treated by conventional diagonalization methods. We discuss recent progress in the SMMC approach to level densities, and in particular the calculation of level densities in heavy nuclei. We calculate the distribution of the axial quadrupole operator in the laboratory frame at finite temperature and demonstrate that it is a model-independent signature of deformation in the rotationally invariant framework of the shell model. We propose a method to use these distributions for calculating level densities as a function of intrinsic deformation.
Uncertainty analysis using Monte Carlo method in the measurement of phase by ESPI
Anguiano Morales, Marcelino; Martinez, Amalia; Rayas, J. A.; Cordero, Raul R.
2008-04-15
A method for simultaneously measuring whole-field in-plane displacements by using optical fiber, based on the dual-beam illumination principle of electronic speckle pattern interferometry (ESPI), is presented in this paper. A set of single-mode optical fibers and a beamsplitter are employed to split the laser beam into four beams of equal intensity. One pair of fibers is utilized to illuminate the sample in the horizontal plane, so it is sensitive only to horizontal in-plane displacement. Another pair of optical fibers is set to be sensitive only to vertical in-plane displacement. Each pair of optical fibers differs in length to avoid unwanted interference. By means of a Fourier-transform method of fringe-pattern analysis (the Takeda method), we can obtain quantitative data on the whole-field displacements. We found the uncertainty associated with the phases by means of a Monte Carlo-based technique.
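A Monte Carlo evaluation of measurement uncertainty of this kind propagates assumed input distributions through the measurement model and takes the standard deviation of the output. The displacement model and every input uncertainty below are illustrative assumptions for a symmetric dual-beam geometry, not values from the paper.

```python
import math
import random

random.seed(3)

# Nominal inputs and assumed standard uncertainties (illustrative values):
lam, u_lam = 632.8e-9, 0.1e-9                           # laser wavelength (m)
theta, u_theta = math.radians(30.0), math.radians(0.1)  # illumination half-angle
phi, u_phi = 2.5, 0.05                                  # unwrapped phase (rad)

def displacement(lam_, theta_, phi_):
    """In-plane displacement for symmetric dual-beam illumination."""
    return lam_ * phi_ / (4.0 * math.pi * math.sin(theta_))

# Draw the inputs from their assumed (normal) distributions and
# propagate each draw through the measurement model.
N = 100000
draws = [displacement(random.gauss(lam, u_lam),
                      random.gauss(theta, u_theta),
                      random.gauss(phi, u_phi))
         for _ in range(N)]
mean_d = sum(draws) / N
u_d = math.sqrt(sum((d - mean_d) ** 2 for d in draws) / (N - 1))
```

Unlike first-order (GUM-style) propagation, the Monte Carlo approach needs no sensitivity coefficients and remains valid when the model is strongly nonlinear in the inputs.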
Applications of a Monte Carlo whole-core microscopic depletion method
Hutton, J.L.; Butement, A.W.; Watt, S.; Shadbolt, R.D.
1995-12-31
In the WIMS-6 (Ref. 1) reactor physics program scheme a three-dimensional microscopic depletion method has been developed using Monte Carlo fluxes. Together with microscopic cross sections, these give nuclide reaction rates, which are used to solve nuclide depletion equations for each region. An extension of the method, enabling rapid whole-core calculations, has been implemented in the long-established companion code MONK5W. Predictions at successive depletion time steps are based on a calculational route where both geometry and cross sections are accurately represented, providing a reliable and independent approach for benchmarking other methods. Newly developed tracking and storage procedures in MONK5W enable whole core burnup modeling on a desktop computer. Theory and applications are presented in this paper.
Efficient Markov chain Monte Carlo methods for decoding neural spike trains.
Ahmadian, Yashar; Pillow, Jonathan W; Paninski, Liam
2011-01-01
Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed into the spike trains of a group of neurons. The form of the GLM likelihood ensures that the posterior distribution over the stimuli that caused an observed set of spike trains is log concave so long as the prior is. This allows the maximum a posteriori (MAP) stimulus estimate to be obtained using efficient optimization algorithms. Unfortunately, the MAP estimate can have a relatively large average error when the posterior is highly nongaussian. Here we compare several Markov chain Monte Carlo (MCMC) algorithms that allow for the calculation of general Bayesian estimators involving posterior expectations (conditional on model parameters). An efficient version of the hybrid Monte Carlo (HMC) algorithm was significantly superior to other MCMC methods for gaussian priors. When the prior distribution has sharp edges and corners, on the other hand, the "hit-and-run" algorithm performed better than other MCMC methods. Using these algorithms, we show that for this latter class of priors, the posterior mean estimate can have a considerably lower average error than MAP, whereas for gaussian priors, the two estimators have roughly equal efficiency. We also address the application of MCMC methods for extracting nonmarginal properties of the posterior distribution. 
For example, by using MCMC to calculate the mutual information between the stimulus and response, we verify the validity of a computationally efficient Laplace approximation to this quantity for gaussian priors in a wide range of model parameters; this makes direct model-based computation of the mutual information tractable even in the case of large observed neural populations, where methods based on binning the spike train fail. Finally, we consider the effect of uncertainty in the GLM parameters on the posterior estimators. PMID:20964539
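A minimal sketch of a single-variable HMC sampler of the kind compared in the paper may be useful; it targets a standard Gaussian as a stand-in for a log-concave GLM posterior, and the step size and trajectory length are illustrative choices, not the authors' tuned values.

```python
import math
import random

random.seed(6)

def grad_log_p(x):
    """Gradient of log p(x) for a standard Gaussian target."""
    return -x

def hmc_step(x, eps=0.2, n_leap=10):
    p = random.gauss(0.0, 1.0)               # fresh momentum each step
    x_new, p_new = x, p
    # Leapfrog integration of the Hamiltonian dynamics.
    p_new += 0.5 * eps * grad_log_p(x_new)
    for _ in range(n_leap - 1):
        x_new += eps * p_new
        p_new += eps * grad_log_p(x_new)
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_log_p(x_new)
    # Metropolis correction for the integration error.
    h_old = 0.5 * x * x + 0.5 * p * p
    h_new = 0.5 * x_new * x_new + 0.5 * p_new * p_new
    return x_new if random.random() < math.exp(min(0.0, h_old - h_new)) else x

x, samples = 0.0, []
for i in range(20000):
    x = hmc_step(x)
    if i >= 2000:                            # discard burn-in
        samples.append(x)
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean * mean
```

The gradient-guided trajectories are what let HMC take large, rarely rejected steps on smooth log-concave posteriors; for priors with sharp edges the gradient is uninformative at the boundary, which is consistent with the abstract's finding that hit-and-run wins in that regime.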
Investigation of a New Monte Carlo Method for the Transitional Gas Flow
Luo, X.; Day, Chr.
2011-05-20
The Direct Simulation Monte Carlo method (DSMC) is well developed for rarefied gas flow in transition flow regime when 0.01
Adapting phase-switch Monte Carlo method for flexible organic molecules
NASA Astrophysics Data System (ADS)
Bridgwater, Sally; Quigley, David
2014-03-01
The role of cholesterol in lipid bilayers has been widely studied via molecular simulation; however, there has been relatively little work on crystalline cholesterol in biological environments. Recent work has linked the crystallisation of cholesterol in the body with heart attacks and strokes. Any attempt to model this process will require new models and advanced sampling methods to capture and quantify the subtle polymorphism of solid cholesterol, in which two crystalline phases are separated by a phase transition close to body temperature. To this end, we have adapted phase-switch Monte Carlo for use with flexible molecules, to calculate the free energy between crystal polymorphs to a high degree of accuracy. The method samples an order parameter that divides the displacement space of the N molecules into regions energetically favourable for each polymorph, and is traversed using biased Monte Carlo. Results for a simple model of butane will be presented, demonstrating that conformational flexibility can be correctly incorporated within a phase-switching scheme. Extension to a coarse-grained model of cholesterol and the resulting free energies will be discussed.
Yuki Norizoe; Toshihiro Kawakatsu
2011-06-08
Metastable structures in macromolecular and colloidal systems are non-equilibrium states that often have long lifetimes and cause difficulties in simulating equilibrium. In order to escape from long-lived metastable states, we propose a newly devised method: molecular Monte Carlo simulation of systems connected to three reservoirs, of chemical potential $\mu$, pressure $P$, and temperature $T$. One of these reservoirs is adjusted to satisfy the thermodynamic equilibrium condition according to the Gibbs-Duhem equation, so that this adjusted third reservoir does not thermodynamically affect the phases and states. The additional degrees of freedom, i.e. the system volume $V$ and the number of particles $N$, reduce the kinetic barriers of non-equilibrium states and facilitate quick equilibration. We show that globally anisotropic, defect-free ordered structures, e.g. string-like colloidal assemblies, are obtained via our method.
Analysis of vibrational-translational energy transfer using the direct simulation Monte Carlo method
NASA Technical Reports Server (NTRS)
Boyd, Iain D.
1991-01-01
A new model is proposed for energy transfer between the vibrational and translational modes for use in the direct simulation Monte Carlo method (DSMC). The model modifies the Landau-Teller theory for a harmonic oscillator and the rate transition is related to an experimental correlation for the vibrational relaxation time. Assessment of the model is made with respect to three different computations: relaxation in a heat bath, a one-dimensional shock wave, and hypersonic flow over a two-dimensional wedge. These studies verify that the model achieves detailed balance, and excellent agreement with experimental data is obtained in the shock wave calculation. The wedge flow computation reveals that the usual phenomenological method for simulating vibrational nonequilibrium in the DSMC technique predicts much higher vibrational temperatures in the wake region.
The Linked Neighbour List (LNL) method for fast off-lattice Monte Carlo simulations of fluids
NASA Astrophysics Data System (ADS)
Mazzeo, M. D.; Ricci, M.; Zannoni, C.
2010-03-01
We present a new algorithm, called linked neighbour list (LNL), useful to substantially speed up off-lattice Monte Carlo simulations of fluids by avoiding the computation of the molecular energy before every attempted move. We introduce a few variants of the LNL method targeted to minimise memory footprint or augment memory coherence and cache utilisation. Additionally, we present a few algorithms which drastically accelerate neighbour finding. We test our methods on the simulation of a dense off-lattice Gay-Berne fluid subjected to periodic boundary conditions observing a speedup factor of about 2.5 with respect to a well-coded implementation based on a conventional link-cell. We provide several implementation details of the different key data structures and algorithms used in this work.
Monte Carlo methods for localization of cones given multielectrode retinal ganglion cell recordings
Sadeghi, K.; Gauthier, J.L.; Field, G.D.; Greschner, M.; Agne, M.; Chichilnisky, E.J.; Paninski, L.
2013-01-01
It has recently become possible to identify cone photoreceptors in primate retina from multi-electrode recordings of ganglion cell spiking driven by visual stimuli of sufficiently high spatial resolution. In this paper we present a statistical approach to the problem of identifying the number, locations, and color types of the cones observed in this type of experiment. We develop an adaptive Markov Chain Monte Carlo (MCMC) method that explores the space of cone configurations, using a Linear-Nonlinear-Poisson (LNP) encoding model of ganglion cell spiking output, while analytically integrating out the functional weights between cones and ganglion cells. This method provides information about our posterior certainty about the inferred cone properties, and additionally leads to improvements in both the speed and quality of the inferred cone maps, compared to earlier “greedy” computational approaches. PMID:23194406
Calculations of alloy phases with a direct Monte-Carlo method
Faulkner, J.S.; Wang, Yang; Horvath, E.A.; Stocks, G.M.
1994-09-01
A method for calculating the boundaries that describe solid-solid phase transformations in the phase diagrams of alloys is described. The method is first-principles in the sense that the only input is the atomic numbers of the constituents. It proceeds from the observation that the crux of the Monte-Carlo method for obtaining the equilibrium distribution of atoms in an alloy is a calculation of the energy required to replace an A atom on site i with a B atom when the configuration of the atoms on the neighboring sites, κ, is specified: ΔH_κ(A→B) = E_B^κ − E_A^κ. Normally, this energy difference is obtained by introducing interatomic potentials, v_ij, into an Ising Hamiltonian, but the authors calculate it using the embedded cluster method (ECM). In the ECM an A or B atom is placed at the center of a cluster of atoms with the specified configuration κ, and the atoms on all the other sites in the alloy are simulated by the effective scattering matrix obtained from the coherent potential approximation. The interchange energy is calculated directly from the electronic structure of the cluster. The table of ΔH_κ(A→B) values for all configurations κ and several alloy concentrations is used in a Monte Carlo calculation that predicts the phase of the alloy at any temperature and concentration. The detailed shape of the miscibility gaps in the palladium-rhodium and copper-nickel alloy systems are shown.
Hybrid Monte Carlo/Deterministic Methods for Accelerating Active Interrogation Modeling
Peplow, Douglas E.; Miller, Thomas Martin; Patton, Bruce W; Wagner, John C
2013-01-01
The potential for smuggling special nuclear material (SNM) into the United States is a major concern to homeland security, so federal agencies are investigating a variety of preventive measures, including detection and interdiction of SNM during transport. One approach for SNM detection, called active interrogation, uses a radiation source, such as a beam of neutrons or photons, to scan cargo containers and detect the products of induced fissions. In realistic cargo transport scenarios, the process of inducing and detecting fissions in SNM is difficult due to the presence of various and potentially thick materials between the radiation source and the SNM, and the practical limitations on radiation source strength and detection capabilities. Therefore, computer simulations are being used, along with experimental measurements, in efforts to design effective active interrogation detection systems. The computer simulations mostly consist of simulating radiation transport from the source to the detector region(s). Although the Monte Carlo method is predominantly used for these simulations, difficulties persist related to calculating statistically meaningful detector responses in practical computing times, thereby limiting their usefulness for design and evaluation of practical active interrogation systems. In previous work, the benefits of hybrid methods that use the results of approximate deterministic transport calculations to accelerate high-fidelity Monte Carlo simulations have been demonstrated for source-detector type problems. In this work, the hybrid methods are applied and evaluated for three example active interrogation problems. Additionally, a new approach is presented that uses multiple goal-based importance functions depending on a particle's relevance to the ultimate goal of the simulation. Results from the examples demonstrate that the application of hybrid methods to active interrogation problems dramatically increases their calculational efficiency.
Electronic structure of transition metal and f-electron oxides by quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Mitas, L.; Hu, S.; Kolorenc, J.
2012-12-01
We report on many-body quantum Monte Carlo (QMC) calculations of the electronic structure of systems with strong correlation effects. These methods have been applied to ambient and high-pressure transition metal oxides and, very recently, to selected f-electron oxides such as the mineral thorianite (ThO2). QMC methods enabled us to calculate equilibrium characteristics such as cohesion, equilibrium lattice constants, bulk moduli, and electronic gaps in excellent agreement with experiment without any non-variational parameters. In addition, for selected cases, the equations of state were calculated as well. The calculations were carried out using state-of-the-art twist-averaged sampling of the Brillouin zone, small-core Dirac-Fock pseudopotentials and one-particle orbitals from hybrid DFT functionals with varying weight of the exact exchange. This enabled us to build high-accuracy Slater-Jastrow explicitly correlated wavefunctions. In particular, we have employed optimization of the weight of the exact exchange in the B3LYP and PBE0 functionals to minimize the fixed-node error in the diffusion Monte Carlo calculations. Instead of empirical fitting, we therefore use the variational and explicitly many-body QMC method to find the value of the optimal weight, which falls between 15 and 30%. This finding is further supported by recent calculations of transition metal-organic systems such as transition metal-porphyrins and others, thus showing a very wide range of applicability. The calculations of ThO2 appear to follow the same pattern and enabled us to reproduce very well the experimental cohesion and the very large electronic gap. In addition, we have made important progress in the explicit treatment of the spin-orbit interaction, which has so far been neglected in QMC calculations. Our studies illustrate the remarkable capabilities of QMC methods for strongly correlated solid systems.
Gong, Xingchu; Li, Yao; Chen, Huali; Qu, Haibin
2015-01-01
A design space approach was applied to optimize the extraction process of Danhong injection. Dry matter yield and the yields of five active ingredients were selected as process critical quality attributes (CQAs). Extraction number, extraction time, and the mass ratio of water and material (W/M ratio) were selected as critical process parameters (CPPs). Quadratic models between CPPs and CQAs were developed with determination coefficients higher than 0.94. Active ingredient yields and dry matter yield increased as the extraction number increased. Monte-Carlo simulation with models established using a stepwise regression method was applied to calculate the probability-based design space. Step length showed little effect on the calculation results. Higher simulation number led to results with lower dispersion. Data generated in a Monte Carlo simulation following a normal distribution led to a design space with a smaller size. An optimized calculation condition was obtained with 10000 simulation times, 0.01 calculation step length, a significance level value of 0.35 for adding or removing terms in a stepwise regression, and a normal distribution for data generation. The design space with a probability higher than 0.95 to attain the CQA criteria was calculated and verified successfully. Normal operating ranges of 8.2-10 g/g of W/M ratio, 1.25-1.63 h of extraction time, and two extractions were recommended. The optimized calculation conditions can conveniently be used in design space development for other pharmaceutical processes. PMID:26020778
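The probability-based design space calculation described above can be sketched as a Monte Carlo attainment-probability estimate. The quadratic model coefficients, noise levels, and acceptance criterion below are invented for illustration; they are not the fitted Danhong-extraction models from the paper.

```python
import random

random.seed(4)

# Hypothetical quadratic CQA model (coefficients invented for illustration).
def yield_model(time_h, wm_ratio):
    return (20.0 + 8.0 * time_h + 1.5 * wm_ratio
            - 2.0 * time_h ** 2 - 0.05 * wm_ratio ** 2)

def attainment_probability(time_h, wm_ratio, criterion=35.0, n_sim=10000):
    """Estimate P(CQA >= criterion) under assumed input/model uncertainty."""
    hits = 0
    for _ in range(n_sim):
        t = random.gauss(time_h, 0.05)    # process variability in time
        w = random.gauss(wm_ratio, 0.2)   # process variability in W/M ratio
        y = yield_model(t, w) + random.gauss(0.0, 0.5)  # model uncertainty
        if y >= criterion:
            hits += 1
    return hits / n_sim

p_inside = attainment_probability(1.5, 9.0)   # candidate operating point
p_outside = attainment_probability(0.5, 9.0)  # point outside the design space
```

Evaluating the attainment probability on a grid of operating points and keeping those above the chosen threshold (0.95 in the paper) traces out the probability-based design space.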
Bianchini, G.; Burgio, N.; Carta, M.; Peluso, V.; Fabrizio, V.; Ricci, L.
2012-07-01
The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment, a modified layout of the original thermal VENUS critical facility is coupled to an accelerator, built by the French agency CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by deuteron bombardment of a tritium target. The modified layout of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation, by deterministic (the French code ERANOS) and Monte Carlo (the US code MCNPX) calculations, of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)
Nonequilibrium hypersonic flows simulations with asymptotic-preserving Monte Carlo methods
NASA Astrophysics Data System (ADS)
Ren, Wei; Liu, Hong; Jin, Shi
2014-12-01
In rarefied gas dynamics, the DSMC method is one of the most popular numerical tools. It performs satisfactorily in simulating hypersonic flows surrounding re-entry vehicles and micro-/nano-flows. However, the computational cost is expensive, especially as Kn → 0. Even for flows in the near-continuum regime, pure DSMC simulations require considerable computational effort for most cases. Although several DSMC/NS hybrid methods have been proposed to deal with this, those methods still suffer from the boundary treatment, which may cause nonphysical solutions. Filbet and Jin [1] proposed a framework of new numerical methods for the Boltzmann equation, called asymptotic-preserving (AP) schemes, whose computational costs remain affordable as Kn → 0. Recently, Ren et al. [2] realized the AP schemes with Monte Carlo methods (AP-DSMC), which have better performance than counterpart methods. In this paper, AP-DSMC is applied to simulating nonequilibrium hypersonic flows. Several numerical results are computed and analyzed to study the efficiency and capability of capturing complicated flow characteristics.
Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
NASA Astrophysics Data System (ADS)
Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.
2013-12-01
We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.
A modular method to handle multiple time-dependent quantities in Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Shin, J.; Perl, J.; Schümann, J.; Paganetti, H.; Faddegon, B. A.
2012-06-01
A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call ‘Time Features’. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc, takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), and then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel accompanied by beam current modulation produces a spread-out Bragg peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method.
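A minimal sketch of the "Time Feature" idea, using hypothetical class and parameter names (the actual TOPAS grammar differs): a Sequence supplies time values either sequentially at equal increments or randomly from a uniform distribution, and each time-dependent quantity computes its current value from the time it is handed.

```python
import random

class Sequence:
    """Samples time values sequentially or uniformly at random."""
    def __init__(self, t_end, n_steps, mode="sequential"):
        self.t_end, self.n, self.mode = t_end, n_steps, mode

    def times(self):
        if self.mode == "sequential":              # equal increments
            return [i * self.t_end / self.n for i in range(self.n)]
        return [random.uniform(0.0, self.t_end)    # continuous-in-time sampling
                for _ in range(self.n)]

class LinearFeature:
    """A quantity varying linearly in time, e.g. a water-column position."""
    def __init__(self, start, rate):
        self.start, self.rate = start, rate
    def value(self, t):
        return self.start + self.rate * t

class PeriodicFeature:
    """A cyclic quantity, e.g. the angle of a spinning modulator wheel."""
    def __init__(self, period):
        self.period = period
    def value(self, t):
        return 360.0 * (t % self.period) / self.period

# Multiple time-dependent quantities evaluated in a single pass over one
# Sequence, which is the modularity the abstract describes.
seq = Sequence(t_end=1.0, n_steps=4)
wheel = PeriodicFeature(period=0.5)
column = LinearFeature(start=10.0, rate=-2.0)
history = [(t, wheel.value(t), column.value(t)) for t in seq.times()]
```

Because each quantity only consumes a time value, adding another time-dependent quantity never requires a second simulation pass.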
Forwards and Backwards Modelling of Ashfall Hazards in New Zealand by Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Hurst, T.; Smith, W. D.; Bibby, H. M.
2003-12-01
We have developed a technique for quantifying the probability of particular thicknesses of airfall ash from a volcanic eruption at any given site, using Monte Carlo methods, for hazards planning and insurance purposes. We use an established program (ASHFALL) to model individual eruptions, where the likely thickness of ash deposited at selected sites depends on the location of the volcano, eruptive volume, column height and ash size, and the wind conditions. A Monte Carlo formulation then allows us to simulate the variations in eruptive volume and in wind conditions by analysing repeat eruptions, each time allowing the parameters to vary randomly according to known or assumed distributions. Actual wind velocity profiles are used, with randomness included by selection of a starting date. We show how this method can handle the effects of multiple volcanic sources by aggregation, each source with its own characteristics. This follows a similar procedure which we have used for earthquake hazard assessment. The result is estimates of the frequency with which any given depth of ash is likely to be deposited at the selected site, accounting for all volcanoes that might affect it. These numbers are expressed as annual probabilities or as mean return periods. We can also use this method for obtaining an estimate of how often and how large the eruptions from a particular volcano have been. Results from ash cores in Auckland can give useful bounds for the likely total volumes erupted from the volcano Mt Egmont/Mt Taranaki, 280 km away, during the last 140,000 years, information difficult to obtain from local tephra stratigraphy.
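The aggregation over repeated eruptions and multiple volcanic sources can be illustrated with a toy Monte Carlo; all distributions, rates, and the thickness conversion below are invented for illustration (ASHFALL itself models deposition in far more detail).

```python
import random

# One simulated eruption: draw an eruptive volume and a crude wind factor,
# convert to an ash thickness at the site (all numbers illustrative).
def simulate_thickness():
    volume = random.lognormvariate(0.0, 1.0)   # eruptive volume, km^3
    wind_factor = random.uniform(0.2, 1.0)     # crude wind dilution toward site
    return 5.0 * volume * wind_factor          # deposited thickness, mm

def exceedance_probability(threshold_mm, annual_rate=0.01, n_trials=20000):
    """Annual P(ash depth > threshold) = eruption rate x P(exceed | eruption)."""
    exceed = sum(simulate_thickness() > threshold_mm for _ in range(n_trials))
    return annual_rate * (exceed / n_trials)

# Aggregate several sources, each with its own eruption rate, as the paper
# does when accounting for all volcanoes that might affect the site.
sources = [{"rate": 0.01}, {"rate": 0.002}]
total_annual_prob = sum(exceedance_probability(10.0, s["rate"])
                        for s in sources)
```

The reciprocal of `total_annual_prob` would be the mean return period for 10 mm of ash at the site.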
Asselineau, Charles-Alexis; Zapata, Jose; Pye, John
2015-06-01
A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified at a moderate computational cost. PMID:26072868
NASA Astrophysics Data System (ADS)
Takoudis, G.; Xanthos, S.; Clouvas, A.; Potiriadis, C.
2010-02-01
Portal monitoring radiation detectors are commonly used by steel industries to probe for and detect radioactive contamination in scrap metal. These portal monitors typically consist of polystyrene or polyvinyltoluene (PVT) plastic scintillating detectors, one or more photomultiplier tubes (PMT), an electronic circuit, a controller that handles data output and manipulation linking the system to a display or a computer with appropriate software and, usually, a light guide. Such a portal used by the steel industry was opened and all principal materials were simulated using a Monte Carlo simulation tool (MCNP4C2). Various source-detector configurations were simulated and validated by comparison with corresponding measurements. Subsequently, an experiment with a uniform cargo, along with two sets of experiments with different scrap loads and radioactive sources (137Cs, 152Eu), were performed and simulated. Simulated and measured results suggested that the nature of the scrap is crucial when simulating scrap load-detector experiments. Using the same simulation configuration, a series of runs was performed in order to estimate minimum alarm activities for 137Cs, 60Co and 192Ir sources for various simulated scrap densities. The minimum alarm activities, as well as the positions in which they were recorded, are presented and discussed.
NASA Astrophysics Data System (ADS)
Yoo, Hongki; Kang, Dong-Kyun; Lee, SeungWoo; Lee, Junhee; Gweon, Dae-Gab
2004-07-01
Errors can cause serious loss of performance in a precision machine system. In this paper, we propose a method for allocating the alignment tolerances of the components and apply it to confocal scanning microscopy (CSM) to obtain the optimal tolerances. CSM uses a confocal aperture, which blocks out-of-focus information. Thus, it provides images with superior resolution and has the unique property of optical sectioning. Recently, due to these properties, it has been widely used for measurement in the biological field, medical science, materials science and the semiconductor industry. In general, tight tolerances are required to maintain the performance of a system, but a high cost of manufacturing and assembly is required to preserve tight tolerances. The purpose of allocating optimal tolerances is to minimize the cost while keeping the performance of the system. In the optimization problem, we set the performance requirements as constraints and maximized the tolerances. The Monte Carlo method, a statistical simulation method, is used in the tolerance analysis. Alignment tolerances of the optical components of the confocal scanning microscope are optimized to minimize the cost and to maintain the observation performance of the microscope. This method can also be applied to other precision machine systems.
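The Monte Carlo tolerance-analysis step can be sketched with a toy model: sample each component's misalignment within its tolerance band, evaluate a performance-loss function, and report the fraction of virtual assemblies that meet the spec. The quadratic loss model and all numbers are invented for illustration; they are not the CSM optics model.

```python
import random

def assembly_yield(tolerances, n_trials=20000, spec=1.0):
    """Fraction of Monte Carlo assemblies whose performance loss meets spec."""
    ok = 0
    for _ in range(n_trials):
        # Each alignment error is drawn uniformly within its tolerance band.
        errors = [random.uniform(-t, t) for t in tolerances]
        performance_loss = sum(3.0 * e**2 for e in errors)  # toy loss model
        if performance_loss <= spec:
            ok += 1
    return ok / n_trials

tight_yield = assembly_yield([0.1, 0.1, 0.1])   # tight (expensive) tolerances
loose_yield = assembly_yield([0.6, 0.6, 0.6])   # loose (cheap) tolerances
# An optimizer would maximize the tolerances subject to yield >= target,
# which is the cost-minimizing allocation the abstract describes.
```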
A First-Passage Kinetic Monte Carlo method for reaction–drift–diffusion processes
Mauro, Ava J.; Sigurdsson, Jon Karl; Shrake, Justin; Atzberger, Paul J.; Isaacson, Samuel A.
2014-02-15
Stochastic reaction–diffusion models are now a popular tool for studying physical systems in which both the explicit diffusion of molecules and noise in the chemical reaction process play important roles. The Smoluchowski diffusion-limited reaction model (SDLR) is one of several that have been used to study biological systems. Exact realizations of the underlying stochastic processes described by the SDLR model can be generated by the recently proposed First-Passage Kinetic Monte Carlo (FPKMC) method. This exactness relies on sampling analytical solutions to one and two-body diffusion equations in simplified protective domains. In this work we extend the FPKMC to allow for drift arising from fixed, background potentials. As the corresponding Fokker–Planck equations that describe the motion of each molecule can no longer be solved analytically, we develop a hybrid method that discretizes the protective domains. The discretization is chosen so that the drift–diffusion of each molecule within its protective domain is approximated by a continuous-time random walk on a lattice. New lattices are defined dynamically as the protective domains are updated, hence we will refer to our method as Dynamic Lattice FPKMC or DL-FPKMC. We focus primarily on the one-dimensional case in this manuscript, and demonstrate the numerical convergence and accuracy of our method in this case for both smooth and discontinuous potentials. We also present applications of our method, which illustrate the impact of drift on reaction kinetics.
Importance Sampling and Adjoint Hybrid Methods in Monte Carlo Transport with Reflecting Boundaries
Guillaume Bal; Ian Langmore
2011-04-13
Adjoint methods form a class of importance sampling methods that are used to accelerate Monte Carlo (MC) simulations of transport equations. Ideally, adjoint methods allow for zero-variance MC estimators provided that the solution to an adjoint transport equation is known. Hybrid methods aim at (i) approximately solving the adjoint transport equation with a deterministic method; and (ii) using the solution to construct an unbiased MC sampling algorithm with low variance. The problem with this approach is that both steps can be prohibitively expensive. In this paper, we simplify steps (i) and (ii) by calculating only parts of the adjoint solution. More specifically, in a geometry with limited volume scattering and complicated reflection at the boundary, we consider the situation where the adjoint solution "neglects" volume scattering, thereby significantly reducing the degrees of freedom in steps (i) and (ii). A main application for such a geometry is in remote sensing of the environment using physics-based signal models. Volume scattering is then incorporated using an analog sampling algorithm (or more precisely a simple modification of analog sampling called a heuristic sampling algorithm) in order to obtain unbiased estimators. In geometries with weak volume scattering (with a domain of interest of size comparable to the transport mean free path), we demonstrate numerically significant variance reductions and speed-ups (figures of merit).
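The variance-reduction mechanism the abstract builds on is standard importance sampling: sample from a biased density that resembles the (here, known) importance function, then reweight to keep the estimator unbiased. The rare-event toy problem below is generic and unrelated to the paper's transport geometry.

```python
import math
import random

def f(x):
    return 1.0 if x > 4.0 else 0.0            # rare-event indicator

def analog_estimate(n):
    """Plain MC estimate of E[f(X)] for X ~ Exp(1); most draws waste effort."""
    return sum(f(random.expovariate(1.0)) for _ in range(n)) / n

def importance_estimate(n, rate=0.25):
    """Sample from Exp(rate), a heavier-tailed density, and reweight."""
    total = 0.0
    for _ in range(n):
        x = random.expovariate(rate)          # biased sampling density q(x)
        weight = math.exp(-x) / (rate * math.exp(-rate * x))  # p(x) / q(x)
        total += f(x) * weight                # unbiased after reweighting
    return total / n
```

The true value is exp(-4) ≈ 0.0183; the reweighted estimator reaches it with far fewer samples because many more draws land where f ≠ 0, which is the same reason an adjoint-informed sampler can approach zero variance.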
A Monte Carlo Method for Modeling Thermal Damping: Beyond the Brownian-Motion Master Equation
Kurt Jacobs
2009-01-06
The "standard" Brownian motion master equation, used to describe thermal damping, is not completely positive and does not admit a Monte Carlo method, important in numerical simulations. To eliminate both these problems one must add a term that generates additional position diffusion. Here we show that one can obtain a completely positive, efficiently solvable model of simple quantum Brownian motion without any extra diffusion. This is achieved by using a stochastic Schrödinger equation (SSE), closely analogous to Langevin's equation, that has no equivalent Markovian master equation. Considering a specific example, we show that this SSE is sensitive to nonlinearities in situations in which the master equation is not, and may therefore be a better model of damping for nonlinear systems.
Study of spatial resolution in a single GEM simulated by Monte-Carlo method
Yang Lan-Lan; Tu Yan; Ma Shan-Le; Zhang Pan-Pan
2013-06-03
Spatial resolution is a significant factor in GEM performance for X-ray radiography and UV and visible-light imaging. A Monte Carlo method is used to investigate the spatial resolution determined by the transverse diffusion in the device. The simulation results indicate that the electrical parameters, such as the GEM voltages and the electric fields in the drift and induction regions, have only minor effects on the spatial resolution. The geometrical parameters and the choice of working gas, on the other hand, are the main parameters that determine the spatial resolution. The spatial resolution is determined more by the drift and diffusion processes than by the avalanche process. In particular, for different working gases, the square root of the ratio of the electron diffusion coefficient to the mobility has a significant effect on the spatial resolution.
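The quantity the abstract singles out, the square root of the ratio of diffusion coefficient to mobility, appears naturally in a toy transverse-diffusion estimate: for a drift length L in field E, the transverse spread is sqrt(2DL/(μE)) = sqrt(2L/E)·sqrt(D/μ). The Gaussian smearing model and all numbers below are illustrative, not GEM simulation parameters.

```python
import math
import random

def transverse_sigma(diffusion_D, mobility_mu, field_E, drift_L):
    """Transverse spread of a drifting electron cloud (toy 1-D model)."""
    drift_time = drift_L / (mobility_mu * field_E)   # t = L / (mu * E)
    return math.sqrt(2.0 * diffusion_D * drift_time) # sigma = sqrt(2 D t)

def smeared_positions(n, sigma):
    """Monte Carlo arrival points of electrons that start on the axis."""
    return [random.gauss(0.0, sigma) for _ in range(n)]
```

Quadrupling D doubles the spread while changing the gas-independent prefactor not at all, which is why the gas choice (through D/μ) dominates over the electrical settings in this picture.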
Investigation of a V₁₅ magnetic molecular nanocluster by the Monte Carlo method
Khizriev, K. Sh.; Dzhamalutdinova, I. S.; Taaev, T. A.
2013-06-15
Exchange interactions in a V₁₅ magnetic molecular nanocluster are considered, and the process of magnetization reversal for various values of the set of exchange constants is analyzed by the Monte Carlo method. It is shown that the best agreement between the field dependence of susceptibility and experimental results is observed for the following set of exchange interaction constants in a V₁₅ magnetic molecular nanocluster: J = 500 K, J′ = 150 K, J″ = 225 K, J₁ = 50 K, and J₂ = 50 K. It is observed for the first time that, in a strong magnetic field, for each of the three transitions from low-spin to high-spin states, the heat capacity exhibits two closely spaced maxima.
An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks
NASA Technical Reports Server (NTRS)
Kim, Stacy
2011-01-01
Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observation and for understanding the conditions for planet formation and migration. However, certain areas of the disk are under-sampled, such as the optically thick disk interior, or are of particular interest, such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet. To improve the sampling, photon packets can be preferentially scattered and re-emitted toward the preferred locations at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracies and calculation speeds.
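The flux-conserving weighting idea can be illustrated with a toy biased-emission scheme: packets are sent toward a region of interest more often than physics dictates, and each packet's weight is the ratio of the true to the biased probability. The two-direction split and the 0.8 bias are invented for illustration.

```python
import random

def emit_packets(n, p_toward=0.8):
    """Isotropic truth: half the energy heads toward the region of interest.
    We emit toward it with probability p_toward and reweight to compensate."""
    packets = []
    for _ in range(n):
        toward = random.random() < p_toward              # biased direction
        weight = 0.5 / p_toward if toward else 0.5 / (1.0 - p_toward)
        packets.append((toward, weight))
    return packets

packets = emit_packets(100000)
# After reweighting, the energy fraction heading toward the region of
# interest recovers the physical value of 0.5, but with 80% of the packets
# (hence much better sampling) in that region.
frac_toward = sum(w for t, w in packets if t) / len(packets)
```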
An analysis of the convergence of the direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Galitzine, Cyril; Boyd, Iain D.
2015-05-01
In this article, a rigorous framework for the analysis of the convergence of the direct simulation Monte Carlo (DSMC) method is presented. It is applied to the simulation of two test cases: an axisymmetric jet at a Knudsen number of 0.01 and a Mach number of 1, and a two-dimensional cylinder flow at a Knudsen number of 0.05 and a Mach number of 10. The rate of convergence of sampled quantities is found to be well predicted by an extended form of the Central Limit Theorem that takes into account the correlation of samples but requires the calculation of correlation spectra. A simplified analytical model that does not require correlation spectra is then constructed to model the effect of sample correlation. It is then used to obtain an a priori estimate of the convergence error.
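The effect of sample correlation on the Central Limit Theorem can be reproduced with a toy AR(1) series standing in for a DSMC sampling cell: the integrated autocorrelation time τ inflates the naive error bar by sqrt(τ), equivalently shrinking the effective sample size to n/τ. The AR(1) stand-in and its coefficient are illustrative, not from the paper.

```python
import random

def ar1_series(n, phi=0.8):
    """Correlated samples: x[i] = phi * x[i-1] + noise."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, 1.0)
        out.append(x)
    return out

def integrated_autocorr_time(xs, max_lag=40):
    """tau = 1 + 2 * sum of autocorrelations; tau = 1 for independent data."""
    n = len(xs)
    mean = sum(xs) / n
    c0 = sum((v - mean) ** 2 for v in xs) / n
    tau = 1.0
    for k in range(1, max_lag):
        ck = sum((xs[i] - mean) * (xs[i + k] - mean)
                 for i in range(n - k)) / (n - k)
        tau += 2.0 * ck / c0
    return tau

xs = ar1_series(60000)
tau = integrated_autocorr_time(xs)
# For AR(1), tau ≈ (1 + phi) / (1 - phi) = 9: the CLT error bar must be
# inflated by sqrt(tau) ≈ 3 relative to the independent-sample formula.
```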
Markov Chain Monte Carlo Sampling Methods for 1D Seismic and EM Data Inversion
Energy Science and Technology Software Center (ESTSC)
2008-09-22
This software provides several Markov chain Monte Carlo sampling methods for the Bayesian model developed for inverting 1D marine seismic and controlled source electromagnetic (CSEM) data. The current software can be used for individual inversion of seismic AVO and CSEM data and for joint inversion of both seismic and EM data sets. The structure of the software is very general and flexible, and it allows users to incorporate their own forward simulation codes and rock physics model codes easily into this software. Although the software was developed using the C and C++ computer languages, the user-supplied codes can be written in C, C++, or various versions of Fortran. The software provides clear interfaces for users to plug in their own codes. The output of this software is in a format that the free R software CODA can directly read to build MCMC objects.
A Monte Carlo Method for Projecting Uncertainty in 2D Lagrangian Trajectories
NASA Astrophysics Data System (ADS)
Robel, A.; Lozier, S.; Gary, S. F.
2009-12-01
In this study, a novel method is proposed for modeling the propagation of uncertainty due to subgrid-scale processes through a Lagrangian trajectory advected by ocean surface velocities. The primary motivation and application is differentiating between active and passive trajectories for sea turtles as observed through satellite telemetry. A spatiotemporal launch box is centered on the time and place of actual launch and populated with launch points. Synthetic drifters are launched at each of these locations, adding, at each time step along the trajectory, Monte Carlo perturbations in velocity scaled to the natural variability of the velocity field. The resulting trajectory cloud provides a dynamically evolving density field of synthetic drifter locations that represent the projection of subgrid-scale uncertainty out in time. Subsequently, by relaunching synthetic drifters at points along the trajectory, plots are generated in a daisy chain configuration of the “most likely passive pathways” for the drifter.
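The launch-box-plus-perturbation scheme can be sketched as follows: synthetic drifters start at random points in a launch box and are stepped forward with the mean current plus a Monte Carlo velocity perturbation scaled to the field's variability. The velocities, box size, and variability scale are placeholders, not values from the study.

```python
import random

def advect_cloud(n_drifters=500, n_steps=100, dt=1.0,
                 mean_u=0.1, mean_v=0.0, sigma=0.05, box=0.5):
    """Evolve a cloud of synthetic drifters with perturbed velocities."""
    # Populate the spatiotemporal launch box with launch points.
    cloud = [(random.uniform(-box, box), random.uniform(-box, box))
             for _ in range(n_drifters)]
    for _ in range(n_steps):
        # Monte Carlo perturbation added to the mean velocity at each step.
        cloud = [(x + (mean_u + random.gauss(0.0, sigma)) * dt,
                  y + (mean_v + random.gauss(0.0, sigma)) * dt)
                 for x, y in cloud]
    return cloud

cloud = advect_cloud()
# The cloud's density field represents the projection of subgrid-scale
# uncertainty out in time; its centroid follows the mean current.
mean_x = sum(x for x, _ in cloud) / len(cloud)
```

Relaunching a fresh cloud at points along an observed trajectory, as the abstract describes, just means calling `advect_cloud` with the launch box recentered at each relaunch point.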
The many-body Wigner Monte Carlo method for time-dependent ab-initio quantum simulations
Sellier, J. M.; Dimov, I.
2014-09-15
The aim of ab-initio approaches is the simulation of many-body quantum systems from the first principles of quantum mechanics. These methods are traditionally based on the many-body Schrödinger equation, which represents an incredible mathematical challenge. In this paper, we introduce the many-body Wigner Monte Carlo method in the context of distinguishable particles and in the absence of spin-dependent effects. Despite these restrictions, the method has several advantages. First of all, the Wigner formalism is intuitive, as it is based on the concept of a quasi-distribution function. Secondly, the Monte Carlo numerical approach allows scalability on parallel machines that is practically unachievable by means of other techniques based on finite difference or finite element methods. Finally, this method allows time-dependent ab-initio simulations of strongly correlated quantum systems. In order to validate our many-body Wigner Monte Carlo method, as a case study we simulate a relatively simple system consisting of two particles in several different situations. We first consider two non-interacting free Gaussian wave packets. We then proceed with the inclusion of an external potential barrier, and we conclude by simulating two entangled (i.e. correlated) particles. The results show how, in the case of negligible spin-dependent effects, the many-body Wigner Monte Carlo method provides an efficient and reliable tool to study the time-dependent evolution of quantum systems composed of distinguishable particles.
Thermal studies of a superconducting current limiter using Monte-Carlo method
NASA Astrophysics Data System (ADS)
Lévêque, J.; Rezzoug, A.
1999-07-01
Considering the increase of fault current levels in electrical networks, current limiters have become very attractive. Superconducting limiters are based on the quasi-instantaneous intrinsic transition from the superconducting state to the normal resistive one. Without fault detection or an external trigger, they reduce the constraints supported by electrical installations upstream of the fault. To avoid the destruction of the superconducting coil, the temperature must not exceed a certain value. Therefore the design of a superconducting coil requires the simultaneous resolution of an electrical equation and a thermal one. This paper deals with the resolution of this coupled problem by the Monte Carlo method. This method allows us to calculate the evolution of the resistance of the coil as well as the limitation current. Experimental results are compared with theoretical ones.
Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods
NASA Astrophysics Data System (ADS)
Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.
2011-12-01
Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential to mitigate the curse of dimensionality by reducing the total number of unknowns while describing complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA methods with geometric sampling (referred to as 'Method 1'), (2) PCA methods with MCMC sampling (referred to as 'Method 2'), and (3) PCA methods with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with noise added and inverted them under two different situations: (1) the noisy data and the covariance matrix for PCA analysis are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values and Method 1 is computationally more efficient.
In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters, and both Methods 1 and 2 provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for PCA analysis is inconsistent with the true models, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.
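A minimal sketch of the "PCA plus MCMC" combination: reduce a correlated 2-D parameter field to its leading principal component, then sample the reduced coefficient with Metropolis MCMC. The hand-built 2x2 prior covariance and linear toy forward model stand in for the paper's convolution model; all numbers are illustrative.

```python
import math
import random

# Leading eigenvector of the prior covariance [[2, 1], [1, 2]]:
# eigenvalues are 3 and 1, so the leading direction is (1, 1)/sqrt(2).
pc = (1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0))

def forward(a):
    """Toy linear forward model on parameters reconstructed from one PCA
    coefficient a (the model reduction step)."""
    m = (a * pc[0], a * pc[1])
    return m[0] + 2.0 * m[1]

data, noise_sd = 3.0, 0.5

def log_post(a):
    resid = data - forward(a)
    return -0.5 * (resid / noise_sd) ** 2 - 0.5 * (a / 3.0) ** 2  # + prior

def metropolis(n, step=0.3):
    """Metropolis sampling of the single reduced coefficient."""
    a, chain = 0.0, []
    for _ in range(n):
        prop = a + random.gauss(0.0, step)
        if math.log(random.random() + 1e-300) < log_post(prop) - log_post(a):
            a = prop
        chain.append(a)
    return chain

chain = metropolis(20000)
posterior_mean = sum(chain[5000:]) / len(chain[5000:])  # discard burn-in
```

For this linear-Gaussian toy the posterior is available in closed form (mean ≈ 1.41), so the chain's estimate and spread can be checked directly, mirroring the paper's comparison against analytical solutions.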
Adatom Density Kinetic Monte Carlo (AD-KMC): A new method for fast growth simulation
NASA Astrophysics Data System (ADS)
Mandreoli, Lorenzo; Neugebauer, Joerg
2002-03-01
The main approach for performing growth simulations at an atomistic level is kinetic Monte Carlo (KMC). A problem with this method is that the CPU time increases exponentially with the growth temperature, making simulations exceedingly expensive. An analysis of typical KMC runs showed two characteristic time scales: t_ad, the characteristic time for an adatom jump, and t_surf, the characteristic time before the surface morphology changes. We have developed a new method, called adatom density KMC (AD-KMC), which eliminates the fast time scale t_ad. This is achieved by directly calculating the adatom distribution rather than explicitly following the trajectory of each adatom as in KMC. Statistical checks were done on AD-KMC to test the method. The density of islands and the island-size distribution as functions of temperature and flux showed excellent agreement with KMC results and nucleation theory. Finally, we apply the method to study complex systems such as self-organization in V-grooves and lateral overgrowth.
Low-Density Nozzle Flow by the Direct Simulation Monte Carlo and Continuum Methods
NASA Technical Reports Server (NTRS)
Chung, Chang-Hong; Kim, Sku C.; Stubbs, Robert M.; Dewitt, Kenneth J.
1994-01-01
Two different approaches, the direct simulation Monte Carlo (DSMC) method based on molecular gasdynamics, and a finite-volume approximation of the Navier-Stokes equations, which are based on continuum gasdynamics, are employed in the analysis of a low-density gas flow in a small converging-diverging nozzle. The fluid experiences various kinds of flow regimes including continuum, slip, transition, and free-molecular. Results from the two numerical methods are compared with Rothe's experimental data, in which density and rotational temperature variations along the centerline and at various locations inside a low-density nozzle were measured by the electron-beam fluorescence technique. The continuum approach showed good agreement with the experimental data as far as density is concerned. The results from the DSMC method showed good agreement with the experimental data, both in the density and the rotational temperature. It is also shown that the simulation parameters, such as the gas/surface interaction model, the energy exchange model between rotational and translational modes, and the viscosity-temperature exponent, have substantial effects on the results of the DSMC method.
Molecular Monte Carlo simulation method of systems connected to three reservoirs
Yuki Norizoe; Toshihiro Kawakatsu
2011-10-20
In conventional molecular simulation, metastable structures often survive over considerable computational time, resulting in difficulties in simulating equilibrium states. In order to overcome this difficulty, here we propose a new method: molecular Monte Carlo simulation of systems connected to three reservoirs, of chemical potential, pressure, and temperature. The Gibbs-Duhem equation thermodynamically limits the number of reservoirs to 2 for single-component systems. However, in conventional simulations utilizing 2 or fewer reservoirs, the system tends to be trapped in metastable states. Even if the system is allowed to escape from such metastable states in conventional simulations, the fixed system size and/or the fixed number of particles result in the creation of defects in ordered structures. This situation breaks the global anisotropy of ordered structures and forces the periodicity of the structure to be commensurate with the system size. Here we connect these three reservoirs to overcome such difficulties. A method of adjusting the three reservoirs and obtaining thermodynamically stable states is also designed, based on the Gibbs-Duhem equation. Unlike other conventional simulation techniques utilizing no more than 2 reservoirs, our method allows the system itself to simultaneously tune the system size and the number of particles to the periodicity and anisotropy of ordered structures. Our method requires fewer efforts for preliminary simulations prior to production runs, compared with other advanced simulation techniques such as the multicanonical method. A free energy measurement method, suitable for the system with the three reservoirs, is also discussed, based on the Euler equation of thermodynamics. This measurement method needs fewer computational efforts than other free energy measurement methods do.
Parallel domain decomposition methods in fluid models with Monte Carlo transport
Alme, H.J.; Rodrigues, G.H.; Zimmerman, G.B.
1996-12-01
For a coupled Monte Carlo-finite element calculation with domain decomposition, it is important to use a domain decomposition that is suitable for the individual models. We have developed a code that simulates a Monte Carlo calculation on a massively parallel processor. This code is used to examine the load-balancing behavior of three domain decompositions for a Monte Carlo calculation. Results are presented.
NASA Astrophysics Data System (ADS)
Ellis-Monaghan, John Joseph
1995-01-01
In this thesis hot-electron injection and interface-state generation in silicon MOSFETs are investigated theoretically. To accomplish this study, we developed an ensemble Monte Carlo simulator suitable for examining the high-energy tail of the electron energy distribution. The model includes all relevant details for carrier transport, such as a realistic silicon band structure (two-band pseudopotential), interactions with phonons, electrons (both local and non-local), ionized impurities, an SOR Poisson solver, and statistical enhancement calculations. This work accurately predicts the quantity and lateral distribution of hot-electron-transport-induced interface states in a silicon MOSFET using a coupled Monte Carlo/interface-state generation model. The calculations explore the sensitivity of the electron energy distribution to impact ionization coefficients, self-consistent electron-electron calculations, and surface scattering. The modeled interface-state distribution agrees with charge pumping measurements and predicts that the interface-state generation extends spatially beyond the range where charge pumping measurements have been published. The study continues by applying these techniques to 0.33-, 0.20-, and 0.12-µm channel-length devices scaled by constant-field and more generalized methods. Applied bias and electric field dependency were investigated. Hot-electron injection and interface-state density profiles were simulated at biases as low as 1.44 V with channel lengths as low as 0.12 µm. These simulations demonstrate reasons that "lucky electron" or electron temperature models are no longer accurate for predicting hot-electron effects in such regimes. Electron-electron scattering is shown to be a critical consideration for simulation of hot-electron injection at low drain-to-source bias voltages. As expected, simulations indicate the lateral electric field may be increased with each scaling generation for an equivalent hot-electron injection. 
It is also shown that conventional hot-electron stressing using accelerated bias stressing continues to be valuable for drain to source biases as low as 1.44 V.
Louis Leon Thurstone in Monte Carlo: creating error bars for the method of paired comparison
NASA Astrophysics Data System (ADS)
Montag, Ethan D.
2003-12-01
The method of paired comparison is often used in experiments where perceptual scale values for a collection of stimuli are desired, such as in experiments analyzing image quality. Thurstone's Case V of his Law of Comparative Judgment is often used as the basis for analyzing data produced in paired comparison experiments. However, methods for determining confidence intervals and critical distances for significant differences based on Thurstone's Law have been elusive, leading some to abandon the simple analysis provided by Thurstone's formulation. In order to provide insight into this problem of determining error, Monte Carlo simulations of paired comparison experiments were performed based on the assumptions of uniformly normal, independent, and uncorrelated responses from stimulus pair presentations. The results from these multiple simulations show that the variation in the distribution of experimental results of paired comparison experiments can be well predicted as a function of stimulus number and the number of observations. Using these results, confidence intervals and critical values for comparisons can be made using traditional statistical methods. In addition, the results from simulations can be used to analyze goodness-of-fit techniques.
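As a rough illustration of the simulation strategy described above (the scale values, observer counts, and repetition counts are invented for the sketch; the analysis is the standard Case V z-score averaging, not necessarily the exact pipeline of the study), one can repeat a simulated paired-comparison experiment many times and read error bars off the spread of the recovered scale values:

```python
import numpy as np
from statistics import NormalDist

def simulate_case_v(true_scale, n_obs, rng):
    """One simulated paired-comparison experiment analyzed under Case V."""
    n = len(true_scale)
    p = np.zeros((n, n))  # p[i, j]: fraction of trials in which i beats j
    for i in range(n):
        for j in range(n):
            if i != j:
                # unit-variance, uncorrelated normal discriminal processes
                d = rng.normal(true_scale[i], 1.0, n_obs) - rng.normal(true_scale[j], 1.0, n_obs)
                p[i, j] = np.clip((d > 0).mean(), 0.5 / n_obs, 1 - 0.5 / n_obs)
    np.fill_diagonal(p, 0.5)                    # self-comparisons give z = 0
    z = np.vectorize(NormalDist().inv_cdf)(p)   # proportions -> z-scores
    return z.mean(axis=1)                       # Case V scale values (arbitrary origin/unit)

rng = np.random.default_rng(0)
true_scale = np.array([0.0, 0.5, 1.0, 1.5])     # hypothetical stimulus scale
# Repeat the whole experiment: the spread of the estimates gives empirical
# error bars for scale values from a single real experiment of this size.
estimates = np.array([simulate_case_v(true_scale, 50, rng) for _ in range(200)])
ci_halfwidth = 1.96 * estimates.std(axis=0)     # per-stimulus 95% half-width
```

Rerunning with different stimulus counts and observation counts maps out how the confidence intervals depend on experiment size, which is the dependence the abstract refers to.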
NASA Technical Reports Server (NTRS)
Hueser, J. E.; Brock, F. J.; Melfi, L. T., Jr.; Bird, G. A.
1984-01-01
A new solution procedure has been developed to analyze the flowfield properties in the vicinity of the Inertial Upper Stage/Spacecraft during the first-stage (SRM1) burn. Continuum methods are used to compute the nozzle flow and the exhaust plume flowfield as far as the boundary where the breakdown of translational equilibrium leaves these methods invalid. The Direct Simulation Monte Carlo (DSMC) method is applied everywhere beyond this breakdown boundary. The flowfield distributions of density, velocity, temperature, relative abundance, surface flux density, and pressure are discussed for each species for two sets of boundary conditions: vacuum and freestream. The interaction of the exhaust plume and the freestream with the spacecraft and the two-stream direct interaction are discussed. The results show that the low-density, high-velocity, counter-flowing freestream substantially modifies the flowfield properties and the flux density incident on the spacecraft. A freestream bow shock is observed in the data, located forward of the high-density region of the exhaust plume, into which the freestream gas does not penetrate. The total flux density incident on the spacecraft, integrated over the SRM1 burn interval, is estimated to be of the order of 10^22 per square meter (about 1000 atomic layers).
Calculation of photon pulse height distribution using deterministic and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Akhavan, Azadeh; Vosoughi, Naser
2015-12-01
Radiation transport techniques used in radiation detection systems fall into one of two categories, probabilistic and deterministic. While probabilistic methods are typically used in pulse-height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solving the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: a collided-components-of-the-scalar-flux algorithm, applied by iterating on the scattering source, and the ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, the multi-group gamma cross-section library required for this numerical transport simulation is generated in an appropriate discrete form. Finally, photon pulse-height distributions are indirectly calculated by deterministic methods and compare favorably with those from the Monte Carlo based codes MCNPX and FLUKA.
Calculation of the Entropy of random coil polymers with the hypothetical scanning Monte Carlo Method
White, Ronald P.; Meirovitch, Hagai
2006-01-01
Hypothetical scanning Monte Carlo (HSMC) is a method for calculating the absolute entropy, S, and free energy, F, from a given MC trajectory; it was developed recently and applied to liquid argon, TIP3P water and peptides. In this paper HSMC is extended to random coil polymers by applying it to self-avoiding walks on a square lattice – a simple but difficult model due to strong excluded volume interactions. With HSMC the probability of a given chain is obtained as a product of transition probabilities calculated for each bond by MC simulations and a counting formula. This probability is exact in the sense that it is based on all the interactions of the system and the only approximation is due to finite sampling. The method provides rigorous upper and lower bounds for F, which can be obtained from a very small sample and even from a single chain conformation. HSMC is independent of existing techniques and thus constitutes an independent research tool. The HSMC results are compared to those obtained by other methods, and its application to complex lattice chain models is discussed; we emphasize its ability to treat any type of boundary conditions for which a reference state (with known free energy) might be difficult to define for a thermodynamic integration process. Finally, we stress that the capability of HSMC to extract the absolute entropy from a given sample is important for studying relaxation processes, such as protein folding. PMID:16356071
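HSMC itself builds each bond's transition probability from auxiliary MC simulations, which is too involved for a short sketch. A minimal illustration of the same "chain probability as a product of per-bond factors" idea is the classical Rosenbluth construction for square-lattice self-avoiding walks (a different and much simpler method than HSMC): the mean Rosenbluth weight is an unbiased estimator of the number of walks c_N, from which the entropy follows as S = k ln c_N.

```python
import random

MOVES = ((1, 0), (-1, 0), (0, 1), (0, -1))

def rosenbluth_weight(n_steps, rng):
    """Grow one self-avoiding walk step by step; the weight is the product of
    the number of free neighbors at each step (0 if the walk gets trapped)."""
    pos, visited, weight = (0, 0), {(0, 0)}, 1.0
    for _ in range(n_steps):
        free = [(pos[0] + dx, pos[1] + dy) for dx, dy in MOVES
                if (pos[0] + dx, pos[1] + dy) not in visited]
        if not free:
            return 0.0
        weight *= len(free)
        pos = rng.choice(free)
        visited.add(pos)
    return weight

def count_saw_exact(n_steps):
    """Exact number of n-step self-avoiding walks by depth-first enumeration."""
    def rec(pos, visited, left):
        if left == 0:
            return 1
        total = 0
        for dx, dy in MOVES:
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt not in visited:
                visited.add(nxt)
                total += rec(nxt, visited, left - 1)
                visited.remove(nxt)
        return total
    return rec((0, 0), {(0, 0)}, n_steps)

rng = random.Random(1)
n = 8
estimate = sum(rosenbluth_weight(n, rng) for _ in range(20000)) / 20000
exact = count_saw_exact(n)   # c_8 = 5916 on the square lattice
```

For short chains the sampled estimate can be checked against exact enumeration; for long chains, where enumeration is impossible, only sampling-based estimates of this kind remain feasible.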
Yamamoto, K.; Hashizume, K.; Wada, T.; Ohta, M.; Suda, T.; Nishimura, T.; Fujimoto, M. Y.; Kato, K.; Aikawa, M.
2006-07-12
We propose a Monte Carlo method to study the reaction paths in nucleosynthesis during stellar evolution. Determination of reaction paths is important for obtaining a physical picture of stellar evolution. The combination of network calculation and our method gives us a better understanding of this physical picture. We apply our method to the case of the helium shell flash model in an extremely metal-poor star.
Favorite, J.A.
1999-09-01
In previous work, exponential convergence of Monte Carlo solutions using the reduced source method with Legendre expansion has been achieved only in one-dimensional rod and slab geometries. In this paper, the method is applied to three-dimensional (right parallelepiped) problems, with resulting evidence suggesting success. As implemented in this paper, the method approximates an angular integral of the flux with a discrete-ordinates numerical quadrature. It is possible that this approximation introduces an inconsistency that must be addressed.
Chung, Kiwhan
1996-01-01
While the use of Monte Carlo method has been prevalent in nuclear engineering, it has yet to fully blossom in the study of solute transport in porous media. By using an etched-glass micromodel, an attempt is made to apply ...
Bendele, Travis Henry
2013-02-22
A honeycomb probe was designed to measure the optical properties of biological tissues using single Monte Carlo method. The ongoing project is intended to be a multi-wavelength, real time, and in-vivo technique to detect breast cancer. Preliminary...
DOI: 10.1002/minf.201200069 CORAL: Monte Carlo Method as a Tool for the Prediction of
Gini, Giuseppina
The representation of the molecular structure is an important component of the QSPR/QSAR analyses. CORAL software generated by the CORAL software. However, there are various approaches that could be applied
Da, B.; Sun, Y.; Ding, Z. J.; Mao, S. F.; Zhang, Z. M.; Jin, H.; Yoshikawa, H.; Tanuma, S.
2013-06-07
A reverse Monte Carlo (RMC) method is developed to obtain the energy loss function (ELF) and optical constants from a measured reflection electron energy-loss spectroscopy (REELS) spectrum by an iterative Monte Carlo (MC) simulation procedure. The method combines the simulated annealing method, i.e., a Markov chain Monte Carlo (MCMC) sampling of oscillator parameters, surface and bulk excitation weighting factors, and band gap energy, with a conventional MC simulation of electron interaction with solids, which acts as a single step of MCMC sampling in this RMC method. To examine the reliability of this method, we have verified that the output data of the dielectric function are essentially independent of the initial values of the trial parameters, which is a basic property of a MCMC method. The optical constants derived for SiO2 in the energy loss range of 8-90 eV are in good agreement with other available data, and relevant bulk ELFs are checked by oscillator strength-sum and perfect-screening-sum rules. Our results show that the dielectric function can be obtained by the RMC method even with a wide range of initial trial parameters. The RMC method is thus a general and effective method for determining the optical properties of solids from REELS measurements.
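A toy sketch of the spirit of this procedure (not the authors' code: the oscillator form, all parameter values and the annealing schedule are invented) recovers oscillator parameters from a synthetic "measured" loss spectrum with a simulated-annealing/Metropolis loop in which every step evaluates a forward model:

```python
import math, random

def lorentz_elf(E, A, E0, gamma):
    """Single Drude-Lorentz oscillator energy-loss function (illustrative
    form, not the parameterization used in the paper)."""
    return A * gamma * E / ((E0**2 - E**2)**2 + (gamma * E)**2)

# Synthetic "measured" spectrum generated from known parameters; the loop
# below recovers them by minimizing the misfit, echoing the RMC idea of
# wrapping a forward simulation inside an MCMC sampler.
true_params = (120.0, 22.0, 6.0)
energies = [2.0 + 0.5 * k for k in range(80)]
target = [lorentz_elf(E, *true_params) for E in energies]

def misfit(p):
    return sum((lorentz_elf(E, *p) - t) ** 2 for E, t in zip(energies, target))

rng = random.Random(2)
params = [80.0, 15.0, 3.0]        # deliberately wrong starting guess
cost = misfit(params)
n_steps = 20000
for step in range(n_steps):
    T = 1e-3 * (1 - step / n_steps) + 1e-12       # annealed temperature
    scale = 0.10 * (1 - step / n_steps) + 0.005   # annealed step size
    trial = [max(1e-6, v * (1 + rng.gauss(0.0, scale))) for v in params]
    c = misfit(trial)
    if c <= cost or rng.random() < math.exp((cost - c) / T):
        params, cost = trial, c
```

In the actual RMC method the forward model is a full Monte Carlo electron-transport simulation rather than a closed-form oscillator, but the accept/reject structure is the same.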
NASA Astrophysics Data System (ADS)
Zhang, Jun; Guo, Fan
2015-08-01
Tooth modification techniques are widely used in the gear industry to improve the meshing performance of gearings. However, few of the present studies on tooth modification consider the influence of inevitable random errors on gear modification effects. In order to investigate the effect of tooth modification amount variations on the dynamic behaviors of a helical planetary gear system, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. By using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process to tooth modification amount variations, a statistical tooth modification model is developed and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behaviors do not obey the normal distribution rule even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
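The Monte Carlo/response-surface step can be sketched as follows (the quadratic surface and every coefficient are invented for illustration, not taken from the paper): normally distributed modification errors are pushed through a fitted response surface, and the nonzero skewness of the output shows that the DTE fluctuation is no longer normally distributed.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical fitted response surface: DTE fluctuation as a quadratic
# function of two tooth-modification amounts x1, x2 (all coefficients are
# invented; a real study would fit them to dynamic-model runs).
def dte_fluctuation(x1, x2):
    return 2.0 + 0.8 * x1 - 0.5 * x2 + 0.6 * x1**2 + 0.3 * x1 * x2 + 0.4 * x2**2

# Manufacturing/installation errors shift the nominal amounts: normal inputs.
x1 = rng.normal(1.0, 0.2, 100_000)
x2 = rng.normal(1.5, 0.2, 100_000)
dte = dte_fluctuation(x1, x2)

# The quadratic terms make the output non-normal even for normal inputs;
# a clearly nonzero skewness is one symptom of this.
mean, std = dte.mean(), dte.std()
skew = float(np.mean(((dte - mean) / std) ** 3))
```

Because the surrogate surface is a cheap polynomial, the hundred thousand Monte Carlo evaluations above cost almost nothing compared with rerunning the dynamic gear model itself, which is the point of the response-surface step.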
Adjoint-based deviational Monte Carlo methods for phonon transport calculations
NASA Astrophysics Data System (ADS)
Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G.
2015-06-01
In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation for accelerating deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or by enabling formulations which allow more efficient use of computational resources, such as formulations which provide high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail.
Cu-Au Alloys Using Monte Carlo Simulations and the BFS Method for Alloys
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Good, Brian; Ferrante, John
1996-01-01
Semiempirical methods have shown considerable promise in aiding the calculation of many properties of materials. Materials used in engineering applications have defects that occur for various reasons, including processing. In this work we present the first application of the BFS method for alloys to describe some aspects of microstructure due to processing for the Cu-Au system (CuAu, CuAu3, and Cu3Au). We use finite-temperature Monte Carlo calculations in order to show the influence of 'heat treatment' on the low-temperature phase of the alloy. Although relatively simple, this system has enough features that it can serve as a first test of the reliability of the technique. The main questions to be answered in this work relate to the existence of low-temperature ordered structures at specific concentrations, for example, the ability to distinguish between rather similar phases for equiatomic alloys (CuAu I and CuAu II, the latter characterized by an antiphase boundary separating two identical phases).
Application of Quantum Monte Carlo Methods to Describe the Properties of Manganese Oxide Polymorphs
NASA Astrophysics Data System (ADS)
Schiller, Joshua; Ertekin, Elif
2015-03-01
First-principles descriptions of the properties of correlated materials such as transition metal oxides have been a long-standing challenge. Manganese oxide is one such example: according to both conventional and hybrid-functional density functional theory, the zinc blende polymorph is predicted to be lower in energy than the rock salt polymorph that occurs in nature. While the correct energy ordering can be obtained in density functional approaches by careful selection of modeling parameters, we present here an alternative approach based on quantum Monte Carlo methods, which are a suite of stochastic tools for solution of the many-body Schrödinger equation. Due to its direct treatment of electron correlation, the QMC method offers the possibility of parameter-free, high-accuracy, systematically improvable analysis. In manganese oxide, we find that the QMC methodology is able to accurately reproduce relative phase energies, lattice constants, and band gaps without the use of adjustable parameters. Additionally, statistical analysis of the many-body wave functions from QMC provides some diagnostic assessments to reveal the physics that may be missing from other modeling approaches.
Systematic hierarchical coarse-graining with the inverse Monte Carlo method
NASA Astrophysics Data System (ADS)
Lyubartsev, Alexander P.; Naômé, Aymeric; Vercauteren, Daniel P.; Laaksonen, Aatto
2015-12-01
We outline our coarse-graining strategy for linking micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained in detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can then be used in simulations at a less detailed level of description after scaling up the system size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730-3737 (1995)] to a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package, MagiC, has been developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (the bacterial LiaR regulator bound to a 26 base pair DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair potentials are used directly as look-up tables, but here we have fitted them to five Gaussians and a repulsive wall. Results show stable association between the DNA and the model protein as well as a similar position fluctuation profile.
MC-Net: a method for the construction of phylogenetic networks based on the Monte-Carlo method
2010-01-01
Background: A phylogenetic network is a generalization of phylogenetic trees that allows the representation of conflicting signals or alternative evolutionary histories in a single diagram. There are several methods for constructing these networks, some of which are based on distances among taxa. In practice, the distance-based methods are faster than the others. The Neighbor-Net (N-Net) is a distance-based method: it produces a circular ordering from a distance matrix, then constructs a collection of weighted splits using this circular ordering. SplitsTree, a program that uses these weighted splits, then draws the phylogenetic network. In general, finding an optimal circular ordering is an NP-hard problem; the N-Net is a heuristic algorithm for finding an optimal circular ordering, based on the neighbor-joining algorithm. Results: In this paper, we present a heuristic algorithm based on the Monte-Carlo method, called the MC-Net algorithm, for finding an optimal circular ordering. In order to show that MC-Net performs better than N-Net, we apply both algorithms to different data sets. We then draw the phylogenetic networks corresponding to the outputs of these algorithms using SplitsTree and compare the results. Conclusions: We find that the circular ordering produced by MC-Net is closer to the optimal circular ordering than that of N-Net. Furthermore, the networks corresponding to the outputs of MC-Net, drawn with SplitsTree, are simpler than those of N-Net. PMID:20727135
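A minimal sketch of a Monte-Carlo search for a circular ordering (a generic annealed-swap search, not the actual MC-Net algorithm): given a distance matrix, random swaps of taxa are proposed and accepted by a Metropolis rule so that the sum of adjacent distances around the circle decreases.

```python
import math, random

def tour_cost(order, dist):
    """Sum of distances between adjacent taxa in the circular ordering."""
    n = len(order)
    return sum(dist[order[i]][order[(i + 1) % n]] for i in range(n))

def mc_circular_ordering(dist, n_steps=20000, temp=0.5, seed=0):
    """Toy Monte Carlo (annealed Metropolis) search for a low-cost circular
    ordering; a stand-in for the MC-Net idea, not its published algorithm."""
    rng = random.Random(seed)
    n = len(dist)
    order = list(range(n))
    cost = tour_cost(order, dist)
    best, best_cost = order[:], cost
    for step in range(n_steps):
        i, j = rng.sample(range(n), 2)
        order[i], order[j] = order[j], order[i]      # propose a swap
        new_cost = tour_cost(order, dist)
        t = temp * (1 - step / n_steps) + 1e-9       # annealed temperature
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = order[:], cost
        else:
            order[i], order[j] = order[j], order[i]  # reject: undo the swap
    return best, best_cost
```

For a handful of taxa placed on a circle with shuffled labels, the search recovers the same optimal circular ordering that brute-force enumeration finds.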
Reduced Monte Carlo methods for the solution of stochastic groundwater flow problems
NASA Astrophysics Data System (ADS)
Pasetto, D.; Guadagnini, A.; Putti, M.
2012-04-01
Reduced order modeling is often employed to decrease the computational cost of numerical solutions of parametric Partial Differential Equations. Reduced basis, balanced truncation, and projection methods are among the most studied techniques to achieve model reduction. We study the applicability of snapshot-based Proper Orthogonal Decomposition (POD) to Monte Carlo (MC) simulations applied to the solution of the stochastic groundwater flow problem. POD model reduction is obtained by projecting the model equations onto a space generated by a small number of basis functions (principal components). These are obtained upon exploring the solution (probability) space with snapshots, i.e., system states obtained by solving the original process-based equations. The reduced model is then employed to complete the ensemble by adding multiple realizations. We apply this technique to a two-dimensional simulation of steady-state saturated groundwater flow, and explore the sensitivity of the method to the number of snapshots and associated principal components in terms of accuracy and efficiency of the overall MC procedure. In our preliminary results, we distinguish the problem of heterogeneous recharge, in which the stochastic term is confined to the forcing function (additive stochasticity), from the case of heterogeneous hydraulic conductivity, in which the stochastic term is multiplicative. In the first scenario, the linearity of the problem is fully exploited and the POD approach yields accurate and efficient realizations, leading to substantial speed up of the MC method. The second scenario poses a significant challenge, as the adoption of a few snapshots based on the full model does not provide enough variability in the reduced order replicates, thus leading to poor convergence of the MC method. We find that increasing the number of snapshots improves the convergence of MC but only for large integral scales of the log-conductivity field. 
The technique is then extended to take full advantage of the solution of moment differential equations of groundwater flow.
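The snapshot-POD idea above can be sketched in a few lines (a 1-D toy standing in for the groundwater head field; the modes, sizes, and the 99% energy threshold are illustrative choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "snapshots": 30 realizations built from 4 smooth spatial modes on a
# 1-D grid plus small noise (a stand-in for MC realizations of a head field).
x = np.linspace(0.0, 1.0, 200)
modes = np.stack([np.sin((k + 1) * np.pi * x) for k in range(4)])
snapshots = rng.normal(size=(30, 4)) @ modes + 0.01 * rng.normal(size=(30, 200))

# POD basis = leading right singular vectors of the mean-free snapshot matrix.
mean = snapshots.mean(axis=0)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99) + 1)   # modes capturing 99% of variance
basis = Vt[:r]                               # principal components in space

# A new realization is approximated by projection onto the reduced basis.
new = rng.normal(size=4) @ modes
approx = mean + (new - mean) @ basis.T @ basis
rel_err = float(np.linalg.norm(new - approx) / np.linalg.norm(new))
```

In the additive-stochasticity scenario new realizations stay close to the span of the snapshots, as in this toy; the multiplicative-conductivity case fails precisely because new realizations leave that span, so the projection error stays large no matter how the reduced solve is done.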
Quantifying uncertainties in pollutant mapping studies using the Monte Carlo method
NASA Astrophysics Data System (ADS)
Tan, Yi; Robinson, Allen L.; Presto, Albert A.
2014-12-01
Routine air monitoring provides accurate measurements of annual average concentrations of air pollutants, but the low density of monitoring sites limits its capability in capturing intra-urban variation. Pollutant mapping studies measure air pollutants at a large number of sites during short periods. However, their short duration can cause substantial uncertainty in reproducing annual mean concentrations. In order to quantify this uncertainty for existing sampling strategies and investigate methods to improve future studies, we conducted Monte Carlo experiments with nationwide monitoring data from the EPA Air Quality System. Typical fixed sampling designs have much larger uncertainties than previously assumed, and produce accurate estimates of annual average pollution concentrations approximately 80% of the time. Mobile sampling has difficulties in estimating long-term exposures for individual sites, but performs better for site groups. The accuracy and the precision of a given design decrease when data variation increases, indicating challenges at sites intermittently impacted by local sources such as traffic. Correcting measurements with reference sites does not completely remove the uncertainty associated with short-duration sampling. Using reference sites with the addition method can better account for temporal variations than the multiplication method. We propose feasible methods for future mapping studies to reduce uncertainties in estimating annual mean concentrations. Future fixed sampling studies should conduct two separate week-long sampling periods in all four seasons. Mobile sampling studies should estimate annual mean concentrations for exposure groups of five or more sites. Fixed and mobile sampling designs have comparable probabilities of correctly ordering two sites, so they may have similar capabilities in predicting pollutant spatial variations. 
Simulated sampling designs have large uncertainties in reproducing seasonal and diurnal variations at individual sites, but can predict these variations for exposure groups.
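The kind of Monte Carlo experiment described above can be sketched as follows (the synthetic "year" of daily concentrations, the 25% accuracy criterion, and the campaign designs are invented stand-ins for the EPA AQS data and the study's actual designs):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical year of daily concentrations: seasonal cycle plus skewed noise.
days = np.arange(365)
annual = 10 + 3 * np.cos(2 * np.pi * days / 365) + rng.gamma(2.0, 1.0, 365)
true_mean = annual.mean()

def short_campaign_estimate(series, n_weeks, rng):
    """Mean of n_weeks randomly placed 1-week sampling periods."""
    starts = rng.integers(0, len(series) - 7, n_weeks)
    return np.mean([series[s:s + 7].mean() for s in starts])

# Monte Carlo over hypothetical campaigns: how often does a short campaign
# land within 25% of the true annual mean?
est_1 = np.array([short_campaign_estimate(annual, 1, rng) for _ in range(5000)])
est_4 = np.array([short_campaign_estimate(annual, 4, rng) for _ in range(5000)])
hit_1 = float(np.mean(np.abs(est_1 - true_mean) / true_mean < 0.25))
hit_4 = float(np.mean(np.abs(est_4 - true_mean) / true_mean < 0.25))
```

Averaging more, better-spread weeks raises the hit rate; the study quantifies the same effect with real monitoring data and uses it to justify the seasonal sampling recommendations above.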
Constant-pH Hybrid Nonequilibrium Molecular Dynamics-Monte Carlo Simulation Method.
Chen, Yunjie; Roux, Benoît
2015-08-11
A computational method is developed to carry out explicit solvent simulations of complex molecular systems under conditions of constant pH. In constant-pH simulations, preidentified ionizable sites are allowed to spontaneously protonate and deprotonate as a function of time in response to the environment and the imposed pH. The method, based on a hybrid scheme originally proposed by H. A. Stern (J. Chem. Phys. 2007, 126, 164112), consists of carrying out short nonequilibrium molecular dynamics (neMD) switching trajectories to generate physically plausible configurations with changed protonation states that are subsequently accepted or rejected according to a Metropolis Monte Carlo (MC) criterion. To ensure microscopic detailed balance arising from such nonequilibrium switches, the atomic momenta are altered according to the symmetric two-ends momentum reversal prescription. To achieve higher efficiency, the original neMD-MC scheme is separated into two steps, reducing the need for generating a large number of unproductive and costly nonequilibrium trajectories. In the first step, the protonation state of a site is randomly attributed via a Metropolis MC process on the basis of an intrinsic pKa; an attempted nonequilibrium switch is generated only if this change in protonation state is accepted. This hybrid two-step inherent pKa neMD-MC simulation method is tested with single amino acids in solution (Asp, Glu, and His) and then applied to turkey ovomucoid third domain and hen egg-white lysozyme. Because of the simple linear increase in the computational cost relative to the number of titratable sites, the present method is naturally able to treat extremely large systems. PMID:26300709
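The first, inexpensive step of the two-step scheme (a Metropolis decision based on an intrinsic pKa) can be sketched for a single isolated site, where the stationary distribution must reduce to Henderson-Hasselbalch statistics; the neMD switching trajectories of the full method are not modeled in this toy.

```python
import math, random

def attempt_protonation_flip(protonated, pKa, pH, rng):
    """Step 1 of the two-step scheme: a Metropolis trial on the intrinsic pKa.
    For deprotonated -> protonated the free-energy change is
    dG/kT = ln(10) * (pH - pKa); the reverse move has the opposite sign."""
    dG = math.log(10.0) * (pH - pKa) * (-1.0 if protonated else 1.0)
    if dG <= 0.0 or rng.random() < math.exp(-dG):
        return not protonated   # accepted; the full method would now run a neMD switch
    return protonated           # rejected: keep the current protonation state

# Long Markov chain on one site: the protonated fraction must reproduce the
# Henderson-Hasselbalch curve for the intrinsic pKa.
rng = random.Random(3)
pKa, pH = 4.0, 4.5
state, count, n = False, 0, 200_000
for _ in range(n):
    state = attempt_protonation_flip(state, pKa, pH, rng)
    count += state
frac = count / n
hh = 1.0 / (1.0 + 10.0 ** (pH - pKa))   # Henderson-Hasselbalch, about 0.24
```

In the full method, acceptance of this cheap trial merely licenses the expensive neMD switch, which is what keeps the cost growth linear in the number of titratable sites.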
Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis
Heo, W.; Kim, W.; Kim, Y.; Yun, S.
2013-07-01
A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross-section data are generated using a collision estimator in MCNP5. A high-order Legendre scattering cross-section data generation module was added to the MCNP5 code. Cross-section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and were applied in the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic cross-section data were used. In this paper, a hybrid MCNP5/DIF3D calculation was used to analyze the core model. The cross-section data were generated using MCNP5. The k-eff and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of k-eff, the 9-group MCNP5/DIF3D calculation has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)
IR imaging simulation and analysis for aeroengine exhaust system based on reverse Monte Carlo method
NASA Astrophysics Data System (ADS)
Chen, Shiguo; Chen, Lihai; Mo, Dongla; Shi, Jingcheng
2014-11-01
The IR radiation characteristics of an aeroengine are an important basis for IR stealth design and anti-stealth detection of aircraft. With the development of IR imaging sensor technology, the importance of aircraft IR stealth increases. An effort is presented to explore target IR radiation imaging simulation based on the Reverse Monte Carlo Method (RMCM) combined with commercial CFD software. The flow and IR radiation characteristics of an aeroengine exhaust system are investigated by developing a full-size geometry model based on the actual parameters, using a flow-IR integrated structured mesh, obtaining the engine performance parameters as the inlet boundary conditions of the mixer section, and constructing a numerical simulation model of the IR radiation characteristics of the engine exhaust system based on RMCM. With the above models, the IR radiation characteristics of the aeroengine exhaust system are given, focusing on IR spectral radiance imaging in the typical detection band at azimuth 20°. The results show that: (1) at small azimuth angles, the IR radiation comes mainly from the center cone among all hot parts; near azimuth 15°, the mixer makes the biggest radiation contribution, while the center cone, turbine and flame stabilizer are equivalent; (2) the main radiation components and their spatial distribution differ across the spectrum, with CO2 absorbing and emitting strongly at 4.18, 4.33 and 4.45 microns, and H2O at 3.0 and 5.0 microns.
Assessment of the Contrast to Noise Ratio in PET Scanners with Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.
2015-09-01
The aim of the present study was to assess the contrast-to-noise ratio (CNR) of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction. The PET scanner simulated was the GE DiscoveryST. A plane source consisting of a TLC plate, a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution, was simulated. Image quality was assessed in terms of the CNR. The CNR was estimated from coronal reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE) OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed using various subsets (3, 15 and 21) and various iterations (2 to 20). CNR values were found to decrease as both iterations and subsets increase. Two (2) iterations were found to be optimal. The simulated PET evaluation method, based on the TLC plane source, can be useful in image quality assessment of PET scanners.
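The CNR figure of merit itself is simple to state in code (the synthetic images and ROI positions are invented for illustration; noise growing with iteration number mimics, very loosely, the trend reported above):

```python
import numpy as np

rng = np.random.default_rng(5)

def cnr(image, roi_signal, roi_background):
    """Contrast-to-noise ratio: |mean_signal - mean_background| / std_background."""
    s, b = image[roi_signal], image[roi_background]
    return abs(s.mean() - b.mean()) / b.std()

def synthetic_slice(noise_sd):
    """Uniform background with a hotter plane-source band plus Gaussian noise;
    the noise level stands in for iteration-dependent reconstruction noise."""
    img = np.full((64, 64), 100.0)
    img[28:36, :] += 40.0            # plane-source band
    return img + rng.normal(0.0, noise_sd, img.shape)

sig = (slice(28, 36), slice(8, 56))  # ROI inside the band
bkg = (slice(4, 12), slice(8, 56))   # background ROI
cnr_low_noise = cnr(synthetic_slice(5.0), sig, bkg)     # "few iterations"
cnr_high_noise = cnr(synthetic_slice(15.0), sig, bkg)   # "many iterations"
```

Because extra iterations amplify noise faster than they improve contrast recovery, the CNR falls with iteration count, which is the behavior behind the study's choice of two iterations as optimal.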
Development of a software package for solid-angle calculations using the Monte Carlo method
NASA Astrophysics Data System (ADS)
Zhang, Jie; Chen, Xiulian; Zhang, Changsheng; Li, Gang; Xu, Jiayun; Sun, Guangai
2014-02-01
Solid-angle calculations, which are often complicated, play an important role in the absolute calibration of radioactivity measurement systems and in the determination of the activity of radioactive sources. In the present paper, a software package is developed to provide a convenient tool for solid-angle calculations in nuclear physics. The proposed software calculates solid angles using the Monte Carlo method, in which a new type of variance reduction technique was integrated. The package, developed under the environment of Microsoft Foundation Classes (MFC) in Microsoft Visual C++, has a graphical user interface, in which the visualization function is integrated in conjunction with OpenGL. One advantage of the proposed software package is that it can calculate the solid angle subtended by a detector with different geometric shapes (e.g., cylinder, square prism, regular triangular prism or regular hexagonal prism) to a point, circular or cylindrical source without any difficulty. The results obtained from the proposed software package were compared with those obtained from previous studies and with values calculated using Geant4. The comparison shows that the proposed software package can produce accurate solid-angle values with a greater computation speed than Geant4.
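For the simplest geometry (a point source on the axis of a circular disk), a Monte Carlo solid-angle estimate can be checked against the closed-form answer; this sketch uses plain direction sampling rather than the package's variance reduction technique:

```python
import math, random

def solid_angle_mc(d, r, n_samples, rng):
    """Monte Carlo solid angle of a disk (radius r, on-axis distance d) seen
    from a point source: sample directions isotropically over the forward
    hemisphere (cos(theta) uniform on (0, 1)) and count disk hits.
    Azimuth is irrelevant by symmetry for this coaxial geometry."""
    hits = 0
    for _ in range(n_samples):
        u = rng.random()                      # u = cos(theta)
        if u > 0.0 and d * math.sqrt(1.0 - u * u) / u <= r:
            hits += 1                         # ray crosses plane z = d inside the disk
    return 2.0 * math.pi * hits / n_samples   # hemisphere subtends 2*pi

rng = random.Random(11)
d, r = 10.0, 3.0
omega_mc = solid_angle_mc(d, r, 200_000, rng)
omega_exact = 2.0 * math.pi * (1.0 - d / math.sqrt(d * d + r * r))  # closed form
```

The statistical error shrinks as the inverse square root of the sample count; variance reduction techniques like the one integrated into the package aim to beat that baseline for a given number of samples.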
Monte Carlo analysis of thermochromatography as a fast separation method for nuclear forensics
Hall, Howard L
2012-01-01
Nuclear forensic science has become increasingly important for global nuclear security, and enhancing the timeliness of forensic analysis has been established as an important objective in the field. New, faster techniques must be developed to meet this objective. Current approaches for the analysis of minor actinides, fission products, and fuel-specific materials require time-consuming chemical separation coupled with measurement through either nuclear counting or mass spectrometry. These very sensitive measurement techniques can be hindered by impurities or incomplete separation in even the most painstaking chemical separations. High-temperature gas-phase separation, or thermochromatography, has been used in the past for rapid separations in the study of newly created elements and as a basis for the chemical classification of those elements. This work examines the potential for rapid separation of gaseous species to be applied in nuclear forensic investigations. Monte Carlo modeling has been used to evaluate the potential utility of the thermochromatographic separation method, albeit this assessment is necessarily limited due to the lack of available experimental data for validation.
Velazquez, L; Castro-Palacio, J C
2015-03-01
Velazquez and Curilef [J. Stat. Mech. (2010); J. Stat. Mech. (2010)] have proposed a methodology to extend Monte Carlo algorithms based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically, the well-known multihistogram method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989)]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L×L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site q_L during the temperature-driven phase transition of this model, whose size dependence seems to follow a power law q_L(L) ∝ (1/L)^z with exponent z ≈ 0.26 ± 0.02. We discuss the compatibility of these results with the continuous character of the temperature-driven phase transition when L → +∞. PMID:25871247
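The multihistogram method builds on simple single-histogram reweighting: samples drawn at one inverse temperature beta0 can be reweighted by exp[-(beta - beta0)E] to estimate averages at a nearby beta. A toy sketch on a two-level system, illustrative only; the full Ferrenberg-Swendsen method combines histograms from several temperatures.

```python
import math
import random

def reweight_mean_energy(samples, beta0, beta):
    """Single-histogram reweighting: estimate <E> at inverse temperature
    beta from energy samples drawn in equilibrium at beta0."""
    w = [math.exp(-(beta - beta0) * e) for e in samples]
    return sum(wi * e for wi, e in zip(w, samples)) / sum(w)

# Toy two-level system with energies 0 and 1; exact <E> = 1/(1 + exp(beta)).
rng = random.Random(2)
beta0 = 1.0
p1 = math.exp(-beta0) / (1.0 + math.exp(-beta0))   # occupation of E = 1
samples = [1.0 if rng.random() < p1 else 0.0 for _ in range(100_000)]

beta = 1.5
est = reweight_mean_energy(samples, beta0, beta)
exact = 1.0 / (1.0 + math.exp(beta))
print(est, exact)
```

The reweighted estimate at beta = 1.5 matches the exact two-level average to within sampling error.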
Avrorin, E. N.; Tsvetokhin, A. G.; Xenofontov, A. I.; Kourbatova, E. I.; Regens, J. L.
2002-02-26
This paper presents the results of an ongoing research and development project conducted by Russian institutions in Moscow and Snezhinsk, supported by the International Science and Technology Center (ISTC), in collaboration with the University of Oklahoma. The joint study focuses on developing and applying analytical tools to effectively characterize contaminant transport and assess risks associated with migration of radionuclides and heavy metals in the water column and sediments of large reservoirs or lakes. The analysis centers on the development and evaluation of theoretical-computational models that describe the distribution of radioactive wastewater within a reservoir, characterize the associated radiation field, and estimate doses received from radiation exposure. In particular, Monte Carlo-based methods are applied to increase the precision of results and to reduce the computing time needed to estimate the characteristics of the radiation field emitted from the contaminated wastewater layer. The calculated migration of radionuclides is used to estimate distributions of radiation doses that could be received by an exposed population, based on exposure to radionuclides from specified volumes of discrete aqueous sources. The calculated dose distributions can be used to support near-term and long-term decisions about priorities for environmental remediation and stewardship.
NASA Astrophysics Data System (ADS)
Yamamoto, Alexandre Y.; Oliveira, Aurenice M.; Lima, Ivan T.
2014-05-01
The numerical accuracy of the results obtained using the multicanonical Monte Carlo (MMC) algorithm is strongly dependent on the choice of the step size, which is the range of the MMC perturbation from one sample to the next. The proper choice of the MMC step size leads to much faster statistical convergence of the algorithm for the calculation of rare events. One relevant application of this method is the calculation of the probability of the bins in the tail of the discretized probability density function of the differential group delay between the principal states of polarization due to polarization mode dispersion. We observed that the optimum MMC performance is strongly correlated with the inflection point of the actual transition rate from one bin to the next. We also observed that the optimum step size does not correspond to any specific value of the acceptance rate of the transitions in MMC. The results of this study can be applied to the improvement of the performance of MMC applied to the calculation of other rare events of interest in optical communications, such as the bit error ratio and pattern dependence in optical fiber systems with coherent receivers.
NASA Astrophysics Data System (ADS)
Agudelo-Giraldo, J. D.; Restrepo-Parra, E.; Restrepo, J.
2015-10-01
The Metropolis algorithm and the classical Heisenberg approximation were implemented in a Monte Carlo approach to compute the magnetization and resistivity of La2/3Ca1/3MnO3 as functions of Mn-ion vacancy concentration and increasing external magnetic field. This compound is ferromagnetic and exhibits the colossal magnetoresistance (CMR) effect. The monolayer was built with L×L×d dimensions, with L=30 umc (units of magnetic cells) in the x-y plane and a thickness of d=12 umc. The Hamiltonian used contains interactions between first neighbors, the magnetocrystalline anisotropy effect and the response to the external applied magnetic field. The system considered contains mixed-valence bonds: Mn3+eg'-O-Mn3+eg, Mn3+eg-O-Mn4+d3 and Mn3+eg'-O-Mn4+d3. The vacancies were placed randomly in the sample, replacing any type of Mn ion. The main result shows that without vacancies the transition temperatures TC (Curie) and TMI (metal-insulator) are similar, whereas as the vacancy percentage increases, TMI takes lower values than TC. This behavior is caused by the competition between the external magnetic field, the vacancy percentage and the magnetocrystalline anisotropy, which favors the magnetoresistive effect at temperatures below TMI. Resistivity loops were also observed, showing a direct correlation with the magnetization hysteresis loops at temperatures below TC.
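The sampling engine described above is standard Metropolis for classical Heisenberg spins. A minimal two-dimensional sketch under strong simplifying assumptions: nearest-neighbour exchange only, with the anisotropy and field terms, the mixed-valence bond structure and the vacancy disorder of the actual study all omitted.

```python
import math
import random

def metropolis_heisenberg(L=6, J=1.0, T=0.1, sweeps=200, seed=3):
    """Metropolis sampling of a classical Heisenberg ferromagnet on an
    L x L lattice with periodic boundaries: H = -J * sum_<ij> S_i . S_j."""
    rng = random.Random(seed)

    def rand_spin():
        # uniform direction on the unit sphere
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        return (r * math.cos(phi), r * math.sin(phi), z)

    spins = [[rand_spin() for _ in range(L)] for _ in range(L)]

    def local_field(i, j):
        h = [0.0, 0.0, 0.0]
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            s = spins[(i + di) % L][(j + dj) % L]
            for k in range(3):
                h[k] += s[k]
        return h

    def energy():
        e = 0.0
        for i in range(L):
            for j in range(L):
                # count each bond once (right and down neighbours)
                for di, dj in ((1, 0), (0, 1)):
                    s2 = spins[(i + di) % L][(j + dj) % L]
                    e -= J * sum(a * b for a, b in zip(spins[i][j], s2))
        return e

    e0 = energy()
    for _ in range(sweeps):
        for i in range(L):
            for j in range(L):
                new = rand_spin()
                h = local_field(i, j)
                dE = -J * sum((n - o) * hk
                              for n, o, hk in zip(new, spins[i][j], h))
                if dE <= 0.0 or rng.random() < math.exp(-dE / T):
                    spins[i][j] = new
    return e0, energy()

e_start, e_end = metropolis_heisenberg()
print(e_start, e_end)
```

At low temperature the chain relaxes from a random configuration toward the ordered state, so the final energy is far below the initial one.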
Kinetic Monte Carlo method for dislocation migration in the presence of solute
Deo, Chaitanya S.; Srolovitz, David J.; Cai Wei; Bulatov, Vasily V.
2005-01-01
We present a kinetic Monte Carlo method for simulating dislocation motion in alloys within the framework of the kink model. The model considers the glide of a dislocation in a static, three-dimensional solute atom atmosphere. It includes both a description of the short-range interaction between a dislocation core and the solute and long-range solute-dislocation interactions arising from the interplay of the solute misfit and the dislocation stress field. Double-kink nucleation rates are calculated using a first-passage-time analysis that accounts for the subcritical annihilation of embryonic double kinks as well as the presence of solutes. We explicitly consider the case of the motion of a <111>-oriented screw dislocation on a {011}-slip plane in body-centered-cubic Mo-based alloys. Simulations yield dislocation velocity as a function of stress, temperature, and solute concentration. The dislocation velocity results are shown to be consistent with existing experimental data and, in some cases, analytical models. Application of this model depends upon the validity of the kink model and the availability of fundamental properties (i.e., single-kink energy, Peierls stress, secondary Peierls barrier to kink migration, single-kink mobility, solute-kink interaction energies, solute misfit), which can be obtained from first-principles calculations and/or molecular-dynamics simulations.
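At the heart of a kinetic Monte Carlo scheme of this kind is the residence-time (BKL/Gillespie) step: select an event, for instance double-kink nucleation versus kink migration, with probability proportional to its rate, then advance the clock stochastically. A generic sketch; the two rates below are placeholders, not the paper's first-principles inputs.

```python
import math
import random

def kmc_step(rates, rng):
    """One residence-time kinetic Monte Carlo step: pick an event with
    probability proportional to its rate and draw the waiting time."""
    total = sum(rates)
    x = rng.random() * total
    acc = 0.0
    for k, r in enumerate(rates):
        acc += r
        if x < acc:
            break
    dt = -math.log(rng.random()) / total   # exponential waiting time
    return k, dt

rng = random.Random(4)
rates = [1.0, 3.0]          # placeholder: e.g. nucleation vs. migration
counts = [0, 0]
t = 0.0
for _ in range(50_000):
    k, dt = kmc_step(rates, rng)
    counts[k] += 1
    t += dt
print(counts, t)
```

Event frequencies come out in the 1:3 ratio of the rates, and the simulated time advances at 1/(total rate) per step on average.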
Simulation of aggregating particles in complex flows by the lattice kinetic Monte Carlo method
NASA Astrophysics Data System (ADS)
Flamm, Matthew H.; Sinno, Talid; Diamond, Scott L.
2011-01-01
We develop and validate an efficient lattice kinetic Monte Carlo (LKMC) method for simulating particle aggregation in laminar flows with spatially varying shear rate, such as parabolic flow or flows with standing vortices. A contact time model was developed to describe the particle-particle collision efficiency as a function of the local shear rate, G, and approach angle, θ. This model effectively accounts for the hydrodynamic interactions between approaching particles, which are not explicitly considered in the LKMC framework. For imperfect collisions, the derived collision efficiency, ε = 1 − ∫₀^{π/2} sin θ exp(−2 cot θ Γ_agg/G) dθ, was found to depend only on Γ_agg/G, where Γ_agg is the specified aggregation rate. For aggregating platelets in tube flow, Γ_agg = 0.683 s⁻¹ predicts the experimentally measured ε across a physiological range (G = 40-1000 s⁻¹) and is consistent with αIIbβ3-fibrinogen bond dynamics. Aggregation in parabolic flow resulted in the largest aggregates forming near the wall, where shear rate and residence time were maximal; however, intermediate regions between the wall and the center exhibited the highest aggregation rate due to depletion of reactants nearest the wall. Then, motivated by stenotic or valvular flows, we employed the LKMC simulation developed here for baffled geometries that exhibit regions of squeezing flow and standing recirculation zones. In these calculations, the largest aggregates were formed within the vortices (maximal residence time), while squeezing flow regions corresponded to zones of highest aggregation rate.
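The collision-efficiency integral ε = 1 − ∫₀^{π/2} sin θ exp(−2 cot θ Γ_agg/G) dθ has no elementary closed form but is cheap to evaluate numerically. A sketch using the midpoint rule; an open rule is convenient because cot θ diverges at θ = 0, where the integrand nevertheless vanishes for Γ_agg/G > 0.

```python
import math

def collision_efficiency(gamma_over_G, n=20_000):
    """Midpoint-rule evaluation of
    eps = 1 - integral_0^{pi/2} sin(t) * exp(-2*cot(t)*Gamma_agg/G) dt."""
    a, b = 0.0, math.pi / 2.0
    h = (b - a) / n
    s = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        s += math.sin(t) * math.exp(-2.0 * gamma_over_G / math.tan(t))
    return 1.0 - s * h

# Efficiency vanishes as Gamma_agg/G -> 0 and grows toward 1 with it.
for x in (0.0, 0.683 / 1000.0, 0.683 / 100.0):
    print(x, collision_efficiency(x))
```

The printed values show ε increasing monotonically with Γ_agg/G, consistent with faster bonding (or slower shear) making collisions more efficient.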
Numerical simulation of pulsed neutron induced gamma log using Monte Carlo method
NASA Astrophysics Data System (ADS)
Byeongho, Byeongho; Hwang, Seho; Shin, Jehyun; Park, Chang Je; Kim, Jongman; Kim, Ki-Seog
2015-04-01
The neutron-induced gamma log has recently taken on a key role in shale plays. This study was performed to understand the energy spectrum characteristics of the neutron-induced gamma log using the Monte Carlo method. A neutron generator emitting 14 MeV neutrons was used. Thermal-neutron and capture-gamma fluxes were calculated at detectors arranged at 10 cm intervals from the neutron generator. Sandstone, limestone, granite, and basalt were selected to estimate and simulate response characteristics using MCNP. The sonde model in MCNP also incorporated a design for reducing the effects of natural gamma radiation (K, Th, U) and backscattering. Analysis of the capture-gamma energy spectra recorded at the detector in the numerical sonde model showed that elements that have large neutron capture cross sections and are abundant in the formation, such as calcium, iron, silicon, magnesium, aluminium and hydrogen, were detected. These results can help in designing the optimal array of neutron and capture-gamma detectors.
HRMC_1.1: Hybrid Reverse Monte Carlo method with silicon and carbon potentials
NASA Astrophysics Data System (ADS)
Opletal, G.; Petersen, T. C.; O'Malley, B.; Snook, I. K.; McCulloch, D. G.; Yarovsky, I.
2011-02-01
The Hybrid Reverse Monte Carlo (HRMC) code models the atomic structure of materials using a combination of constraints, including experimental diffraction data and an empirical energy potential. The energy constraint takes the form of either the Environment Dependent Interatomic Potential (EDIP), for carbon and silicon, or the original and modified Stillinger-Weber potentials, applicable to silicon. In this version, an update is made to correct an error in the EDIP carbon energy calculation routine. New version program summary: Program title: HRMC version 1.1 Catalogue identifier: AEAO_v1_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAO_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 36 991 No. of bytes in distributed program, including test data, etc.: 907 800 Distribution format: tar.gz Programming language: FORTRAN 77 Computer: Any computer capable of running executables produced by the g77 Fortran compiler. Operating system: Unix, Windows RAM: Depends on the type of empirical potential used, the number of atoms and which constraints are employed. Classification: 7.7 Catalogue identifier of previous version: AEAO_v1_0 Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 777 Does the new version supersede the previous version?: Yes Nature of problem: Atomic modelling using empirical potentials and experimental data. Solution method: Monte Carlo Reasons for new version: An error in a term associated with the calculation of energies using the EDIP carbon potential, which resulted in incorrect energies. Summary of revisions: Fix to correct brackets in the two-body part of the EDIP carbon potential routine.
Additional comments: The code is not standard FORTRAN 77 but includes some additional features and therefore generates errors when compiled using the Nag95 compiler. It does compile successfully with the GNU g77 compiler (http://www.gnu.org/software/fortran/fortran.html). Running time: Depends on the type of empirical potential used, the number of atoms and which constraints are employed. The test included in the distribution took 37 minutes on a DEC Alpha PC.
Dosimetric validation of Acuros XB with Monte Carlo methods for photon dose calculations
Bush, K.; Gagne, I. M.; Zavgorodni, S.; Ansbacher, W.; Beckham, W.
2011-04-15
Purpose: The dosimetric accuracy of the recently released Acuros XB advanced dose calculation algorithm (Varian Medical Systems, Palo Alto, CA) is investigated for single radiation fields incident on homogeneous and heterogeneous geometries, and a comparison is made to the analytical anisotropic algorithm (AAA). Methods: Ion chamber measurements for the 6 and 18 MV beams within a range of field sizes (from 4.0×4.0 to 30.0×30.0 cm²) are used to validate Acuros XB dose calculations within a unit density phantom. The dosimetric accuracy of Acuros XB in the presence of lung, low-density lung, air, and bone is determined using BEAMnrc/DOSXYZnrc calculations as a benchmark. Calculations using the AAA are included for reference to a current superposition/convolution standard. Results: Basic open field tests in a homogeneous phantom reveal an Acuros XB agreement with measurement to within ±1.9% in the inner field region for all field sizes and energies. Calculations on a heterogeneous interface phantom were found to agree with Monte Carlo calculations to within ±2.0% (σ_MC = 0.8%) in lung (ρ = 0.24 g cm⁻³) and within ±2.9% (σ_MC = 0.8%) in low-density lung (ρ = 0.1 g cm⁻³). In comparison, differences of up to 10.2% and 17.5% in lung and low-density lung were observed in the equivalent AAA calculations. Acuros XB dose calculations performed on a phantom containing an air cavity (ρ = 0.001 g cm⁻³) were found to be within the range of ±1.5% to ±4.5% of the BEAMnrc/DOSXYZnrc calculated benchmark (σ_MC = 0.8%) in the tissue above and below the air cavity. A comparison of Acuros XB dose calculations performed on a lung CT dataset with a BEAMnrc/DOSXYZnrc benchmark shows agreement within ±2%/2 mm and indicates that the remaining differences are primarily a result of differences in physical material assignments within a CT dataset.
Conclusions: By considering the fundamental particle interactions in matter based on theoretical interaction cross sections, the Acuros XB algorithm is capable of modeling radiotherapy dose deposition with accuracy only previously achievable with Monte Carlo techniques.
Inverse Monte-Carlo and Demon Methods for Effective Polyakov Loop Models of SU(N)-YM
Christian Wozar; Tobias Kaestner; Bjoern H. Wellegehausen; Andreas Wipf; Thomas Heinzl
2008-08-29
We study effective Polyakov loop models for SU(N) Yang-Mills theories at finite temperature. In particular, effective models for SU(3) YM with an additional adjoint Polyakov loop potential are considered. The rich phase structure, including a center and an anti-center directed phase, is reproduced with an effective model obtained via the inverse Monte Carlo method. The demon method, as a possibility for obtaining the effective models' couplings, is compared to the method of Schwinger-Dyson equations. Thermalization effects of the microcanonical and canonical demon methods are analyzed. Finally, the elaborate canonical demon method is applied to the finite-temperature SU(4) YM phase transition.
On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems
Walsh, Jon
2015-08-31
The presentation summarizes work performed over summer 2015 related to Monte Carlo simulations. A flexible probability table interpolation scheme has been implemented and tested with results comparing favorably to the continuous phase-space on-the-fly approach.
Parallel Markov Chain Monte Carlo Methods for Large Scale Statistical Inverse Problems
Wang, Kainan
2014-04-18
but also the uncertainty of these estimations. Markov chain Monte Carlo (MCMC) is a useful technique to sample the posterior distribution and information can be extracted from the sampled ensemble. However, MCMC is very expensive to compute, especially...
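The MCMC machinery referred to above is, at its simplest, random-walk Metropolis: propose a perturbed state and accept with probability min(1, posterior ratio). A minimal sketch on a toy one-dimensional posterior; the thesis targets large-scale statistical inverse problems, and this only illustrates the sampling kernel whose cost motivates parallelization.

```python
import math
import random

def metropolis(logpost, x0, step, n, seed=5):
    """Random-walk Metropolis sampler for an unnormalised log-posterior."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = logpost(xp)
        # accept with probability min(1, exp(lpp - lp))
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy posterior: Gaussian with mean 2 and unit variance.
logpost = lambda x: -0.5 * (x - 2.0) ** 2
chain = metropolis(logpost, x0=0.0, step=1.0, n=50_000)
burn = chain[5_000:]
print(sum(burn) / len(burn))
```

After burn-in, the sample mean recovers the posterior mean to within Monte Carlo error; information such as posterior variance can be extracted from the same ensemble.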
Monte Carlo particle-in-cell methods for the simulation of the Vlasov-Maxwell gyrokinetic equations
NASA Astrophysics Data System (ADS)
Bottino, A.; Sonnendrücker, E.
2015-10-01
The particle-in-cell (PIC) algorithm is the most popular method for the discretisation of the general 6D Vlasov-Maxwell problem, and it is widely used also for the simulation of the 5D gyrokinetic equations. The method consists of coupling a particle-based algorithm for the Vlasov equation with a grid-based method for the computation of the self-consistent electromagnetic fields. In this review we derive a Monte Carlo PIC finite-element model starting from a gyrokinetic discrete Lagrangian. The variations of the Lagrangian are used to obtain the time-continuous equations of motion for the particles and the finite-element approximation of the field equations. The Noether theorem for the semi-discretised system implies a certain number of conservation properties for the final set of equations. Moreover, the PIC method can be interpreted as a probabilistic, Monte Carlo-like method, consisting of calculating integrals of the continuous distribution function using a finite set of discrete markers. The nonlinear interactions, along with numerical errors, introduce random effects after some time. Therefore, the same tools for error analysis and error reduction used in Monte Carlo numerical methods can be applied to PIC simulations.
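The "PIC as Monte Carlo" viewpoint means that moments of the distribution function are estimated as weighted sums over markers, exactly like any Monte Carlo integral. A minimal sketch for velocity-space moments of a drifting Maxwellian; the field solve and particle push of a real PIC code are omitted, and the drift and thermal speed are arbitrary illustrative values.

```python
import random

def moments_from_markers(velocities, weights):
    """Monte Carlo estimate of fluid moments (density, mean velocity,
    temperature-like variance) from weighted markers."""
    n = sum(weights)
    u = sum(w * v for w, v in zip(weights, velocities)) / n
    T = sum(w * (v - u) ** 2 for w, v in zip(weights, velocities)) / n
    return n, u, T

# Markers sampled from a Maxwellian with drift 1.0 and thermal speed 2.0.
rng = random.Random(6)
v = [rng.gauss(1.0, 2.0) for _ in range(100_000)]
w = [1.0] * len(v)
n, u, T = moments_from_markers(v, w)
print(n, u, T)
```

The estimated drift and variance converge to the true values of 1.0 and 4.0 at the usual 1/sqrt(N) Monte Carlo rate, which is why variance-reduction ideas carry over to PIC.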
[Study of Determination of Oil Mixture Components Content Based on Quasi-Monte Carlo Method].
Wang, Yu-tian; Xu, Jing; Liu, Xiao-fei; Chen, Meng-han; Wang, Shi-tao
2015-05-01
Gasoline, kerosene and diesel are processed from crude oil with different distillation ranges. The boiling range of gasoline is 35~205 °C, that of kerosene is 140~250 °C, and that of diesel is 180~370 °C. The carbon chain lengths of the different mineral oils also differ: gasoline lies within the range C7 to C11, kerosene within C12 to C15, and diesel within C15 to C18. The recognition and quantitative measurement of the three kinds of mineral oil are based on the different fluorescence spectra formed by their different carbon-number distribution characteristics. Mineral oil pollution occurs frequently, so monitoring the mineral oil content in the ocean is very important. A new method for determining the component contents of a mineral oil mixture with overlapping spectra is proposed, based on calculating characteristic-peak power integrals of the three-dimensional fluorescence spectrum using the quasi-Monte Carlo method, combined with an optimization algorithm that finds the optimum number of characteristic peaks and the range of the integration region, and solving the resulting nonlinear equations with the BFGS method (a rank-two update method named after the first letters of its inventors' surnames: Broyden, Fletcher, Goldfarb and Shanno). The peak power accumulated over the determined points in the selected area is sensitive to small changes in the fluorescence spectral line, so the measurement of small changes in component content is sensitive. At the same time, compared with single-point measurement, measurement sensitivity is improved because selecting many points reduces the influence of random error. Three-dimensional fluorescence spectra and fluorescence contour spectra of the single mineral oils and of the mixture were measured, taking kerosene, diesel and gasoline as research objects, with each single mineral oil treated as a whole rather than in terms of its individual components.
Six characteristic peaks were selected for characteristic-peak power integration to determine the component contents of the mineral oil mixture of gasoline, kerosene and diesel by the optimization algorithm. Compared with the single-point peak and mean measurement methods, measurement sensitivity is improved by about a factor of 50. The implementation of high-precision measurement of the mixture component contents of gasoline, kerosene and diesel provides a practical algorithm for the direct determination of component contents of mixtures with overlapping spectra, without chemical separation. PMID:26415451
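Quasi-Monte Carlo replaces pseudo-random points with low-discrepancy sequences, which typically converge faster than plain Monte Carlo for smooth integrands such as the peak-power integrals above. A sketch using a 2-D Halton sequence; this is illustrative only and does not reproduce the paper's integration regions or spectra.

```python
def halton(i, base):
    """i-th element of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_integrate_2d(f, n):
    """Quasi-Monte Carlo integration over the unit square using a
    2-D Halton sequence (bases 2 and 3)."""
    s = 0.0
    for i in range(1, n + 1):
        s += f(halton(i, 2), halton(i, 3))
    return s / n

# Example: the integral of x*y over [0,1]^2 is exactly 0.25.
est = qmc_integrate_2d(lambda x, y: x * y, 20_000)
print(est)
```

For this smooth integrand the Halton estimate lands within a fraction of a percent of the exact value, far better than the ~1/sqrt(N) error of pseudo-random sampling.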
Adaptive δf Monte Carlo Method for Simulation of RF-heating and Transport in Fusion Plasmas
Hoeoek, J.; Hellsten, T.
2009-11-26
Determining the distribution function of the plasma species is essential for modeling heating and transport of fusion plasmas. Characteristic of RF-heating is the creation of particle distributions with a high-energy tail. In the high-energy region the deviation from a Maxwellian distribution is large, while in the low-energy region the distribution is close to Maxwellian due to the velocity dependence of the collision frequency. Because of the geometry and orbit topology, Monte Carlo methods are frequently used. To avoid simulating the thermal part, δf methods are beneficial. Here we present a new δf Monte Carlo method with an adaptive scheme for reducing the total variance and sources, suitable for calculating the distribution function under RF-heating.
NASA Astrophysics Data System (ADS)
Zhang, Yue; Sun, Xian; Thiele, Antje; Hinz, Stefan
2015-10-01
Synthetic aperture radar (SAR) systems, such as TanDEM-X, TerraSAR-X and Cosmo-SkyMed, acquire imagery with high spatial resolution (HR), making it possible to observe objects in urban areas in high detail. In this paper, we propose a new top-down framework for three-dimensional (3D) building reconstruction from HR interferometric SAR (InSAR) data. Unlike most methods proposed before, we adopt a generative model and carry out the reconstruction by maximum a posteriori (MAP) estimation through Monte Carlo methods. The motivation for this strategy is that the noisiness of SAR images calls for a thorough prior model to better cope with the inherent amplitude and phase fluctuations. In the reconstruction process, according to the radar configuration and the building geometry, a 3D building hypothesis is mapped to the SAR image plane and decomposed into feature regions such as layover, corner line, and shadow. The statistical properties of the intensity, interferometric phase and coherence of each region are then explored and included as region terms. Roofs are not directly considered, as in most cases they are mixed with walls into the layover area. When estimating the similarity between the building hypothesis and the real data, the prior and the region terms are taken into consideration, together with an edge term related to the contours of the layover and corner line. In the optimization step, special transition kernels are designed in order to achieve convergent reconstruction outputs and avoid local extrema. The proposed framework is evaluated on a TanDEM-X dataset and performs well for building reconstruction.
Range Verification Methods in Particle Therapy: Underlying Physics and Monte Carlo Modeling
Kraan, Aafke Christine
2015-01-01
Hadron therapy allows for highly conformal dose distributions and better sparing of organs at risk, thanks to the characteristic dose deposition as a function of depth. However, the quality of hadron therapy treatments is closely connected with the ability to predict and achieve a given beam range in the patient. Currently, uncertainties in particle range lead to the employment of safety margins, at the expense of treatment quality. Much research in particle therapy is therefore aimed at developing methods to verify the particle range in patients. Non-invasive in vivo monitoring of the particle range can be performed by detecting secondary radiation emitted from the patient as a result of nuclear interactions of charged hadrons with tissue, including β+ emitters, prompt photons, and charged fragments. The correctness of the dose delivery can be verified by comparing measured and pre-calculated distributions of the secondary particles. The reliability of Monte Carlo (MC) predictions is a key issue. Correctly modeling the production of secondaries is a non-trivial task, because it involves nuclear physics interactions at energies where no rigorous theories exist to describe them. The goal of this review is to provide a comprehensive overview of various aspects of modeling the physics processes for range verification with secondary particles produced in proton, carbon, and heavier-ion irradiation. We discuss electromagnetic and nuclear interactions of charged hadrons in matter, followed by a summary of some widely used MC codes in hadron therapy. We then describe selected examples of how these codes have been validated and used in three range verification techniques: PET, prompt gamma, and charged particle detection. We include research studies and clinically applied methods. For each of the techniques, we point out advantages and disadvantages, as well as clinical challenges still to be addressed, focusing on MC simulation aspects. PMID:26217586
Capote, Roberto; Smith, Donald L.
2008-12-15
The Unified Monte Carlo method (UMC) has been suggested to avoid certain limitations and approximations inherent to the well-known Generalized Least Squares (GLS) method of nuclear data evaluation. This contribution reports on an investigation of the performance of the UMC method in comparison with the GLS method. This is accomplished by applying both methods to simple examples with few input values that were selected to explore various features of the evaluation process that impact upon the quality of an evaluation. Among the issues explored are: i) convergence of UMC results with the number of Monte Carlo histories and the ranges of sampled values; ii) a comparison of Monte Carlo sampling using the Metropolis scheme and a brute force approach; iii) the effects of large data discrepancies; iv) the effects of large data uncertainties; v) the effects of strong or weak model or experimental data correlations; and vi) the impact of ratio data and integral data. Comparisons are also made of the evaluated results for these examples when the input values are first transformed to comparable logarithmic values prior to performing the evaluation. Some general conclusions that are applicable to more realistic evaluation exercises are offered.
Beyond weak constraint 4DVAR: a bridge to Monte Carlo methods?
NASA Astrophysics Data System (ADS)
Cornford, Dan; Shen, Yuan; Vrettas, Michael; Opper, Manfred
2010-05-01
Data assimilation is often motivated from a Bayesian perspective; however, most implementations introduce approximations, either based on a very small number of samples (ensemble Kalman filter/smoother) to perform a statistical linearisation of the system model, or seeking an approximate mode of the posterior distribution (4DVAR). In statistics, the alternative approaches are based on Monte Carlo sampling, using particle filters/smoothers or Langevin path sampling, neither of which is likely to scale well enough to be applied to realistic models in the near future. In this work we explain a new approach to data assimilation based on a variational treatment of the posterior distribution over paths. The method can be understood as being similar to weak constraint 4DVAR, where we seek the best approximating posterior distribution over paths rather than simply the most likely path. The method, which we call Bayesian 4DVAR, is based on the minimisation of the Kullback-Leibler divergence between distributions, and is suited to applications where simple additive model error is present as a random forcing in the system equations. The approximating distribution used is a Gaussian process, described by a time-varying linear dynamical system, whose parameters form the control variables for the problem. We outline how this approach can be seen as an extension to weak constraint 4DVAR, in which the posterior covariance is additionally approximated. We illustrate the method in operation on a range of toy examples, including the Lorenz 40D and Kuramoto-Sivashinsky PDE examples. We compare the approach to ensemble and traditional 4DVAR approaches to data assimilation and show its limitations. A principal limitation is that the method systematically underestimates the marginal (with respect to time) state covariance, although we show empirically that this effect is minor given sufficient observations.
We discuss possible extensions based on a mean field approximation that will allow the application of the method to large systems. We also show how a local parametrisation of the time varying state between observations using an orthogonal polynomial basis allows further reduction in the number of parameters that need to be estimated.
Forward treatment planning for modulated electron radiotherapy (MERT) employing Monte Carlo methods
Henzen, D. Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Lössl, K.; Aebersold, D. M.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.
2014-03-15
Purpose: This paper describes the development of a forward planning process for modulated electron radiotherapy (MERT). The approach is based on a previously developed electron beam model used to calculate dose distributions of electron beams shaped by a photon multileaf collimator (pMLC). Methods: As the electron beam model has already been implemented into the Swiss Monte Carlo Plan environment, the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) can be included in the planning process for MERT. In a first step, CT data are imported into Eclipse and a pMLC shaped electron beam is set up. This initial electron beam is then divided into segments, with the electron energy in each segment chosen according to the distal depth of the planning target volume (PTV) in beam direction. In order to improve the homogeneity of the dose distribution in the PTV, a feathering process (Gaussian edge feathering) is launched, which results in a number of feathered segments. For each of these segments a dose calculation is performed employing the in-house developed electron beam model along with the macro Monte Carlo dose calculation algorithm. Finally, an automated weight optimization of all segments is carried out and the total dose distribution is read back into Eclipse for display and evaluation. One academic and two clinical situations are investigated for possible benefits of MERT treatment compared to standard treatments performed in our clinics and treatment with a bolus electron conformal (BolusECT) method. Results: The MERT treatment plan of the academic case was superior to the standard single segment electron treatment plan in terms of organs at risk (OAR) sparing. Further, a comparison between an unfeathered and a feathered MERT plan showed better PTV coverage and homogeneity for the feathered plan, with V_95% increased from 90% to 96% and V_107% decreased from 8% to nearly 0%.
For a clinical breast boost irradiation, the MERT plan led to a similar homogeneity in the PTV compared to the standard treatment plan while the mean body dose was lower for the MERT plan. Regarding the second clinical case, a whole breast treatment, MERT resulted in a reduction of the lung volume receiving more than 45% of the prescribed dose when compared to the standard plan. On the other hand, the MERT plan leads to a larger low-dose lung volume and a degraded dose homogeneity in the PTV. For the clinical cases evaluated in this work, treatment plans using the BolusECT technique resulted in a more homogenous PTV and CTV coverage but higher doses to the OARs than the MERT plans. Conclusions: MERT treatments were successfully planned for phantom and clinical cases, applying a newly developed intuitive and efficient forward planning strategy that employs a MC based electron beam model for pMLC shaped electron beams. It is shown that MERT can lead to a dose reduction in OARs compared to other methods. The process of feathering MERT segments results in an improvement of the dose homogeneity in the PTV.
Recovering the inflationary potential: An analysis using flow methods and Markov chain Monte Carlo
NASA Astrophysics Data System (ADS)
Powell, Brian A.
Since its inception in 1980 by Guth [1], inflation has emerged as the dominant paradigm for describing the physics of the early universe. While inflation has matured theoretically over two decades, it has only recently begun to be rigorously tested observationally. Measurements of the cosmic microwave background (CMB) and large-scale structure surveys (LSS) have begun to unravel the mysteries of the inflationary epoch with exquisite and unprecedented accuracy. This thesis is a contribution to the effort of reconstructing the physics of inflation. This information is largely encoded in the potential energy function of the inflaton, the field that drives the inflationary expansion. With little theoretical guidance as to the probable form of this potential, reconstruction is a predominantly data-driven endeavor. This thesis presents an investigation of the constrainability of the inflaton potential given current CMB and LSS data. We develop a methodology based on the inflationary flow formalism that provides an assessment of our current ability to resolve the form of the inflaton potential in the face of experimental and statistical error. We find that there is uncertainty regarding the initial dynamics of the inflaton field, related to the poor constraints that can be drawn on the primordial power spectrum on large scales. We also investigate the future prospects of potential reconstruction, as might be expected when data from ESA's Planck Surveyor becomes available. We develop an approach that utilizes Markov chain Monte Carlo to analyze the statistical properties of the inflaton potential. Besides providing constraints on the parameters of the potential, this method makes it possible to perform model selection on the inflationary model space. While future data will likely determine the general features of the inflaton, there will likely be many different models that remain good fits to the data. 
Bayesian model selection will then be needed to draw comparisons between these different models in a statistically rigorous fashion.
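The MCMC approach described in this thesis abstract can be illustrated with a minimal random-walk Metropolis sampler. This is a generic sketch: the quadratic log-posterior below is a toy stand-in for a single-parameter inflationary likelihood, not the thesis's actual analysis.

```python
import math
import random

def log_post(x):
    # Toy quadratic log-posterior (a standard normal), standing in for the
    # likelihood of one inflaton-potential parameter.
    return -0.5 * x * x

def metropolis(n_steps, step=1.0, x0=0.0, seed=1):
    # Random-walk Metropolis: propose a Gaussian step, accept with
    # probability min(1, exp(new log-posterior - old log-posterior)).
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = xp, lpp
        chain.append(x)
    return chain

chain = metropolis(20000)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

The chain's sample mean and variance approximate the posterior's, which is how parameter constraints (and, with multiple chains per model, evidence estimates for model selection) are read off in practice.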
NASA Astrophysics Data System (ADS)
Rost, D.; Blümer, N.
2015-09-01
We present an algorithm for the computation of unbiased Green functions and self-energies for quantum lattice models, free from systematic errors and valid in the thermodynamic limit. The method combines direct lattice simulations using the Blankenbecler-Scalapino-Sugar quantum Monte Carlo (BSS-QMC) approach with controlled multigrid extrapolation techniques. We show that the half-filled Hubbard model is insulating at low temperatures even in the weak-coupling regime; the previously claimed Mott transition at intermediate coupling does not exist.
Williams, M. L.; Gehin, J. C.; Clarno, K. T.
2006-07-01
The TSUNAMI computational sequences currently in the SCALE 5 code system provide an automated approach to performing sensitivity and uncertainty analysis for eigenvalue responses, using either one-dimensional discrete ordinates or three-dimensional Monte Carlo methods. This capability has recently been expanded to address eigenvalue-difference responses such as reactivity changes. This paper describes the methodology and presents results obtained for an example advanced CANDU reactor design. (authors)
Geometrically-compatible 3-D Monte Carlo and discrete-ordinates methods
Morel, J.E.; Wareing, T.A.; McGhee, J.M.; Evans, T.M.
1998-12-31
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The purpose of this project was two-fold. The first purpose was to develop a deterministic discrete-ordinates neutral-particle transport scheme for unstructured tetrahedral spatial meshes, and implement it in a computer code. The second purpose was to modify the MCNP Monte Carlo radiation transport code to use adjoint solutions from the tetrahedral-mesh discrete-ordinates code to reduce the statistical variance of Monte Carlo solutions via a weight-window approach. The first task has resulted in a deterministic transport code that is much more efficient for modeling complex 3-D geometries than any previously existing deterministic code. The second task has resulted in a powerful new capability for dramatically reducing the cost of difficult 3-D Monte Carlo calculations.
NASA Astrophysics Data System (ADS)
Yesilyurt, Gokhan
Two of the primary challenges associated with the neutronic analysis of the Very High Temperature Reactor (VHTR) are accounting for resonance self-shielding in the particle fuel (contributing to the double heterogeneity) and accounting for temperature feedback due to Doppler broadening. The double heterogeneity challenge is addressed by defining a "double heterogeneity factor" (DHF) that allows conventional light water reactor (LWR) lattice physics codes to analyze VHTR configurations. The challenge of treating Doppler broadening is addressed by a new "on-the-fly" methodology that is applied during the random walk process with negligible impact on computational efficiency. Although this methodology was motivated by the need to treat temperature feedback in a VHTR, it is applicable to any reactor design. The on-the-fly Doppler methodology is based on a combination of Taylor and asymptotic series expansions. The type of series representation was determined by investigating the temperature dependence of U238 resonance cross sections in three regions: near the resonance peaks, mid-resonance, and the resonance wings. The coefficients for these series expansions were determined by regressions over the energy and temperature range of interest. The comparison of the broadened cross sections using this methodology with the NJOY cross sections was excellent. A Monte Carlo code was implemented to apply the combined regression model and used to estimate the additional computing cost which was found to be less than 1%. The DHF accounts for the effect of the particle heterogeneity on resonance absorption in particle fuel. The first level heterogeneity posed by the VHTR fuel particles is a unique characteristic that cannot be accounted for by conventional LWR lattice physics codes. On the other hand, Monte Carlo codes can take into account the detailed geometry of the VHTR including resolution of individual fuel particles without performing any type of resonance approximation. 
The DHF, basically a self-shielding factor, was found to be weakly dependent on space and fuel depletion; it depends strongly only on the packing fraction in a fuel compact. Therefore, it is proposed that DHFs be tabulated as a function of packing fraction to analyze the heterogeneous fuel in VHTR configurations with LWR lattice physics codes.
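The on-the-fly broadening idea above, fit a short series in temperature offline and then evaluate it cheaply during the random walk, can be sketched as follows. The two-term 1/sqrt(T) basis and the synthetic reference values are illustrative assumptions, not the dissertation's actual Taylor/asymptotic regression model.

```python
import math

def basis(T):
    # Two-term series in temperature: a constant plus a 1/sqrt(T) term
    # ("1/v-like" behaviour); a stand-in for the real series expansions.
    return (1.0, 1.0 / math.sqrt(T))

def fit_coeffs(T_grid, sigma_grid):
    # Least-squares fit via 2x2 normal equations, done once, offline.
    s00 = s01 = s11 = b0 = b1 = 0.0
    for T, sig in zip(T_grid, sigma_grid):
        f0, f1 = basis(T)
        s00 += f0 * f0; s01 += f0 * f1; s11 += f1 * f1
        b0 += f0 * sig; b1 += f1 * sig
    det = s00 * s11 - s01 * s01
    return ((s11 * b0 - s01 * b1) / det,
            (s00 * b1 - s01 * b0) / det)

def sigma_at(T, coeffs):
    # Cheap "on-the-fly" evaluation during the random walk: a dot product,
    # so the added computing cost per collision is tiny.
    a0, a1 = coeffs
    f0, f1 = basis(T)
    return a0 * f0 + a1 * f1

# Synthetic reference values standing in for NJOY-broadened data.
T_grid = [300.0, 600.0, 900.0, 1200.0, 1800.0]
sigma_ref = [2.0 + 50.0 / math.sqrt(T) for T in T_grid]
coeffs = fit_coeffs(T_grid, sigma_ref)
```

Evaluating `sigma_at` at any intermediate temperature then reproduces the reference behaviour without storing cross sections on a temperature grid.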
Use of Monte Carlo methods in environmental risk assessments at the INEL: Applications and issues
Harris, G.; Van Horn, R.
1996-06-01
The EPA is increasingly considering the use of probabilistic risk assessment techniques as an alternative or refinement of the current point estimate of risk. This report provides an overview of the probabilistic technique called Monte Carlo Analysis. Advantages and disadvantages of implementing a Monte Carlo analysis over a point estimate analysis for environmental risk assessment are discussed. The general methodology is provided along with an example of its implementation. A phased approach to risk analysis that allows iterative refinement of the risk estimates is recommended for use at the INEL.
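A minimal sketch of the Monte Carlo alternative to a point estimate: propagate assumed input distributions through a toy risk equation and read off percentiles of the resulting distribution. The model and distributions below are hypothetical placeholders, not INEL assessment data.

```python
import random

def simulate_risks(n, seed=42):
    # Toy multiplicative risk model: risk = concentration * intake * slope.
    # The distributions are illustrative assumptions only.
    rng = random.Random(seed)
    slope = 1e-6                                # (mg/day)^-1, fixed factor
    risks = []
    for _ in range(n):
        conc = rng.lognormvariate(0.0, 0.5)     # mg/L, uncertain input
        intake = rng.uniform(1.0, 3.0)          # L/day, uncertain input
        risks.append(conc * intake * slope)
    return sorted(risks)

risks = simulate_risks(10000)
median = risks[len(risks) // 2]                 # central tendency
p95 = risks[int(0.95 * len(risks))]             # upper-percentile estimate
```

Where a point estimate yields a single (often conservatively compounded) number, the simulated distribution separates the central tendency from the upper percentile, which is the advantage the report describes.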
Pauw, Brian R; Pedersen, Jan Skov; Tardif, Samuel; Takata, Masaki; Iversen, Bo B
2013-04-01
Monte Carlo (MC) methods, based on random updates and the trial-and-error principle, are well suited to retrieve form-free particle size distributions from small-angle scattering patterns of non-interacting low-concentration scatterers such as particles in solution or precipitates in metals. Improvements are presented to existing MC methods, such as a non-ambiguous convergence criterion, nonlinear scaling of contributions to match their observability in a scattering measurement, and a method for estimating the minimum visibility threshold and uncertainties on the resulting size distributions. PMID:23596341
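The trial-and-error principle described above can be sketched for a monodisperse test case: propose a new radius for one sphere in the ensemble and keep the move only if the fit improves. The sphere form factor and greedy chi-square update follow the general MC idea; the parameters and target data are illustrative, and the sketch omits the paper's refinements (observability scaling, visibility threshold, uncertainty estimation).

```python
import math
import random

def sphere_intensity(q, r):
    # Scattered intensity of a homogeneous sphere (unnormalised),
    # volume-squared weighted as in small-angle scattering.
    qr = q * r
    form = 3.0 * (math.sin(qr) - qr * math.cos(qr)) / qr ** 3
    return (r ** 3 * form) ** 2

def mc_fit(q_vals, I_target, n_spheres=30, n_steps=2000,
           r_lo=1.0, r_hi=10.0, seed=0):
    # Trial-and-error MC: random updates accepted only on improvement.
    rng = random.Random(seed)
    radii = [rng.uniform(r_lo, r_hi) for _ in range(n_spheres)]

    def chi2(rs):
        s = 0.0
        for q, it in zip(q_vals, I_target):
            model = sum(sphere_intensity(q, r) for r in rs) / len(rs)
            s += ((model - it) / it) ** 2
        return s

    chi_start = chi_best = chi2(radii)
    for _ in range(n_steps):
        i = rng.randrange(n_spheres)
        old = radii[i]
        radii[i] = rng.uniform(r_lo, r_hi)
        trial = chi2(radii)
        if trial < chi_best:
            chi_best = trial          # keep the improving move
        else:
            radii[i] = old            # revert otherwise
    return radii, chi_start, chi_best

# Synthetic target: monodisperse spheres of radius 5, no noise.
q_vals = [0.1 * k for k in range(1, 9)]
I_target = [sphere_intensity(q, 5.0) for q in q_vals]
radii, chi_start, chi_best = mc_fit(q_vals, I_target)
```

The final set of radii is a form-free representation of the size distribution; repeating the fit with different seeds is one way to estimate uncertainties on it.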
Pauw, Brian R.; Pedersen, Jan Skov; Tardif, Samuel; Takata, Masaki; Iversen, Bo B.
2013-01-01
Monte Carlo (MC) methods, based on random updates and the trial-and-error principle, are well suited to retrieve form-free particle size distributions from small-angle scattering patterns of non-interacting low-concentration scatterers such as particles in solution or precipitates in metals. Improvements are presented to existing MC methods, such as a non-ambiguous convergence criterion, nonlinear scaling of contributions to match their observability in a scattering measurement, and a method for estimating the minimum visibility threshold and uncertainties on the resulting size distributions. PMID:23596341
Kumar, Sudhir; Srinivasan, P; Sharma, S D; Saxena, Sanjay Kumar; Bakshi, A K; Dash, Ashutosh; Babu, D A R; Sharma, D N
2015-09-01
The Isotope Production and Application Division of the Bhabha Atomic Research Center developed (32)P patch sources for the treatment of superficial tumors. The surface dose rate of a newly developed (32)P patch source of nominal diameter 25 mm was measured experimentally using a standard extrapolation ionization chamber and Gafchromic EBT film. A Monte Carlo model of the (32)P patch source along with the extrapolation chamber was also developed to estimate the surface dose rates from these sources. The surface dose rates to tissue (cGy/min) measured using the extrapolation chamber and radiochromic films are 82.03±4.18 (k=2) and 79.13±2.53 (k=2), respectively. The two values of the surface dose rate measured using the two independent experimental methods agree with each other within a variation of 3.5%. The surface dose rate to tissue (cGy/min) estimated using the MCNP Monte Carlo code is 77.78±1.16 (k=2). The maximum deviation between the surface dose rates to tissue obtained by the Monte Carlo and extrapolation chamber methods is 5.2%, whereas the difference between the surface dose rates obtained by radiochromic film measurement and Monte Carlo simulation is 1.7%. The three values of the surface dose rate of the (32)P patch source obtained by the three independent methods agree with one another within the uncertainties associated with their measurements and calculation. This work has demonstrated that MCNP-based electron transport simulations are accurate enough for determining the dosimetry parameters of the indigenously developed (32)P patch sources for contact brachytherapy applications. PMID:26086681
A Straightforward Approach to Markov Chain Monte Carlo Methods for Item Response Models.
ERIC Educational Resources Information Center
Patz, Richard J.; Junker, Brian W.
1999-01-01
Demonstrates Markov chain Monte Carlo (MCMC) techniques that are well-suited to complex models with Item Response Theory (IRT) assumptions. Develops an MCMC methodology that can be routinely implemented to fit normal IRT models, and compares the approach to approaches based on Gibbs sampling. Contains 64 references. (SLD)
An Evaluation of a Markov Chain Monte Carlo Method for the Two-Parameter Logistic Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho; Cohen, Allan S.
The accuracy of the Markov Chain Monte Carlo (MCMC) procedure Gibbs sampling was considered for estimation of item parameters of the two-parameter logistic model. Data for the Law School Admission Test (LSAT) Section 6 were analyzed to illustrate the MCMC procedure. In addition, simulated data sets were analyzed using the MCMC, marginal Bayesian…
Monte Carlo Method for Calculating the Electrostatic Energy of a Molecule
Mascagni, Michael
, and the energy is constructed. The estimate is based on the walk on spheres and Green's function first passage, coupled by boundary conditions. A Monte Carlo estimate for the potential point values, their derivatives a fundamental role in intermolecular interactions and to a large extent determines molecular properties [2, 13
Madsen, Jonathan R
2013-08-13
for predicting molecule-specific ionization, excitation, and scattering cross sections in the very low energy regime that can be applied in a condensed history Monte Carlo track-structure code. The present methodology begins with the calculation of a solution...
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
of uncertainty including the effects of aerosols, clouds, carbon cycle, ocean circulation (http repository and oil reservoir modelling Considerable uncertainty about porosity of rock Astronomy "Random Because of Multilevel Monte Carlo, this is changing and there are now several research groups using MLMC
The Monte Carlo EM method for the parameter estimation of biological models
Horváth, András
(CTMC) is appro- priate for its modeling. Further, we assume that the evolution of the system under of the underlying CTMC, it is convenient to use such a variant of the EM approach, namely the Monte Carlo EM (MCEM, in particular, by a continuous time Markov chain (CTMC). In order to have a complete description of the CTMC
Anderson, Eric C
2005-06-01
This article presents an efficient importance-sampling method for computing the likelihood of the effective size of a population under the coalescent model of Berthier et al. Previous computational approaches, using Markov chain Monte Carlo, required many minutes to several hours to analyze small data sets. The approach presented here is orders of magnitude faster and can provide an approximation to the likelihood curve, even for large data sets, in a matter of seconds. Additionally, confidence intervals on the estimated likelihood curve provide a useful estimate of the Monte Carlo error. Simulations show the importance sampling to be stable across a wide range of scenarios and show that the N(e) estimator itself performs well. Further simulations show that the 95% confidence intervals around the N(e) estimate are accurate. User-friendly software implementing the algorithm for Mac, Windows, and Unix/Linux is available for download. Applications of this computational framework to other problems are discussed. PMID:15834143
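The efficiency argument for importance sampling can be seen in a generic tail-probability example (not the coalescent likelihood itself): draws are concentrated where the integrand matters and reweighted by the density ratio, so far fewer samples are needed than with naive simulation.

```python
import math
import random

def tail_prob_is(n, a=3.0, seed=7):
    # Importance sampling for P(X > a) with X ~ N(0,1): sample from a
    # shifted exponential proposal concentrated in the tail and average
    # the weights w(y) = p(y) / q(y).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = a + rng.expovariate(a)          # q(y) = a*exp(-a*(y-a)), y > a
        p = math.exp(-0.5 * y * y) / math.sqrt(2.0 * math.pi)
        q = a * math.exp(-a * (y - a))
        total += p / q
    return total / n

est = tail_prob_is(50000)
# The target event has probability ~1.35e-3, so naive sampling would
# waste almost every draw; the spread of the weights also gives a direct
# Monte Carlo error estimate, as in the article.
```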
Development of CT scanner models for patient organ dose calculations using Monte Carlo methods
NASA Astrophysics Data System (ADS)
Gu, Jianwei
There is a serious and growing concern about the CT dose delivered by diagnostic CT examinations or image-guided radiation therapy imaging procedures. To better understand and to accurately quantify radiation dose due to CT imaging, Monte Carlo based CT scanner models are needed. This dissertation describes the development, validation, and application of detailed CT scanner models including a GE LightSpeed 16 MDCT scanner and two image guided radiation therapy (IGRT) cone beam CT (CBCT) scanners, kV CBCT and MV CBCT. The modeling process considered the energy spectrum, beam geometry and movement, and bowtie filter (BTF). The methodology of validating the scanner models using reported CTDI values was also developed and implemented. Finally, the organ doses to different patients undergoing CT scans were obtained by integrating the CT scanner models with anatomically-realistic patient phantoms. The tube current modulation (TCM) technique was also investigated for dose reduction. It was found that for RPI-AM, the thyroid, kidneys and thymus received the largest doses of 13.05, 11.41 and 11.56 mGy/100 mAs from the chest scan, abdomen-pelvis scan and CAP scan, respectively, using 120 kVp protocols. For RPI-AF, the thymus, small intestine and kidneys received the largest doses of 10.28, 12.08 and 11.35 mGy/100 mAs from the chest scan, abdomen-pelvis scan and CAP scan, respectively, using 120 kVp protocols. The dose to the fetus of the 3 month pregnant patient phantom was 0.13 mGy/100 mAs and 0.57 mGy/100 mAs from the chest and kidney scans, respectively. For the chest scans of the 6 month and 9 month pregnant patient phantoms, the fetal doses were 0.21 mGy/100 mAs and 0.26 mGy/100 mAs, respectively. For MDCT with TCM schemas, the fetal dose can be reduced by 14%-25%. To demonstrate the applicability of the method proposed in this dissertation for modeling CT scanners, an additional MDCT scanner was modeled and validated using the measured CTDI values.
These results demonstrated that the CT scanner models in this dissertation were versatile and accurate tools for estimating dose to different patient phantoms undergoing various CT procedures. The organ doses from kV and MV CBCT were also calculated. This dissertation finally summarizes areas where future research can be performed including MV CBCT further validation and application, dose reporting software and image and dose correlation study.
Ertekin, Elif
We present an approach to calculation of point-defect optical and thermal ionization energies based on the highly accurate quantum Monte Carlo methods. The use of an inherently many-body theory that directly treats electron ...
Wang, Haifeng; Popov, Pavel P.; Pope, Stephen B.
2010-03-01
We study a class of methods for the numerical solution of the system of stochastic differential equations (SDEs) that arises in the modeling of turbulent combustion, specifically in the Monte Carlo particle method for the solution of the model equations for the composition probability density function (PDF) and the filtered density function (FDF). This system consists of an SDE for particle position and a random differential equation for particle composition. The numerical methods considered advance the solution in time with (weak) second-order accuracy with respect to the time step size. The four primary contributions of the paper are: (i) establishing that the coefficients in the particle equations can be frozen at the mid-time (while preserving second-order accuracy), (ii) examining the performance of three existing schemes for integrating the SDEs, (iii) developing and evaluating different splitting schemes (which treat particle motion, reaction and mixing on different sub-steps), and (iv) developing the method of manufactured solutions (MMS) to assess the convergence of Monte Carlo particle methods. Tests using MMS confirm the second-order accuracy of the schemes. In general, the use of frozen coefficients reduces the numerical errors. Otherwise no significant differences are observed in the performance of the different SDE schemes and splitting schemes.
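Contribution (i), freezing the coefficients at the mid-time, can be sketched on a scalar Ornstein-Uhlenbeck SDE: predict the state to the half step, then take the full step with the drift evaluated there. This is an illustrative predictor step only, under assumed model parameters, and not the paper's complete weak second-order scheme (no higher-order noise corrections).

```python
import math
import random

def ou_ensemble(n_particles=2000, n_steps=300, dt=0.05,
                theta=1.0, sigma=1.0, seed=5):
    # Advance an ensemble of particles for dX = -theta*X dt + sigma dW.
    # The drift coefficient is evaluated at a predicted mid-time state
    # ("frozen" at the mid-time) rather than at the start of the step.
    rng = random.Random(seed)
    xs = [0.0] * n_particles
    sqdt = math.sqrt(dt)
    for _ in range(n_steps):
        for i in range(n_particles):
            x = xs[i]
            x_mid = x - theta * x * dt / 2.0           # predictor to mid-time
            xs[i] = (x - theta * x_mid * dt            # drift frozen at mid-time
                     + sigma * sqdt * rng.gauss(0.0, 1.0))
    return xs

xs = ou_ensemble()
# The exact stationary variance of this SDE is sigma^2 / (2*theta) = 0.5,
# which the long-time ensemble variance should approach.
var = sum(x * x for x in xs) / len(xs)
```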
The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units
Hall, Clifford; School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030 ; Ji, Weixiao; Blaisten-Barojas, Estela; School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030
2014-02-01
We present a CPU–GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU–GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU–GPU duets. -- Highlights: •We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU–GPU duet. •The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU–GPU implementation. •Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles. •The testbed involves a polymeric system of oligopyrroles in the condensed phase. •The CPU–GPU parallelization includes dipole–dipole and Mie–Jones classic potentials.
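The Metropolis accept/reject kernel at the heart of the MMC engine is easiest to see on a lattice toy problem. The 2D Ising model below is a standard textbook illustration, an assumption of this sketch rather than the oligopyrrole system of the paper; the GPU work parallelizes exactly this kind of kernel.

```python
import math
import random

def ising_metropolis(L=16, beta=1.0, sweeps=200, seed=3):
    # Minimal Metropolis MC on a 2D Ising lattice with periodic boundaries:
    # flip one spin, accept with probability min(1, exp(-beta * dE)).
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

    def energy():
        # Nearest-neighbour energy, counting each bond once.
        e = 0
        for i in range(L):
            for j in range(L):
                e -= s[i][j] * (s[(i + 1) % L][j] + s[i][(j + 1) % L])
        return e

    e_start = energy()
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        dE = 2 * s[i][j] * nb                   # energy change of the flip
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[i][j] = -s[i][j]
    return e_start, energy()

e_start, e_end = ising_metropolis()
```

At this (low) temperature the chain relaxes from a random start toward an ordered, lower-energy state; in the paper's setting, each such local update would act on molecular coordinates held in GPU memory.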
NASA Astrophysics Data System (ADS)
Wu, Di M.; Zhao, S. S.; Lu, Jun Q.; Hu, Xin-Hua
2000-06-01
In Monte Carlo simulations of light propagating in biological tissues, photons propagating in the media are described as classical particles being scattered and absorbed randomly, and their paths are tracked individually. To obtain statistically significant results, however, a large number of photons is needed, and the calculations are time consuming and sometimes impossible with existing computing resources, especially when considering inhomogeneous boundary conditions. To overcome this difficulty, we have implemented a parallel computing technique in our Monte Carlo simulations, an approach well suited to the method since photon histories are statistically independent. Utilizing PVM (Parallel Virtual Machine, a parallel computing software package), parallel codes in both C and Fortran have been developed on the massively parallel Cray T3E computer and a local PC network running Unix/Sun Solaris. Our results show that parallel computing can significantly reduce the running time and make efficient use of low-cost personal computers. In this report, we present a numerical study of light propagation in a slab phantom of skin tissue using the parallel computing technique.
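Because photon histories are independent, the parallelization amounts to splitting histories across workers with independent random streams and summing the tallies afterwards. The sketch below runs the "workers" serially for simplicity (each call is what a PVM/MPI task would execute), and the purely absorbing slab is an illustrative assumption, not the skin-tissue phantom of the report.

```python
import math
import random

def transmitted(n_photons, mu=1.0, depth=2.0, seed=0):
    # One worker's tally: each photon in a purely absorbing slab travels an
    # exponential free path with attenuation coefficient mu; a photon whose
    # first free path exceeds the slab depth is transmitted.
    rng = random.Random(seed)
    return sum(1 for _ in range(n_photons) if rng.expovariate(mu) > depth)

# Parallel decomposition: independent seeded streams, one per worker,
# combined by a simple sum (no communication during the runs).
workers, per_worker = 4, 10000
total = sum(transmitted(per_worker, seed=k) for k in range(workers))
frac = total / (workers * per_worker)   # estimates exp(-mu * depth)
```

For this absorber the analytic transmission is exp(-mu*depth), so the combined tally can be checked directly; with scattering added, the same decomposition applies unchanged.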
Investigation of Collimator Influential Parameter on SPECT Image Quality: a Monte Carlo Study
Banari Bahnamiri, Sh.
2015-01-01
Background: Obtaining high quality images with a Single Photon Emission Computed Tomography (SPECT) device is the most important goal in nuclear medicine: if image quality is low, the possibility of a mistake in diagnosing and treating the patient rises. Studying the factors that affect the spatial resolution of imaging systems is thus deemed to be vital. One of the most important factors in SPECT imaging in nuclear medicine is the use of a collimator appropriate to the features of a given radiopharmaceutical in order to create the best image, since the collimator affects the Full Width at Half Maximum (FWHM), the main parameter of spatial resolution. Method: In this research, the simulation of the detector and collimator of the SPECT imaging device Model HD3, made by Philips Co., and the investigation of important collimator factors were carried out using the MCNP-4c code. Results: The experimental measurements and simulation calculations revealed a relative difference of less than 5%, confirming the accuracy of the MCNP simulation. Conclusion: This is the first essential step in the design and modelling of new collimators used for creating high quality images in nuclear medicine. PMID:25973410
Safigholi, Habib; Faghihi, Reza; Jashni, Somaye Karimi; Meigooni, Ali S.
2012-04-15
Purpose: The goal of this study is to determine a method for Monte Carlo (MC) characterization of the miniature electronic brachytherapy x-ray sources (MEBXS) and to set dosimetric parameters according to the TG-43U1 formalism. TG-43U1 parameters were used to get optimal designs of MEBXS. Parameters that affect the dose distribution, such as anode shapes, target thickness, target angles, and electron beam source characteristics, were evaluated. Optimized MEBXS designs were obtained and used to determine radial dose functions and 2D anisotropy functions in the electron energy range of 25-80 keV. Methods: Tungsten anode material was considered in two different geometries, hemispherical and conical-hemisphere. These configurations were analyzed with the MCNP4C MC code using several different optimization techniques. The first optimization compared target thickness layers versus electron energy. These optimized thicknesses were compared with published results by Ihsan et al. [Nucl. Instrum. Methods Phys. Res. B 264, 371-377 (2007)]. The second optimization evaluated electron source characteristics by changing the cathode shapes and electron energies. Electron sources studied included: (1) point sources, (2) uniform cylinders, and (3) nonuniform cylindrical shell geometries. The third optimization was used to assess the apex angle of the conical-hemisphere target. The goal of these optimizations was to produce 2D-dose anisotropy functions closer to unity. An overall optimized MEBXS was developed from this analysis. The results obtained from this model were compared to known characteristics of HDR {sup 125}I, LDR {sup 103}Pd, and the Xoft Axxent electronic brachytherapy source (XAEBS) [Med. Phys. 33, 4020-4032 (2006)]. Results: The optimized anode thickness as a function of electron energy is fitted by the linear equation Y ({mu}m) = 0.0459X (keV)-0.7342. The optimized electron source geometry is obtained for a disk-shaped parallel beam (uniform cylinder) with 0.9 mm radius.
The TG-43 distribution is less sensitive to the shape of the conical-hemisphere anode than the hemispherical anode. However, the optimized apex angle of conical-hemisphere anode was determined to be 60 deg. For the hemispherical targets, calculated radial dose function values at a distance of 5 cm were 0.137, 0.191, 0.247, and 0.331 for 40, 50, 60, and 80 keV electrons, respectively. These values for the conical-hemisphere targets are 0.165, 0.239, 0.305, and 0.412, respectively. Calculated 2D anisotropy functions values for the hemispherical target shape were F(1 cm, 0 deg.) = 1.438 and F(1 cm, 0 deg.) = 1.465 for 30 and 80 keV electrons, respectively. The corresponding values for conical-hemisphere targets are 1.091 and 1.241, respectively. Conclusions: A method for the characterizations of MEBXS using TG-43U1 dosimetric data using the MC MCNP4C has been presented. The effects of target geometry, thicknesses, and electron source geometry have been investigated. The final choices of MEBXS design are conical-hemisphere target shapes having an apex angle of 60 deg. Tungsten material having an optimized thickness versus electron energy and a 0.9 mm radius of uniform cylinder as a cathode produces optimal electron source characteristics.
Tang, Ke; Zhang, Jinfeng; Liang, Jie
2014-04-01
Loops in proteins are flexible regions connecting regular secondary structures. They are often involved in protein functions through interacting with other molecules. The irregularity and flexibility of loops make their structures difficult to determine experimentally and challenging to model computationally. Conformation sampling and energy evaluation are the two key components in loop modeling. We have developed a new method for loop conformation sampling and prediction based on a chain growth sequential Monte Carlo sampling strategy, called Distance-guided Sequential chain-Growth Monte Carlo (DISGRO). With an energy function designed specifically for loops, our method can efficiently generate high quality loop conformations with low energy that are enriched with near-native loop structures. The average minimum global backbone RMSD for 1,000 conformations of 12-residue loops is 1.53 Å, with a lowest-energy RMSD of 2.99 Å and an average ensemble RMSD of 5.23 Å. A novel geometric criterion is applied to speed up calculations. The computational cost of generating 1,000 conformations for each of the x loops in a benchmark dataset is only about 10 cpu minutes for 12-residue loops, compared to ca. 180 cpu minutes using the FALCm method. Test results on benchmark datasets show that DISGRO performs comparably or better than previous successful methods, while requiring far less computing time. DISGRO is especially effective in modeling longer loops (10-17 residues). PMID:24763317
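Chain-growth sequential Monte Carlo is easiest to see on a lattice toy problem: grow a self-avoiding walk one step at a time among unoccupied neighbours and carry a Rosenbluth weight to correct for the biased growth. This is a generic illustration of the sampling strategy, not DISGRO's distance-guided, energy-scored protein-loop version.

```python
import random

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def grow_saw(n_steps, rng):
    # Grow one self-avoiding walk step by step, choosing uniformly among
    # unoccupied neighbours; returns (walk, Rosenbluth weight), or None
    # if the chain traps itself in a dead end.
    walk = [(0, 0)]
    occupied = {(0, 0)}
    weight = 1.0
    for _ in range(n_steps):
        x, y = walk[-1]
        options = [(x + dx, y + dy) for dx, dy in MOVES
                   if (x + dx, y + dy) not in occupied]
        if not options:
            return None
        weight *= len(options)        # Rosenbluth weight corrects the bias
        nxt = rng.choice(options)
        walk.append(nxt)
        occupied.add(nxt)
    return walk, weight

rng = random.Random(2)
num = den = 0.0
for _ in range(400):
    res = grow_saw(20, rng)
    if res is None:
        continue
    walk, w = res
    x, y = walk[-1]
    num += w * (x * x + y * y)        # weight observables by chain weight
    den += w
r2 = num / den   # weighted mean squared end-to-end distance, 20-step SAWs
```

In DISGRO the uniform neighbour choice is replaced by distance-guided moves toward the loop's anchor residue, and the weights feed into energy-based selection of near-native conformations.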
NASA Astrophysics Data System (ADS)
Khisamutdinov, A. I.; Velker, N. N.
2014-05-01
The talk examines a system of pairwise interacting particles, which models a rarefied gas in accordance with the nonlinear Boltzmann equation, the master equations of the Markov evolution of this system, and the corresponding numerical Monte Carlo methods. The selection of an optimal method for simulating rarefied gas dynamics depends on the spatial size of the gas flow domain. For problems with a Knudsen number Kn of order unity, "imitation", or "continuous time", Monte Carlo methods ([2]) are quite adequate and competitive. However, if Kn <= 0.1 (large spatial sizes), the excessive precision of these methods, namely the need to examine all pairs of particles, leads to a significant increase in computational cost (complexity). We are interested in constructing optimal methods for Boltzmann equation problems with sufficiently large spatial flow sizes. By optimal, we mean algorithms for parallel computation to be implemented on high-performance multiprocessor computers. The characteristic property of large systems is the weak dependence of their subparts on each other over sufficiently small time intervals. This property is exploited in approximate methods that use various splittings of the operator of the corresponding master equations. In this paper, we develop an approximate method based on splitting the operator of the master equation system "over groups of particles" ([7]). The essence of the method is that the system of particles is divided into spatial subparts that are modeled independently over small intervals of time using the precise "imitation" method. This type of splitting differs from the well-known splitting "over collisions and displacements", which is an attribute of Direct Simulation Monte Carlo methods; a second attribute of those methods is the grid of "interaction cells", which is completely absent in the imitation methods.
The main part of the talk concerns the parallelization of the imitation algorithms with splitting, using the MPI library. The newly constructed algorithms are applied to two problems: the propagation of a temperature discontinuity, and plane Poiseuille flow in a field of external forces. In particular, on the basis of the numerical solutions, comparative estimates of the computational cost are given for all algorithms under consideration.
NASA Astrophysics Data System (ADS)
Wang, Dong; Tse, Peter W.
2015-05-01
Slurry pumps are commonly used in oil-sand mining for pumping mixtures of abrasive liquids and solids. These operations cause constant wear of slurry pump impellers, which results in the breakdown of the slurry pumps. This paper develops a prognostic method for estimating the remaining useful life of slurry pump impellers. First, a moving-average wear degradation index is proposed to assess the performance degradation of the slurry pump impeller. Second, a state space model of the proposed health index is constructed. A general sequential Monte Carlo method is employed to derive the parameters of the state space model. The remaining useful life of the slurry pump impeller is estimated by extrapolating the established state space model to a specified alert threshold. Data collected from an industrial oil sand pump were used to validate the developed method. The results show that the accuracy of the developed method improves as more data become available.
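As an illustration of the sequential Monte Carlo step described above, here is a minimal particle filter for a degradation index with an alert threshold. The linear drift model, noise levels, and threshold value are illustrative assumptions, not the authors' pump model.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_rul(observations, n_particles=2000, threshold=10.0):
    """Toy sequential Monte Carlo filter for a drifting degradation index.

    Assumed state model:       x_t = x_{t-1} + drift + process noise
    Assumed observation model: y_t = x_t + measurement noise
    """
    particles = rng.normal(0.0, 0.1, n_particles)        # initial state cloud
    drifts = rng.normal(0.5, 0.2, n_particles)           # per-particle drift
    for y in observations:
        particles += drifts + rng.normal(0.0, 0.1, n_particles)  # propagate
        weights = np.exp(-0.5 * ((y - particles) / 0.2) ** 2)    # likelihood
        weights /= weights.sum()
        keep = rng.choice(n_particles, n_particles, p=weights)   # resample
        particles, drifts = particles[keep], drifts[keep]
    # Extrapolate each particle to the alert threshold -> RUL distribution.
    rul = np.maximum(threshold - particles, 0.0) / np.maximum(drifts, 1e-6)
    return float(np.median(rul))

obs = 0.5 * np.arange(1, 11)          # synthetic degradation measurements
rul_med = particle_filter_rul(obs)
print(round(rul_med, 1))
```

Each surviving particle carries its own drift estimate, so extrapolating every particle to the threshold yields a full remaining-useful-life distribution rather than a single point estimate.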
Nuclear Level Density of ${}^{161}$Dy in the Shell Model Monte Carlo Method
Özen, Cem; Nakada, Hitoshi
2012-01-01
We extend the shell-model Monte Carlo applications to the rare-earth region to include the odd-even nucleus ${}^{161}$Dy. The projection on an odd number of particles leads to a sign problem at low temperatures making it impractical to extract the ground-state energy in direct calculations. We use level counting data at low energies and neutron resonance data to extract the shell model ground-state energy to good precision. We then calculate the level density of ${}^{161}$Dy and find it in very good agreement with the level density extracted from experimental data.
Nuclear Level Density of ${}^{161}$Dy in the Shell Model Monte Carlo Method
Cem Özen; Yoram Alhassid; Hitoshi Nakada
2012-06-27
We extend the shell-model Monte Carlo applications to the rare-earth region to include the odd-even nucleus ${}^{161}$Dy. The projection on an odd number of particles leads to a sign problem at low temperatures making it impractical to extract the ground-state energy in direct calculations. We use level counting data at low energies and neutron resonance data to extract the shell model ground-state energy to good precision. We then calculate the level density of ${}^{161}$Dy and find it in very good agreement with the level density extracted from experimental data.
MC-Fit: using Monte-Carlo methods to get accurate confidence limits on enzyme parameters.
Dardel, F
1994-06-01
A program is described for estimating enzymatic parameters from experimental data using Apple Macintosh computers. MC-Fit uses iterative least-square fitting and Monte-Carlo sampling to get accurate estimates of the confidence limits. This approach is more robust than the conventional covariance matrix estimation, especially in cases where experimental data is partially lacking or when the standard error on individual measurements is large. This happens quite often when analysing the properties of variant enzymes obtained by mutagenesis, as these can have severely impaired activities and reduced affinities for their substrates. PMID:7922682
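The Monte Carlo confidence-limit idea can be sketched as follows, with a synthetic Michaelis-Menten dataset and a deliberately crude grid-search fitter standing in for MC-Fit's least-squares routine; all numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def mm(S, Vmax, Km):
    """Michaelis-Menten rate law v = Vmax * S / (Km + S)."""
    return Vmax * S / (Km + S)

def fit(S, v):
    """Crude least-squares fit by exhaustive grid search (illustrative only)."""
    Vg = np.linspace(0.5, 2.0, 60)[:, None, None]
    Kg = np.linspace(0.1, 3.0, 60)[None, :, None]
    sse = ((Vg * S / (Kg + S) - v) ** 2).sum(axis=2)
    i, j = np.unravel_index(np.argmin(sse), sse.shape)
    return float(Vg[i, 0, 0]), float(Kg[0, j, 0])

S = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0])          # substrate conc.
v_obs = mm(S, 1.0, 1.0) + rng.normal(0, 0.02, S.size)  # noisy rates
Vmax_hat, Km_hat = fit(S, v_obs)

# Monte Carlo: simulate synthetic datasets around the fitted curve, refit
# each one, and read confidence limits off the percentiles of the refits.
Km_samples = [fit(S, mm(S, Vmax_hat, Km_hat) + rng.normal(0, 0.02, S.size))[1]
              for _ in range(200)]
lo, hi = np.percentile(Km_samples, [2.5, 97.5])
print(round(lo, 2), round(hi, 2))
```

Because the interval comes from the empirical spread of refitted values rather than a local covariance matrix, it remains meaningful even when the error surface is strongly asymmetric, which is the robustness the abstract refers to.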
Betzler, Benjamin R.; Kiedrowski, Brian C.; Brown, Forrest B.; Martin, William R.
2015-08-28
The time-dependent behavior of the energy spectrum in neutron transport was investigated with a formulation, based on continuous-time Markov processes, for computing α eigenvalues and eigenvectors in an infinite medium. In this study, a research Monte Carlo code called “TORTE” (To Obtain Real Time Eigenvalues) was created and used to estimate elements of a transition rate matrix. TORTE is capable of using both multigroup and continuous-energy nuclear data, and verification was performed. Eigenvalue spectra for infinite homogeneous mixtures were obtained, and an eigenfunction expansion was used to investigate transient behavior of the neutron energy spectrum.
NASA Technical Reports Server (NTRS)
Shinn, Judy L.; Wilson, John W.; Nealy, John E.; Cucinotta, Francis A.
1990-01-01
Continuing efforts toward validating the buildup factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are the candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for cases with various thicknesses of shields and various types of proton spectra. The results are found to be in reasonable agreement, but with some overestimation by the buildup factor method when the effect of neutron production in the shield is significant. Future improvement to include neutron coupling in the buildup factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual components of doses, such as those from the secondaries and heavy particle recoils, is obtained between BRYNTRN and Monte Carlo results.
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M
2010-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
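To give a flavour of the fundamentals the notes cover (random sampling, the transport approach, collision physics, tallies), here is a minimal analog Monte Carlo slab-transmission estimator; the cross sections and geometry are arbitrary illustrative values, not taken from the notes.

```python
import numpy as np

rng = np.random.default_rng(5)

def slab_transmission(sigma_t, sigma_a, thickness, n_hist=100_000):
    """Analog Monte Carlo estimate of particle transmission through a 1-D slab.

    Free-flight distances are sampled from the exponential distribution
    d = -ln(xi) / sigma_t; each collision absorbs with probability
    sigma_a / sigma_t, otherwise the particle scatters isotropically
    (new direction cosine uniform in [-1, 1]).
    """
    transmitted = 0
    for _ in range(n_hist):
        x, mu = 0.0, 1.0                            # start at left face, forward
        while True:
            x += mu * -np.log(rng.random()) / sigma_t
            if x >= thickness:
                transmitted += 1                    # tally: escaped right face
                break
            if x < 0.0:
                break                               # leaked back out left face
            if rng.random() < sigma_a / sigma_t:
                break                               # absorbed at collision
            mu = rng.uniform(-1.0, 1.0)             # isotropic scatter
    return transmitted / n_hist

# Pure-absorber check: transmission should approach exp(-sigma_t * thickness).
t_est = slab_transmission(1.0, 1.0, 2.0)
print(round(t_est, 3))                              # analytic: exp(-2) ~ 0.135
```

The pure-absorber case has a closed-form answer, which is the usual first verification step before adding scattering physics.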
Cemgil, A. Taylan
2003-01-01
Monte Carlo Methods for Tempo Tracking and Rhythm Quantization (Ali Taylan Cemgil, Bert Kappen). We formulate two well-known music recognition problems, namely tempo tracking and rhythm quantization, and solve them with sequential Monte Carlo methods. The methods can be applied in both online and batch scenarios such as tempo tracking.
A voxel-based mouse for internal dose calculations using Monte Carlo simulations (MCNP)
NASA Astrophysics Data System (ADS)
Bitar, A.; Lisbona, A.; Thedrez, P.; Sai Maurel, C.; LeForestier, D.; Barbet, J.; Bardies, M.
2007-02-01
Murine models are useful for targeted radiotherapy pre-clinical experiments. These models can help to assess the potential interest of new radiopharmaceuticals. In this study, we developed a voxel-based mouse for dosimetric estimates. A female nude mouse (30 g) was frozen and cut into slices. High-resolution digital photographs were taken directly on the frozen block after each section. Images were segmented manually. Monoenergetic photon or electron sources were simulated using the MCNP4c2 Monte Carlo code for each source organ, in order to give tables of S-factors (in Gy Bq-1 s-1) for all target organs. Results obtained from monoenergetic particles were then used to generate S-factors for several radionuclides of potential interest in targeted radiotherapy. Thirteen source and 25 target regions were considered in this study. For each source region, 16 photon and 16 electron energies were simulated. Absorbed fractions, specific absorbed fractions and S-factors were calculated for 16 radionuclides of interest for targeted radiotherapy. The results obtained generally agree well with data published previously. For electron energies ranging from 0.1 to 2.5 MeV, the self-absorbed fraction varies from 0.98 to 0.376 for the liver, and from 0.89 to 0.04 for the thyroid. Electrons cannot be considered as 'non-penetrating' radiation for energies above 0.5 MeV for mouse organs. This observation can be generalized to radionuclides: for example, the beta self-absorbed fraction for the thyroid was 0.616 for I-131; absorbed fractions for Y-90 for left kidney-to-left kidney and for left kidney-to-spleen were 0.486 and 0.058, respectively. Our voxel-based mouse allowed us to generate a dosimetric database for use in preclinical targeted radiotherapy experiments.
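The S-factor bookkeeping behind such tables reduces, for a single source-target pair and a single emission type, to S = E·φ/m with everything in SI units. The numbers below are illustrative assumptions, not values from the paper's database:

```python
# S-factor (Gy Bq^-1 s^-1) from a mean emitted energy per decay and an
# absorbed fraction: S = E_mean * phi / m.
MEV_TO_J = 1.602e-13        # joules per MeV

e_mean_mev = 0.192          # mean beta energy per decay (assumed value)
phi = 0.9                   # absorbed fraction in the source organ (assumed)
mass_kg = 1.5e-3            # target mass, e.g. a 1.5 g mouse liver (assumed)

s_factor = e_mean_mev * MEV_TO_J * phi / mass_kg
print(f"{s_factor:.2e}")    # Gy per decay, i.e. Gy Bq^-1 s^-1
```

In the paper's workflow, the Monte Carlo simulation supplies φ per organ pair and per energy; the tabulated S-factors then follow from sums of such terms over the radionuclide's emission spectrum.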
Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes
Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.
2002-09-11
The calculations presented compare the performance of three Monte Carlo codes, PENELOPE-1999, MCNP-4C and PITS, for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall cell model is a water cylinder equivalent for the three codes, but with different internal scoring geometries: hollow cylinders for PENELOPE and MCNP, and spheres for the PITS code. A cylindrical cell geometry with hollow-cylinder scoring volumes was initially selected for PENELOPE and MCNP because it better represents the actual shape and dimensions of a cell and improves computer-time efficiency compared with spherical internal volumes. Some of the transfer points and energy transfers that constitute a radiation track may fall in the space between spheres, outside the spherical scoring volumes. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time when using this code compared with event-by-event Monte Carlo codes like PITS. This preliminary work has been important for addressing dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation even with such small geometries and low energies, far below the normal use for which the code was created. Further work (initiated in Summer 2002) is still needed, however, to create a user-code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes and also microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.
NASA Astrophysics Data System (ADS)
Shahrabi, Mohammad; Tavakoli-Anbaran, Hossien
2015-02-01
Calculation of dosimetry parameters by the TG-60 approach for beta sources and the TG-43 approach for gamma sources can help in designing brachytherapy sources. In this work, TG-60 dosimetry parameters are calculated for the Sm-153 brachytherapy seed using the Monte Carlo simulation approach. The continuous beta spectrum of Sm-153 and its probability density are applied to simulate the Sm-153 source. Sm-153 is produced by neutron capture in reactors via the 152Sm(n,γ)153Sm reaction. The Sm-153 radionuclide decays by beta emission followed by gamma-ray emissions with a half-life of 1.928 days. The Sm-153 source is simulated in a spherical water phantom to calculate the deposited energy and geometry function at the intended points. The Sm-153 seed consists of 20% samarium, 30% calcium and 50% silicon, in cylindrical shape with a density of 1.76 g/cm^3. The anisotropy function and radial dose function were calculated at radial distances of 0-4 mm from the seed center and polar angles of 0-90 degrees. The results of this research are compared with the results of Taghdiri et al. (Iran. J. Radiat. Res. 9, 103 (2011)), whose work did not consider the final beta spectrum of Sm-153. Results show significant relative differences, up to a factor of 5, for the anisotropy functions at 0.6, 1 and 2 mm distances and some angles. The MCNP4C Monte Carlo code is applied both in the present paper and in the above-mentioned one.
Williams, Michael S; Ebel, Eric D
2014-11-18
The fitting of statistical distributions to chemical and microbial contamination data is a common application in risk assessment. These distributions are used to make inferences regarding even the most pedestrian of statistics, such as the population mean. The reason for the heavy reliance on a fitted distribution is the presence of left-, right-, and interval-censored observations in the data sets, with censored observations being the result of nondetects in an assay, the use of screening tests, and other practical limitations. Considerable effort has been expended to develop statistical distributions and fitting techniques for a wide variety of applications. Of the various fitting methods, Markov Chain Monte Carlo methods are common. An underlying assumption for many of the proposed Markov Chain Monte Carlo methods is that the data represent independent and identically distributed (iid) observations from an assumed distribution. This condition is satisfied when samples are collected using a simple random sampling design. Unfortunately, samples of food commodities are generally not collected in accordance with a strict probability design. Nevertheless, pseudosystematic sampling efforts (e.g., collection of a sample hourly or weekly) from a single location in the farm-to-table continuum are reasonable approximations of a simple random sample. The assumption that the data represent an iid sample from a single distribution is more difficult to defend if samples are collected at multiple locations in the farm-to-table continuum or risk-based sampling methods are employed to preferentially select samples that are more likely to be contaminated. This paper develops a weighted bootstrap estimation framework that is appropriate for fitting a distribution to microbiological samples that are collected with unequal probabilities of selection. 
An example based on microbial data, derived by the Most Probable Number technique, demonstrates the method and highlights the magnitude of biases in an estimator that ignores the effects of an unequal probability sample design. PMID:25333423
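A toy version of the weighted bootstrap may clarify the mechanics; the lognormal population and value-proportional selection below are invented stand-ins for risk-based sampling of contamination data, not the paper's MPN dataset.

```python
import numpy as np

rng = np.random.default_rng(2)

# Population of contamination levels; risk-based sampling picks high values
# more often, so the unweighted sample mean is biased upward.
population = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
p_select = population / population.sum()              # unequal selection prob.
idx = rng.choice(population.size, size=500, replace=False, p=p_select)
sample = population[idx]
weights = 1.0 / p_select[idx]                         # inverse-probability wts
weights /= weights.sum()

# Weighted bootstrap: resample the sample using the normalized weights.
boot_means = [rng.choice(sample, size=sample.size, p=weights).mean()
              for _ in range(400)]

naive = float(sample.mean())
weighted = float(np.mean(boot_means))
true_mean = float(population.mean())
print(round(naive, 2), round(weighted, 2), round(true_mean, 2))
```

With size-biased selection the unweighted sample mean overshoots badly, while resampling under inverse-probability weights recovers a roughly design-consistent estimate, which is the bias the paper's example quantifies.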
Modeling of radiation-induced bystander effect using Monte Carlo methods
NASA Astrophysics Data System (ADS)
Xia, Junchao; Liu, Liteng; Xue, Jianming; Wang, Yugang; Wu, Lijun
2009-03-01
Experiments have shown that the radiation-induced bystander effect exists in cells, tissues, or even whole organisms irradiated with energetic ions or X-rays. In this paper, a Monte Carlo model is developed to study the mechanisms of the bystander effect under sparsely populated cell conditions. The model, based on our previous experiment in which cells were sparsely located in a round dish, focuses mainly on spatial characteristics. The simulation results agree well with the experimental data. Moreover, another bystander-effect experiment is also computed with this model, and the model succeeds in predicting its results. The comparison of simulations with experimental results indicates the feasibility of the model and the validity of some of the key mechanisms assumed.
The Acceptance Probability of the Hybrid Monte Carlo Method in High-Dimensional Problems
NASA Astrophysics Data System (ADS)
Beskos, A.; Pillai, N. S.; Roberts, G. O.; Sanz-Serna, J. M.; Stuart, A. M.
2010-09-01
We investigate the properties of the Hybrid Monte Carlo algorithm in high dimensions. In the simplified scenario of independent, identically distributed components, we prove that, to obtain an O(1) acceptance probability as the dimension d of the state space tends to infinity, the Verlet/leap-frog step-size h should be scaled as h = ℓ × d^(-1/4). We also identify analytically the asymptotically optimal acceptance probability, which turns out to be 0.651 (to three decimal places); this is the choice that optimally balances the cost of generating a proposal, which decreases as ℓ increases, against the cost related to the average number of proposals required to obtain acceptance, which increases as ℓ increases.
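The d^(-1/4) scaling can be checked numerically. The sketch below runs HMC on an iid standard Gaussian target (the simplified scenario of the abstract) and reports the acceptance rate for several dimensions; the trajectory length and iteration counts are arbitrary choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def hmc_acceptance(d, ell, n_iter=2000, n_leap=10):
    """Average Metropolis acceptance of HMC on N(0, I_d) with h = ell * d**-0.25.

    Target: U(q) = |q|^2 / 2, so grad U(q) = q (iid standard normal components).
    """
    h = ell * d ** -0.25
    q = rng.normal(size=d)
    accepts = 0
    for _ in range(n_iter):
        p = rng.normal(size=d)
        h0 = 0.5 * (q @ q + p @ p)          # initial Hamiltonian
        qn, pn = q.copy(), p.copy()
        pn -= 0.5 * h * qn                   # leapfrog: half momentum step
        for _ in range(n_leap - 1):
            qn += h * pn
            pn -= h * qn
        qn += h * pn
        pn -= 0.5 * h * qn
        h1 = 0.5 * (qn @ qn + pn @ pn)       # final Hamiltonian
        if rng.random() < np.exp(min(0.0, h0 - h1)):
            q = qn
            accepts += 1
    return accepts / n_iter

# Acceptance stays O(1) as d grows when h is scaled as d**-0.25.
rates = [hmc_acceptance(d, ell=1.0) for d in (10, 100, 1000)]
print([round(r, 2) for r in rates])
```

Holding ℓ fixed while d grows by two orders of magnitude, the acceptance rate remains roughly stable, which is the behaviour the theorem predicts.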
Comparing methods and Monte Carlo algorithms at phase transition regimes: A general overview
NASA Astrophysics Data System (ADS)
Fiore, Carlos E.
2014-03-01
Although numerical simulations constitute one of the most important tools in statistical mechanics, in practice things are not so simple. Standard, commonly used algorithms run into well-known difficulties at phase transition regimes, preventing the achievement of precise thermodynamic quantities. In recent years, several approaches have been proposed to circumvent these difficulties. With these concepts in mind, we present a comparison among distinct Monte Carlo algorithms, analyzing their efficiency and reliability. We show that the difficulties are substantially reduced when proper approaches for phase transitions are used. We illustrate the main concepts and ideas in the Blume-Emery-Griffiths (BEG) model, which displays strong first-order and second-order transitions at low and high temperatures, respectively.
Torsional path integral Monte Carlo method for the quantum simulation of large molecules
NASA Astrophysics Data System (ADS)
Miller, Thomas F.; Clary, David C.
2002-05-01
A molecular application is introduced for calculating quantum statistical mechanical expectation values of large molecules at nonzero temperatures. The Torsional Path Integral Monte Carlo (TPIMC) technique applies an uncoupled winding number formalism to the torsional degrees of freedom in molecular systems. The internal energy of the molecules ethane, n-butane, n-octane, and enkephalin are calculated at standard temperature using the TPIMC technique and compared to the expectation values obtained using the harmonic oscillator approximation and a variational technique. All studied molecules exhibited significant quantum mechanical contributions to their internal energy expectation values according to the TPIMC technique. The harmonic oscillator approximation approach to calculating the internal energy performs well for the molecules presented in this study but is limited by its neglect of both anharmonicity effects and the potential coupling of intramolecular torsions.
Anti-confocal versus confocal assessment of the middle ear simulated by Monte Carlo methods
Jung, David S.; Crowe, John A.; Birchall, John P.; Somekh, Michael G.; See, Chung W.
2015-01-01
The ability to monitor the inflammatory state of the middle ear mucosa would provide clinical utility. To enable spectral measurements on the mucosa whilst rejecting background signal from the eardrum, an anti-confocal system is investigated. In contrast to the central pinhole in a confocal system, the anti-confocal system uses a central stop to reject light from the in-focus plane, the eardrum, with all other light detected. Monte Carlo simulations of this system show an increase in detected signal and improved signal-to-background ratio compared to a conventional confocal set-up used to image the middle ear mucosa. System parameters are varied in the simulation and their influence on the level of background rejection is presented. PMID:26504633
Stoller, Roger E; Golubov, Stanislav I; Becquart, C. S.; Domain, C.
2007-08-01
The multiscale modeling scheme encompasses models from the atomistic to the continuum scale. Phenomena at the mesoscale are typically simulated using reaction rate theory, Monte Carlo, or phase field models. These mesoscale models are appropriate for problems that involve intermediate length scales, and timescales from those characteristic of diffusion to long-term microstructural evolution (~μs to years). Although rate theory (RT) and Monte Carlo models can be used to simulate the same phenomena, some of the details are handled quite differently in the two approaches. Models employing the rate theory have been extensively used to describe radiation-induced phenomena such as void swelling and irradiation creep. The primary approximations in such models are time- and spatial averaging of the radiation damage source term, and spatial averaging of the microstructure into an effective medium. Kinetic Monte Carlo models can account for these spatial and temporal correlations; their primary limitation is the computational burden, which is related to the size of the simulation cell. A direct comparison of RT and object kinetic Monte Carlo (OKMC) simulations has been made in the domain of point defect cluster dynamics modeling, which is relevant to the evolution (both nucleation and growth) of radiation-induced defect structures. The primary limitations of the OKMC model are computational: even with modern computers, the maximum simulation cell size and the maximum dose (typically much less than 1 dpa) that can be simulated are limited. In contrast, even very detailed RT models can simulate microstructural evolution for doses of 100 dpa or greater in relatively short clock times. Within the context of the effective medium, essentially any defect density can be simulated.
Overall, the agreement between the two methods is best for irradiation conditions which produce a high density of defects (lower temperature and higher displacement rate), and for materials that have a relatively high density of fixed sinks such as dislocations.
Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin
2015-01-01
The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419
NASA Astrophysics Data System (ADS)
Densmore, J. D.; Park, H.; Wollaber, A. B.; Rauenzahn, R. M.; Knoll, D. A.
2015-03-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption-emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck-Cummings algorithm.
NASA Astrophysics Data System (ADS)
Cerjanic, Alexander M.
The development of a spectral domain method of moments code for the modeling of single layer microstrip patch antennas is presented in this thesis. The mixed potential integral equation formulation of Maxwell's equations is used as the theoretical basis for the work, and is solved via the method of moments. General-purpose graphics processing units are used for the computation of the impedance matrix by incorporation of quasi-Monte Carlo integration. The development of the various components of the code, including Green's function, impedance matrix, and excitation vector modules are discussed with individual test cases for the major code modules. The integrated code was tested by modeling a suite of four coaxially probe fed circularly polarized single layer microstrip patch antennas and the computed results are compared to those obtained by measurement. Finally, a study examining the relationship between design parameters and S11 performance was undertaken using the code.
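The quasi-Monte Carlo ingredient can be illustrated in miniature: a Halton low-discrepancy sequence replacing pseudo-random points in a smooth 2-D integral. The toy integrand below stands in for the impedance-matrix integrals; it is not from the thesis.

```python
import numpy as np

def halton(n, base):
    """First n points of the 1-D Halton low-discrepancy sequence."""
    seq = np.zeros(n)
    for i in range(n):
        f, r, k = 1.0, 0.0, i + 1
        while k > 0:                    # radical-inverse digit expansion
            f /= base
            r += f * (k % base)
            k //= base
        seq[i] = r
    return seq

# Quasi-Monte Carlo vs plain Monte Carlo on a smooth 2-D integral:
# the integral of x*y over the unit square is exactly 0.25.
n = 4096
rng = np.random.default_rng(8)
mc = float(np.mean(rng.random(n) * rng.random(n)))
qmc = float(np.mean(halton(n, 2) * halton(n, 3)))
print(round(mc, 3), round(qmc, 3))
```

For smooth integrands like this one, the low-discrepancy points converge roughly as O(log n / n) rather than the O(n^-1/2) of plain Monte Carlo, which is why QMC is attractive for dense impedance-matrix fills.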
Goerner, K. (Inst. fuer Verfahrenstechnik und Dampfkesselwesen); Dietz, U.
1993-04-01
The application of the Monte Carlo method to the calculation of radiation exchange processes in combustion systems is discussed. After a brief introduction, the modeling of radiation exchange and the optical properties of the combustion-chamber suspension are described. The application of the method to technical-scale systems is illustrated for large-scale coal- and lignite-fired combustion plants. Flow and heat release are approximated to reduce the computational effort and to achieve industrial relevance. The simulated results are in good agreement with available data. Coupling with complete flame and combustion-chamber models, in which the turbulent two-phase flow and local heat release are calculated, is discussed and found to be feasible.
Dupuis, Paul
2014-03-14
This proposal is concerned with applications of Monte Carlo to problems in physics and chemistry where rare events degrade the performance of standard Monte Carlo. One class of problems is concerned with computation of various aspects of the equilibrium behavior of some Markov process via time averages. The problem to be overcome is that rare events interfere with the efficient sampling of all relevant parts of phase space. A second class concerns sampling transitions between two or more stable attractors. Here, rare events do not interfere with the sampling of all relevant parts of phase space, but make Monte Carlo inefficient because of the very large number of samples required to obtain variance comparable to the quantity estimated. The project uses large deviation methods for the mathematical analyses of various Monte Carlo techniques, and in particular for algorithmic analysis and design. This is done in the context of relevant application areas, mainly from chemistry and biology.
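The second problem class, estimators degraded by rare events, has a standard remedy that this kind of large deviation analysis studies: importance sampling with an exponentially tilted proposal. A minimal textbook sketch (not from the proposal itself):

```python
import numpy as np

rng = np.random.default_rng(7)

# Rare event: P(X > 6) for X ~ N(0, 1), about 1e-9.  Plain Monte Carlo with
# 1e5 samples almost never sees the event; tilting the sampling distribution
# to be centred on the event fixes the variance problem.
n = 100_000
a = 6.0

plain = float(np.mean(rng.normal(size=n) > a))          # almost always 0.0

y = rng.normal(loc=a, size=n)                           # tilted proposal N(a, 1)
weights = np.exp(-a * y + 0.5 * a * a)                  # likelihood ratio phi(y)/phi(y-a)
tilted = float(np.mean((y > a) * weights))

print(plain, f"{tilted:.2e}")                           # exact value: Phi(-6) ~ 9.87e-10
```

The exponential tilt here is exactly the change of measure suggested by the large deviation rate function of the Gaussian tail, which is the connection the proposal exploits for algorithm design.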
Dieudonne, C.; Dumonteil, E.; Malvagi, F.; Diop, C. M.
2013-07-01
For several years, Monte Carlo burnup/depletion codes have been available that couple a Monte Carlo code, which simulates the neutron transport, to a deterministic method that computes the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in this way makes it possible to track fine three-dimensional effects and to avoid the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the time-expensive Monte Carlo solver called at each time step. Therefore, great improvements in calculation time could be expected if one could get rid of the Monte Carlo transport sequences. For example, it may be interesting to run an initial Monte Carlo simulation only once, for the first time/burnup step, and then to use the concentration perturbation capability of the Monte Carlo code for the other time/burnup steps (the later burnup steps are treated as perturbations of the concentrations of the initial burnup step). This paper presents some advantages and limitations of this technique and preliminary results in terms of speed-up and figure of merit. Finally, we detail different possible calculation schemes based on this method. (authors)
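The deterministic depletion half of such a coupling solves the Bateman equations for fixed reaction rates. A drastically simplified, flux-free two-nuclide sketch, with decay constants chosen arbitrarily:

```python
import numpy as np

# Two-step decay chain A -> B -> (stable), integrated with small explicit
# Euler steps and checked against the analytic Bateman solution.
lam_a, lam_b = 0.3, 0.1     # decay constants (arbitrary, lam_a != lam_b)
na, nb = 1.0, 0.0           # initial number densities
dt, t_end = 1e-4, 10.0

for _ in range(int(t_end / dt)):
    dna = -lam_a * na                   # dN_A/dt = -lam_a N_A
    dnb = lam_a * na - lam_b * nb       # dN_B/dt = +lam_a N_A - lam_b N_B
    na += dna * dt
    nb += dnb * dt

# Analytic Bateman solution for N_B(t).
nb_exact = lam_a / (lam_b - lam_a) * (np.exp(-lam_a * t_end) - np.exp(-lam_b * t_end))
print(round(nb, 4), round(nb_exact, 4))
```

Production codes replace the constant decay matrix with flux-dependent transmutation rates supplied by the Monte Carlo solver at each burnup step, which is exactly the expensive coupling the paper tries to shortcut with perturbation theory.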
Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V
2006-12-31
Based on digital image analysis and the inverse Monte Carlo method, a proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)
NASA Astrophysics Data System (ADS)
Srinivasan, P.; Priya, S.; Patel, Tarun; Gopalakrishnan, R. K.; Sharma, D. N.
2015-01-01
DD/DT fusion neutron generators are used as sources of 2.5 MeV/14.1 MeV neutrons in experimental laboratories for various applications. Detailed knowledge of the radiation dose rates around the neutron generators are essential for ensuring radiological protection of the personnel involved with the operation. This work describes the experimental and Monte Carlo studies carried out in the Purnima Neutron Generator facility of the Bhabha Atomic Research Center (BARC), Mumbai. Verification and validation of the shielding adequacy was carried out by measuring the neutron and gamma dose-rates at various locations inside and outside the neutron generator hall during different operational conditions both for 2.5-MeV and 14.1-MeV neutrons and comparing with theoretical simulations. The calculated and experimental dose rates were found to agree with a maximum deviation of 20% at certain locations. This study has served in benchmarking the Monte Carlo simulation methods adopted for shield design of such facilities. This has also helped in augmenting the existing shield thickness to reduce the neutron and associated gamma dose rates for radiological protection of personnel during operation of the generators at higher source neutron yields up to 1 × 10^10 n/s.
NASA Astrophysics Data System (ADS)
Wang, Wenlong; Machta, Jonathan; Katzgraber, Helmut G.
2015-07-01
Population annealing is a Monte Carlo algorithm that marries features from simulated-annealing and parallel-tempering Monte Carlo. As such, it is ideal to overcome large energy barriers in the free-energy landscape while minimizing a Hamiltonian. Thus, population-annealing Monte Carlo can be used as a heuristic to solve combinatorial optimization problems. We illustrate the capabilities of population-annealing Monte Carlo by computing ground states of the three-dimensional Ising spin glass with Gaussian disorder, while comparing to simulated-annealing and parallel-tempering Monte Carlo. Our results suggest that population annealing Monte Carlo is significantly more efficient than simulated annealing but comparable to parallel-tempering Monte Carlo for finding spin-glass ground states.
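A toy implementation of population annealing on a 12-spin ±J Ising ring shows the resample-then-equilibrate structure; the population size, temperature schedule, and couplings are arbitrary illustrative choices, not the paper's 3-D spin-glass setup.

```python
import numpy as np

rng = np.random.default_rng(4)

def population_annealing(J, n_pop=200, betas=np.linspace(0.1, 3.0, 30), sweeps=5):
    """Toy population annealing for a 1-D Ising ring with couplings J.

    At each temperature step replicas are reweighted by exp(-dbeta * E),
    resampled, and then equilibrated with Metropolis sweeps.
    """
    n = len(J)
    def energy(s):
        return -np.sum(J * s * np.roll(s, -1))
    pop = rng.choice([-1, 1], size=(n_pop, n))
    for i in range(1, len(betas)):
        dbeta = betas[i] - betas[i - 1]
        E = np.array([energy(s) for s in pop])
        w = np.exp(-dbeta * (E - E.min()))
        w /= w.sum()
        pop = pop[rng.choice(n_pop, n_pop, p=w)]       # resample replicas
        for s in pop:                                   # Metropolis sweeps
            for _ in range(sweeps * n):
                k = rng.integers(n)
                dE = 2 * s[k] * (J[k - 1] * s[k - 1] + J[k] * s[(k + 1) % n])
                if dE <= 0 or rng.random() < np.exp(-betas[i] * dE):
                    s[k] = -s[k]
    return float(min(energy(s) for s in pop))

J = rng.choice([-1.0, 1.0], size=12)                    # random +/- 1 couplings
e_min = population_annealing(J)
print(e_min)
```

For a ±J ring the ground-state energy is -n if the bonds are unfrustrated and -n + 2 otherwise, so the minimum population energy gives a quick correctness check of the heuristic.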
Nease, Brian R. Ueki, Taro
2009-12-10
A time series approach has been applied to the nuclear fission source distribution generated by Monte Carlo (MC) particle transport in order to calculate the non-fundamental mode eigenvalues of the system. The novel aspect is the combination of the general technical principle of projection pursuit for multivariate data with the neutron multiplication eigenvalue problem in the nuclear engineering discipline. Proof is provided that the stationary MC process is linear to a first-order approximation and that it transforms into one-dimensional autoregressive processes of order one (AR(1)) via the automated choice of projection vectors. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue. All modern MC codes for nuclear criticality calculate the fundamental mode eigenvalue, so the desired mode eigenvalue can be easily determined. This time series approach was tested for a variety of problems, including multi-dimensional ones. Numerical results show that the time series approach has strong potential for three-dimensional whole-reactor-core problems. The eigenvalue ratio can be updated in an on-the-fly manner without storing the nuclear fission source distributions from all previous iteration cycles for the mean subtraction. Lastly, the effects of degenerate eigenvalues are investigated and solutions are provided.
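The core identification step, reading an eigenvalue ratio off the lag-1 autocorrelation of an AR(1) series, can be demonstrated on synthetic data. This sketch says nothing about the authors' projection-pursuit construction; it simply generates an AR(1) series with a known coefficient (standing in for the ratio of the desired mode eigenvalue to the fundamental one) and recovers it from the sample autocorrelation.

```python
import random

random.seed(0)

def ar1_series(rho, n, sigma=1.0):
    # AR(1) process: x_t = rho * x_{t-1} + e_t, e_t ~ N(0, sigma^2)
    x = [random.gauss(0.0, sigma)]
    for _ in range(n - 1):
        x.append(rho * x[-1] + random.gauss(0.0, sigma))
    return x

def lag1_autocorr(x):
    # sample lag-1 autocorrelation coefficient
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, len(x)))
    den = sum((v - m) ** 2 for v in x)
    return num / den

rho_true = 0.7   # stands in for the desired-to-fundamental eigenvalue ratio
rho_hat = lag1_autocorr(ar1_series(rho_true, 20000))
```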
Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange
Hula, Andreas; Montague, P. Read; Dayan, Peter
2015-01-01
Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent’s preference for equity with their partner, beliefs about the partner’s appetite for equity, beliefs about the partner’s model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference. PMID:26053429
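The tree-search machinery underlying such an approximation can be sketched in miniature. The following is plain UCT (upper-confidence-bound tree search), not the authors' IPOMDP variant, applied to a hypothetical three-round game whose payoff function is an invented stand-in for the trust task; the action set, depth, and payoff are all assumptions of the sketch.

```python
import math, random

random.seed(8)

ACTIONS = [0, 1, 2]   # hypothetical per-round "investment" levels
DEPTH = 3

def payoff(actions):
    # invented payoff: investing 1 each round is best
    return sum(1.0 - 0.5 * (a - 1) ** 2 for a in actions)

def uct(n_iter=3000, c=1.4):
    stats = {}   # action-sequence prefix -> [visits, total value]
    for _ in range(n_iter):
        seq = ()
        while len(seq) < DEPTH:
            children = [seq + (a,) for a in ACTIONS]
            fresh = [ch for ch in children if ch not in stats]
            if fresh:
                # expansion, then a uniformly random rollout to the horizon
                seq = random.choice(fresh)
                tail = [random.choice(ACTIONS) for _ in range(DEPTH - len(seq))]
                value = payoff(list(seq) + tail)
                break
            # selection by the UCB1 rule
            n_par = sum(stats[ch][0] for ch in children)
            seq = max(children, key=lambda ch: stats[ch][1] / stats[ch][0]
                      + c * math.sqrt(math.log(n_par) / stats[ch][0]))
        else:
            value = payoff(list(seq))
        # backpropagate the value along the chosen path
        for k in range(1, len(seq) + 1):
            v = stats.setdefault(seq[:k], [0, 0.0])
            v[0] += 1
            v[1] += value
    # recommend the first action with the highest mean value
    return max(ACTIONS, key=lambda a: stats[(a,)][1] / stats[(a,)][0])
```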
NASA Astrophysics Data System (ADS)
Bui, Khoa; Papavassiliou, Dimitrios
2012-02-01
The effective thermal conductivity (Keff) of carbon nanotube (CNT) composites is affected by the thermal boundary resistance (TBR) and by the dispersion pattern and geometry of the CNTs. We have previously modeled CNTs as straight cylinders and found that the TBR between CNTs (TBRCNT-CNT) can suppress Keff at high volume fractions of CNTs [1]. Effective medium theory results assume that the CNTs are in a perfect dispersion state and exclude the TBRCNT-CNT [2]. In this work, we report on the development of an algorithm for generating CNTs with worm-like geometry in 3D, and with different persistence lengths. These worm-like CNTs are then randomly placed in a periodic box representing a realistic state, since the persistence length of a CNT can be obtained from microscopic images. The use of these CNT geometries in conjunction with off-lattice Monte Carlo simulations [1] in order to study the effective thermal properties of nanocomposites will be discussed, as well as the effects of the persistence length on Keff and comparisons to straight cylinder models. References [1] K. Bui, B.P. Grady, D.V. Papavassiliou, Chem. Phys. Lett., 508(4-6), 248-251, 2011 [2] C.W. Nan, G. Liu, Y. Lin, M. Li, Appl. Phys. Lett., 85(16), 3549-3551, 2006
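A worm-like chain in 3D can be generated by letting successive tangent vectors decorrelate on the scale of the persistence length. The sketch below is a generic discrete worm-like-chain generator, not the authors' algorithm: the bending-angle distribution (a clipped Gaussian around the target mean cosine) is a simplifying assumption chosen only to make the tangent correlation decay roughly as exp(-ds/lp).

```python
import math, random

random.seed(3)

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def wormlike_chain(n_seg, ds, lp):
    """Discrete worm-like chain: tangents decorrelate over persistence length lp."""
    pts = [[0.0, 0.0, 0.0]]
    t = [0.0, 0.0, 1.0]                      # initial tangent direction
    cos_mean = math.exp(-ds / lp)            # target mean of cos(bending angle)
    for _ in range(n_seg):
        # crude proxy distribution for the bending angle (assumption of the sketch)
        cos_th = min(1.0, max(-1.0, random.gauss(cos_mean, 0.1)))
        sin_th = math.sqrt(1.0 - cos_th * cos_th)
        phi = random.uniform(0.0, 2.0 * math.pi)
        a = [1.0, 0.0, 0.0] if abs(t[0]) < 0.9 else [0.0, 1.0, 0.0]
        u = unit(cross(t, a))                # two axes perpendicular to t
        w = cross(t, u)
        t = unit([cos_th * t[i] + sin_th * (math.cos(phi) * u[i] + math.sin(phi) * w[i])
                  for i in range(3)])
        pts.append([pts[-1][i] + ds * t[i] for i in range(3)])
    return pts

chain = wormlike_chain(50, 1.0, 10.0)        # 50 unit segments, persistence length 10
```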
Modelling of white paints optical degradation using Mie's theory and Monte Carlo method
NASA Astrophysics Data System (ADS)
Duvignacq, Carole; Hespel, Laurent; Roze, Claude; Girasole, Thierry
2003-09-01
During long-term missions, white paints, used as thermal control coatings on satellites, are severely damaged by the space environment. Reflectance spectra showing broad absorption bands are characteristic of the optical degradation of the coatings. In this paper, a numerical model simulating the optical degradation of white paints is presented. This model uses Mie theory coupled with a random-walk Monte Carlo procedure. With materials like white paints, several major difficulties arise: high pigment loading, binder absorption, etc. The problem is even worse in the case of irradiated paints. In parallel with the description of the basis of the model, we give an overview of the problems encountered. Simulation results are presented and discussed for zinc oxide/PDMS type white paints irradiated by 45 keV protons, in accordance with geostationary orbit environment conditions. The effects of the optical properties of the pigment, the pigment volume concentration and the absorption by the binder on hemispherical reflectance are examined. Comparisons are made with experimental results, and the interest of such a numerical code for the study of the degradation of highly loaded materials is discussed.
Specific absorbed fractions of electrons and photons for Rad-HUMAN phantom using Monte Carlo method
NASA Astrophysics Data System (ADS)
Wang, Wen; Cheng, Meng-Yun; Long, Peng-Cheng; Hu, Li-Qin
2015-07-01
The specific absorbed fractions (SAF) for self- and cross-irradiation are effective tools for the internal dose estimation of inhalation and ingestion intakes of radionuclides. A set of SAFs of photons and electrons were calculated using the Rad-HUMAN phantom, a computational voxel phantom of a Chinese adult female that was created from the color photographic images of the Chinese Visible Human (CVH) data set by the FDS Team. The model represents most anatomical characteristics of the Chinese adult female and can be taken as an individual phantom to investigate differences in internal dose with respect to Caucasians. In this study, the emission of mono-energetic photons and electrons with energies from 10 keV to 4 MeV was simulated using the Monte Carlo particle transport code MCNP. Results were compared with the values from the ICRP reference and ORNL models. The results showed that the SAFs from the Rad-HUMAN phantom have similar trends but are larger than those from the other two models. The differences were due to racial and anatomical differences in organ mass and inter-organ distance. The SAFs based on the Rad-HUMAN phantom provide accurate and reliable data for internal radiation dose calculations for Chinese females. Supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03040000), the National Natural Science Foundation of China (910266004, 11305205, 11305203) and the National Special Program for ITER (2014GB112001)
NASA Technical Reports Server (NTRS)
Tsang, L.; Lou, S. H.; Chan, C. H.
1991-01-01
The extended boundary condition method is applied to Monte Carlo simulations of two-dimensional random rough surface scattering. The numerical results are compared with those for one-dimensional random rough surfaces obtained from the finite-element method. It is found that, for rough surfaces with large slopes, the mean scattered intensity from two-dimensional rough surfaces differs from that of one-dimensional surfaces.
NASA Astrophysics Data System (ADS)
Makri, T.; Yakoumakis, E.; Papadopoulou, D.; Gialousis, G.; Theodoropoulos, V.; Sandilos, P.; Georgiou, E.
2006-10-01
Seeking to assess the radiation risk associated with radiological examinations in neonatal intensive care units, thermoluminescence dosimetry was used for the measurement of entrance surface dose (ESD) in 44 AP chest and 28 AP combined chest-abdominal exposures of a sample of 60 neonates. The mean values of ESD were found to be 44 ± 16 µGy and 43 ± 19 µGy, respectively. The MCNP-4C2 code, with a mathematical phantom simulating a neonate and appropriate x-ray energy spectra, was employed for the simulation of the AP chest and AP combined chest-abdominal exposures. Equivalent organ dose per unit ESD and energy imparted per unit ESD calculations are presented in tabular form. Combined with the ESD measurements, these calculations yield an effective dose of 10.2 ± 3.7 µSv, regardless of sex, and an imparted energy of 18.5 ± 6.7 µJ for the chest radiograph. The corresponding results for the combined chest-abdominal examination are 14.7 ± 7.6 µSv (males)/17.2 ± 7.6 µSv (females) and 29.7 ± 13.2 µJ. The calculated total risk per radiograph was low, ranging between 1.7 and 2.9 per million neonates per film, and was slightly higher for females. Results of this study are in good agreement with previous studies, especially in view of the diversity of the calculation methods used.
NASA Technical Reports Server (NTRS)
Jensen, K. A.; Ripoll, J.-F.; Wray, A. A.; Joseph, D.; ElHafi, M.
2004-01-01
Five computational methods for the solution of the radiative transfer equation in an absorbing-emitting, non-scattering gray medium were compared on a 2 m JP-8 pool fire. The temperature and absorption coefficient fields were taken from a synthetic fire due to the lack of a complete set of experimental data for fires of this size. These quantities were generated by a code that has been shown to agree well with the limited quantity of relevant data in the literature. Reference solutions to the governing equation were determined using the Monte Carlo method and a ray tracing scheme with high angular resolution. Solutions using the discrete transfer method, the discrete ordinate method (DOM) with both S4 and LC11 quadratures, and a moment model using the M1 closure were compared to the reference solutions in both isotropic and anisotropic regions of the computational domain. DOM LC11 is shown to be more accurate than the commonly used S4 quadrature technique, especially in anisotropic regions of the fire domain. This represents the first study in which the M1 method was applied to a combustion problem occurring in a complex three-dimensional geometry. The M1 results agree well with the other solution techniques, which is encouraging for future applications to similar problems since it is computationally the least expensive solution technique. Moreover, the M1 results are comparable to DOM S4.
Yamamoto, Takehisa; Tsutsui, Toshiyuki; Nishiguchi, Akiko; Kobayashi, Sota; Tsukamoto, Kenji; Saito, Takehiko; Mase, Masaji; Okamatsu, Masatoshi
2007-06-01
In June 2005, an outbreak of avian influenza (AI) caused by a low pathogenic H5N2 virus was identified in Japan. A serological surveillance was conducted because the infected chickens did not show any clinical signs. The Markov Chain Monte Carlo Method was used to evaluate the performances of serological HI and AGP tests because there was not enough time when the surveillance was initiated to conduct a test evaluation. The sensitivity of the AGP test (0.67) was lower than that of the HI test (0.99), while the specificities were high for both tests (0.96 for AGP and 0.90 for HI). Based on the low sensitivity of the AGP test, the HI test was used for primary screening in later stages of the epidemic. PMID:17611370
NASA Astrophysics Data System (ADS)
Luyten, J.; Creemers, C.
2008-07-01
Recently, new parameters for the modified embedded atom method (MEAM) were derived for the ternary Pt-Pd-Rh system. In this work, this validated potential is used in conjunction with Monte Carlo (MC) simulations to study segregation to the (1 1 1) surface for the entire phase diagram of this ternary system. At 1400 K, these simulations reveal two distinct regions. In the major part of the phase diagram, Pd is the segregating component. However, close to the binary Pt-Rh axis, a region is observed in which Pt and Pd co-segregate to the surface. This co-segregation occurs only at higher temperatures as it is the result of two competing exothermic segregation reactions.
NASA Astrophysics Data System (ADS)
Luyten, Jan; Schurmans, Maarten; Creemers, Claude; Bunnik, Bouke S.; Kramer, Gert Jan
2007-04-01
In this work, surface segregation in Pt25Rh75 alloys is studied by Monte Carlo (MC) simulations combined with the modified embedded atom method (MEAM). First, for a more accurate description of the interatomic interactions, new MEAM parameters are derived, based on ab initio density functional theory (DFT) data. Subsequently, the temperature-dependent surface segregation to the low-index single crystal surfaces of a Pt25Rh75 alloy is calculated with this new parameter set. The simulation results are then compared with available experimental and theoretical work. A peculiarity of the Pt-Rh system is the possible presence of a bulk demixing region at lower temperatures. This demixing behaviour remains contested. Our results contradict such phase-separation behaviour.
Wirawan, Rahadi; Waris, Abdul; Djamal, Mitra; Handayani, Gunawan
2015-04-16
The spectrum of gamma energy absorption in a NaI crystal (scintillation detector) is the result of the interaction of gamma photons with the NaI crystal and is associated with the energy of the photons incident on the detector. Through a simulation approach, we can obtain an early estimate of the gamma energy absorption spectrum in a scintillator crystal detector (NaI) before the experiment is conducted. In this paper, we present simulated gamma energy absorption spectra for energies of 100-700 keV (i.e. 297 keV, 400 keV and 662 keV). The simulation was developed based on the concept of a photon-beam point-source distribution and photon interaction cross sections, using the Monte Carlo method. Our computational code successfully predicts the multiple energy-peak absorption spectrum derived from multiple photon energy sources.
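The basic mechanism, photopeak from full absorption plus a Compton continuum from scattered photons that escape, can be illustrated with a deliberately crude single-interaction model. This sketch is not the authors' code: the photoelectric probability, the uniform sampling of the scattering angle (instead of Klein-Nishina), and the Gaussian energy resolution are all simplifying assumptions; only the Compton kinematics formula is standard.

```python
import math, random

random.seed(7)

MEC2 = 511.0   # electron rest energy, keV

def deposit_energy(E, p_photo=0.3):
    """Crude single-interaction model: full absorption with probability p_photo
    (photopeak), otherwise one Compton scatter and escape. cos(theta) is sampled
    uniformly, a simplification of the true Klein-Nishina distribution."""
    if random.random() < p_photo:
        return E
    cos_t = random.uniform(-1.0, 1.0)
    E_sc = E / (1.0 + (E / MEC2) * (1.0 - cos_t))    # Compton kinematics
    return E - E_sc                                   # energy given to the electron

def spectrum(E, n_events=20000, fwhm_frac=0.07, n_bins=100):
    sigma = fwhm_frac * E / 2.355                     # Gaussian detector resolution
    hist = [0] * n_bins
    for _ in range(n_events):
        Ed = random.gauss(deposit_energy(E), sigma)
        if Ed < 0.0:
            continue
        b = int(Ed / (1.1 * E) * n_bins)              # histogram up to 1.1 * E
        if b < n_bins:
            hist[b] += 1
    return hist

h = spectrum(662.0)   # Cs-137-like line: photopeak near bin 90, Compton edge near bin 65
```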
Ohgoe, Takahiro; Kawashima, Naoki
2011-02-15
We study the supercounterfluid (SCF) states in the two-component hard-core Bose-Hubbard model on a square lattice, using the quantum Monte Carlo method based on the worm (directed-loop) algorithm. Since the SCF state is a state of pair condensation characterized by ⟨ab†⟩ ≠ 0, ⟨a⟩ = 0, and ⟨b⟩ = 0, where a and b are the order parameters of the two components, it is important to study the behavior of the pair-correlation function ⟨ab†⟩. For this purpose, we propose a choice of the worm head for calculating the pair-correlation function. From this pair correlation, we confirm the Kosterlitz-Thouless character of the SCF phase. The simulation efficiency is also improved in the SCF phase.
Binding and Diffusion of Lithium in Graphite: Quantum Monte Carlo Benchmarks and Validation of van der Waals Density Functional Methods.
Ganesh, P; Kim, Jeongnim; Park, Changwon; Yoon, Mina; Reboredo, Fernando A; Kent, Paul R C
2014-12-01
Highly accurate diffusion quantum Monte Carlo (QMC) studies of the adsorption and diffusion of atomic lithium in AA-stacked graphite are compared with van der Waals-including density functional theory (DFT) calculations. Predicted QMC lattice constants for pure AA graphite agree with experiment. Pure AA-stacked graphite is shown to challenge many van der Waals methods even when they are accurate for conventional AB graphite. Highest overall DFT accuracy, considering pure AA-stacked graphite as well as lithium binding and diffusion, is obtained by the self-consistent van der Waals functional vdW-DF2, although errors in binding energies remain. Empirical approaches based on point charges such as DFT-D are inaccurate unless the local charge transfer is assessed. The results demonstrate that the lithium-carbon system requires a simultaneous highly accurate description of both charge transfer and van der Waals interactions, favoring self-consistent approaches. PMID:26583215
Assessment of a fully 3D Monte Carlo reconstruction method for preclinical PET with iodine-124
NASA Astrophysics Data System (ADS)
Moreau, M.; Buvat, I.; Ammour, L.; Chouin, N.; Kraeber-Bodéré, F.; Chérel, M.; Carlier, T.
2015-03-01
Iodine-124 is a radionuclide well suited to the labeling of intact monoclonal antibodies. Yet, accurate quantification in preclinical imaging with I-124 is challenging due to the large positron range and a complex decay scheme that includes high-energy gammas. The aim of this work was to assess the quantitative performance of a fully 3D Monte Carlo (MC) reconstruction for preclinical I-124 PET. The high-resolution small-animal PET Inveon (Siemens) was simulated using GATE 6.1. Three system matrices (SM) of different complexity were calculated, in addition to a Siddon-based ray tracing approach for comparison purposes. Each system matrix accounted for a more or less complete description of the physics processes both in the scanned object and in the PET scanner. One homogeneous water phantom and three heterogeneous phantoms including water, lungs and bones were simulated, where hot and cold regions were used to assess activity recovery as well as the trade-off between contrast recovery and noise in different regions. The benefit of accounting for scatter, attenuation, positron range and spurious coincidences occurring in the object when calculating the system matrix used to reconstruct I-124 PET images was highlighted. We found that the use of an MC SM including a thorough modelling of the detector response and physical effects in a uniform water-equivalent phantom was sufficient to achieve reasonable quantitative accuracy in homogeneous and heterogeneous phantoms. Modelling the phantom heterogeneities in the SM did not necessarily yield the most accurate estimate of the activity distribution, due to the high variance affecting many SM elements in the most sophisticated SM.
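Once a system matrix is available, images are typically reconstructed with an iterative algorithm such as ML-EM. The abstract does not name the update rule used, so the following is a generic ML-EM sketch on a hypothetical two-detector, two-voxel problem; the 2 × 2 system matrix is invented for the illustration.

```python
def mlem(A, y, n_iter=200):
    """ML-EM reconstruction: A[i][j] = P(count in detector bin i | emission in voxel j),
    y = measured counts. The multiplicative update preserves non-negativity."""
    m, n = len(A), len(A[0])
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]     # sensitivity, A^T 1
    x = [1.0] * n                                                 # uniform initial image
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]   # forward project
        ratio = [y[i] / proj[i] if proj[i] > 0.0 else 0.0 for i in range(m)]
        # back-project the measured/estimated ratio and normalize by sensitivity
        x = [x[j] * sum(A[i][j] * ratio[i] for i in range(m)) / sens[j] for j in range(n)]
    return x

# toy check: data generated from a true image x_true = [4, 1]
A = [[0.8, 0.2], [0.2, 0.8]]   # hypothetical system matrix
x_rec = mlem(A, [3.4, 1.6])
```

For consistent noiseless data the iteration converges to the generating image.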
Monte Carlo Method in optical diagnostics of skin and skin tissues
NASA Astrophysics Data System (ADS)
Meglinski, Igor V.
2003-12-01
A novel Monte Carlo (MC) technique for photon migration through 3D media with spatially varying optical properties is presented. The employed MC technique combines statistical-weighting variance reduction and real photon-path tracing schemes. An overview is given of the results of applying the developed MC technique in optical/near-infrared reflectance spectroscopy, confocal microscopy, fluorescence spectroscopy, OCT, Doppler flowmetry and Diffusing Wave Spectroscopy (DWS). Within the model, skin is represented as a complex inhomogeneous multi-layered medium, where the spatial distributions of blood and chromophores vary with depth. Taking into account the variability of cell structure, we represent the interfaces of the skin layers as quasi-random periodic wavy surfaces. The rough boundaries between layers of different refractive indices play a significant role in the distribution of photons within the medium. The absorption properties of skin tissues in the visible and NIR spectral regions are estimated by taking into account the anatomical structure of skin as determined from histology, including the spatial distribution of blood vessels and the water and melanin content. The model takes into account the spatial distribution of fluorophores following the collagen-fiber packing, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. Reasonable estimates of skin blood oxygen saturation and haematocrit are also included. The model is validated against the analytic solution of the photon diffusion equation for a semi-infinite homogeneous highly scattering medium. The results demonstrate that matching the refractive index of the medium significantly improves the contrast and spatial resolution of the spatial photon sensitivity profile.
It is also demonstrated that, when the model is supplied with reasonable physical and structural parameters of biological tissues, the simulated skin reflectance spectra agree reasonably well with in vivo skin spectra measurements.
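The statistical-weighting scheme mentioned above can be illustrated in its simplest form. This is a minimal sketch, not the authors' multi-layer model: it assumes a semi-infinite homogeneous medium with isotropic scattering, tracks photon packets whose weight is attenuated by the single-scattering albedo at each interaction, and terminates low-weight packets by Russian roulette.

```python
import math, random

random.seed(5)

def diffuse_reflectance(mu_a, mu_s, n_photons=5000):
    """Weighted photon random walk in a semi-infinite medium (isotropic scattering).
    mu_a, mu_s: absorption and scattering coefficients (same length units)."""
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    refl = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0               # depth, direction cosine, statistical weight
        while True:
            z += uz * (-math.log(random.random()) / mu_t)   # sample a free path
            if z < 0.0:
                refl += w                      # photon escapes through the surface
                break
            w *= albedo                        # deposit (1 - albedo) of the weight
            uz = random.uniform(-1.0, 1.0)     # isotropic scattering: new direction cosine
            if w < 1e-3:                       # Russian roulette on low-weight packets
                if random.random() < 0.1:
                    w /= 0.1
                else:
                    break
    return refl / n_photons

R_low_abs = diffuse_reflectance(mu_a=0.1, mu_s=10.0)
R_high_abs = diffuse_reflectance(mu_a=5.0, mu_s=10.0)
```

As expected, the weakly absorbing medium returns a much larger diffuse reflectance than the strongly absorbing one.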
NASA Astrophysics Data System (ADS)
Vozinaki, Anthi Eirini K.; Karatzas, George P.; Sibetheros, Ioannis A.; Varouchakis, Emmanouil A.
2014-05-01
Damage curves are the most significant component of the flood loss estimation models. Their development is quite complex. Two types of damage curves exist, historical and synthetic curves. Historical curves are developed from historical loss data from actual flood events. However, due to the scarcity of historical data, synthetic damage curves can be alternatively developed. Synthetic curves rely on the analysis of expected damage under certain hypothetical flooding conditions. A synthetic approach was developed and presented in this work for the development of damage curves, which are subsequently used as the basic input to a flood loss estimation model. A questionnaire-based survey took place among practicing and research agronomists, in order to generate rural loss data based on the responders' loss estimates, for several flood condition scenarios. In addition, a similar questionnaire-based survey took place among building experts, i.e. civil engineers and architects, in order to generate loss data for the urban sector. By answering the questionnaire, the experts were in essence expressing their opinion on how damage to various crop types or building types is related to a range of values of flood inundation parameters, such as floodwater depth and velocity. However, the loss data compiled from the completed questionnaires were not sufficient for the construction of workable damage curves; to overcome this problem, a Weighted Monte Carlo method was implemented, in order to generate extra synthetic datasets with statistical properties identical to those of the questionnaire-based data. The data generated by the Weighted Monte Carlo method were processed via Logistic Regression techniques in order to develop accurate logistic damage curves for the rural and the urban sectors. A Python-based code was developed, which combines the Weighted Monte Carlo method and the Logistic Regression analysis into a single code (WMCLR Python code). 
Each WMCLR code execution provided a flow velocity-depth damage curve for a specific land use. More specifically, each WMCLR code execution for the agricultural sector generated a damage curve for a specific crop and for every month of the year, thus relating the damage to any crop with floodwater depth, flow velocity and the growth phase of the crop at the time of flooding. Respectively, each WMCLR code execution for the urban sector developed a damage curve for a specific building type, relating structural damage with floodwater depth and velocity. Furthermore, two techno-economic models were developed in Python programming language, in order to estimate monetary values of flood damages to the rural and the urban sector, respectively. A new Monte Carlo simulation was performed, consisting of multiple executions of the techno-economic code, which generated multiple damage cost estimates. Each execution used the proper WMCLR simulated damage curve. The uncertainty analysis of the damage estimates established the accuracy and reliability of the proposed methodology for the synthetic damage curves' development.
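The two-stage WMCLR idea, weighted resampling of expert responses followed by a logistic fit, can be sketched in miniature. The data below are invented, the resampling uses per-response weights as a stand-in for the paper's statistical matching, and the one-covariate logistic fit (depth only, by plain gradient ascent) is a simplification of the depth-velocity curves described above.

```python
import math, random

random.seed(11)

# hypothetical expert responses: (floodwater depth in m, damaged? 1/0, expert weight)
responses = [(0.2, 0, 2.0), (0.5, 0, 1.0), (0.8, 1, 1.0), (1.0, 1, 2.0),
             (1.5, 1, 1.0), (0.3, 0, 1.0), (1.2, 1, 1.0), (0.6, 1, 0.5)]

def weighted_monte_carlo(data, n=500):
    # generate a synthetic dataset by resampling proportionally to the weights
    w = [r[2] for r in data]
    return [random.choices(data, weights=w)[0][:2] for _ in range(n)]

def fit_logistic(samples, lr=0.5, epochs=2000):
    # P(damage | depth) = 1 / (1 + exp(-(a + b * depth))), fitted by gradient ascent
    a, b = 0.0, 0.0
    for _ in range(epochs):
        ga = gb = 0.0
        for depth, y in samples:
            p = 1.0 / (1.0 + math.exp(-(a + b * depth)))
            ga += y - p
            gb += (y - p) * depth
        a += lr * ga / len(samples)
        b += lr * gb / len(samples)
    return a, b

a, b = fit_logistic(weighted_monte_carlo(responses))   # the fitted damage curve
```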
AN ASSESSMENT OF MCNP WEIGHT WINDOWS
J. S. HENDRICKS; C. N. CULBERTSON
2000-01-01
The weight window variance reduction method in the general-purpose Monte Carlo N-Particle radiation transport code MCNP™ has recently been rewritten. In particular, it is now possible to generate weight window importance functions on a superimposed mesh, eliminating the need to subdivide geometries for variance reduction purposes. Our assessment addresses the following questions: (1) Does the new MCNP4C treatment utilize weight windows as well as the former MCNP4B treatment? (2) Does the new MCNP4C weight window generator generate importance functions as well as MCNP4B? (3) How do superimposed mesh weight windows compare to cell-based weight windows? (4) What are the shortcomings of the new MCNP4C weight window generator? Our assessment was carried out with five neutron and photon shielding problems chosen for their demanding variance reduction requirements. The problems were an oil well logging problem, the Oak Ridge fusion shielding benchmark problem, a photon skyshine problem, an air-over-ground problem, and a sample problem for variance reduction.
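The core weight-window mechanic, splitting particles above the window and playing Russian roulette on those below so that total statistical weight is preserved, can be sketched generically. This is a textbook-style illustration, not MCNP's implementation; the splitting rule and survival weight are simple conventional choices.

```python
import random

random.seed(4)

def apply_weight_window(particles, w_low, w_high, survival=None):
    """Split particles whose weight is above the window; play Russian roulette on
    those below; pass through weights inside [w_low, w_high]. Total weight is
    preserved exactly under splitting and in expectation under roulette."""
    if survival is None:
        survival = 0.5 * (w_low + w_high)
    out = []
    for w in particles:
        if w > w_high:
            n = int(w / w_high) + 1        # split into n copies of weight w / n
            out.extend([w / n] * n)
        elif w < w_low:
            if random.random() < w / survival:
                out.append(survival)       # survives roulette with boosted weight
            # else: killed, weight conserved on average
        else:
            out.append(w)
    return out
```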
NASA Astrophysics Data System (ADS)
Miller, G. L.; Lu, D.; Ye, M.; Curtis, G. P.; Mendes, B. S.; Draper, D.
2010-12-01
Parametric uncertainty in groundwater modeling is commonly assessed using the first-order second-moment method, which yields linear confidence/prediction intervals. More advanced techniques are able to produce nonlinear confidence/prediction intervals that are more accurate than the linear intervals for nonlinear models. However, both methods rely on certain assumptions, such as normality of the model parameters. We developed a Markov chain Monte Carlo (MCMC) method to directly investigate the parametric distributions and confidence/prediction intervals. The MCMC results are used to evaluate the accuracy of the linear and nonlinear confidence/prediction intervals. The MCMC method is applied to nonlinear surface complexation models developed by Kohler et al. (1996) to simulate reactive transport of uranium (VI). The breakthrough data of Kohler et al. (1996), obtained from a series of column experiments, are used as the basis of the investigation. The calibrated parameters of the models are the equilibrium constants of the surface complexation reactions and the fractions of functional groups. The Morris method sensitivity analysis shows that all of the parameters exhibit highly nonlinear effects on the simulation. The MCMC method is combined with a traditional optimization method to improve computational efficiency. The parameters of the surface complexation models are first calibrated using a global optimization technique, multi-start quasi-Newton BFGS, which employs an approximation to the Hessian. The parameter correlation is measured by the covariance matrix computed via the Fisher information matrix. Parameter ranges are necessary to improve convergence of the MCMC simulation, even when the adaptive Metropolis method is used. The MCMC results indicate that the parameters do not necessarily follow a normal distribution and that the nonlinear intervals are more accurate than the linear intervals for the nonlinear surface complexation models.
In comparison with the linear and nonlinear prediction intervals, the MCMC prediction intervals are more robust in simulating breakthrough curves that were not used for the parameter calibration and the estimation of parameter distributions.
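The MCMC machinery behind such interval estimates can be sketched with a plain random-walk Metropolis sampler. The toy model below (an exponential decay fitted to synthetic noisy data) is an invented stand-in for the surface complexation models, and the sampler is basic Metropolis rather than the adaptive variant mentioned above.

```python
import math, random

random.seed(2)

# synthetic "breakthrough-like" data from a toy nonlinear model y = exp(-k t) + noise
k_true, sigma = 0.8, 0.05
ts = [0.5 * i for i in range(1, 11)]
ys = [math.exp(-k_true * t) + random.gauss(0.0, sigma) for t in ts]

def log_post(k):
    if k <= 0.0:
        return -1e18                        # positivity prior
    return -sum((y - math.exp(-k * t)) ** 2 for t, y in zip(ts, ys)) / (2.0 * sigma ** 2)

def metropolis(n=20000, step=0.05):
    k, lp = 1.0, log_post(1.0)
    chain = []
    for _ in range(n):
        kp = k + random.gauss(0.0, step)    # random-walk proposal
        lpp = log_post(kp)
        if math.log(random.random()) < lpp - lp:
            k, lp = kp, lpp                 # Metropolis acceptance
        chain.append(k)
    return chain[n // 2:]                   # discard burn-in

chain = metropolis()
s = sorted(chain)
lo, hi = s[int(0.025 * len(s))], s[int(0.975 * len(s))]   # 95% credible interval
```

The sampled chain gives the parameter distribution directly, with no normality assumption.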
NASA Technical Reports Server (NTRS)
Palmer, Grant; Prabhu, Dinesh; Cruden, Brett A.
2013-01-01
The 2013-2022 Decadal Survey for planetary exploration has identified probe missions to Uranus and Saturn as high priorities. This work endeavors to examine the uncertainty in determining aeroheating in such entry environments. Representative entry trajectories are constructed using the TRAJ software. Flowfields at selected points on the trajectories are then computed using the Data Parallel Line Relaxation (DPLR) computational fluid dynamics code. A Monte Carlo study is performed on the DPLR input parameters to determine the uncertainty in the predicted aeroheating, and correlation coefficients are examined to identify which input parameters have the most influence on the uncertainty. A review of the present best practices for input parameters (e.g. transport coefficients and vibrational relaxation times) is also conducted. It is found that the 2σ uncertainty for heating on Uranus entry is no more than 2.1%, assuming an equilibrium catalytic wall, with the uncertainty being determined primarily by diffusion and the H2 recombination rate within the boundary layer. However, if the wall is assumed to be partially or non-catalytic, this uncertainty may increase to as much as 18%. The catalytic wall model can contribute more than a 3× change in heat flux and a 20% variation in film coefficient. Therefore, coupled material-response/fluid-dynamics models are recommended for this problem. It was also found that much of this variability is artificially suppressed when a constant Schmidt number approach is implemented. Because the boundary layer is reacting, it is necessary to employ self-consistent effective binary diffusion to obtain a correct thermal transport solution. For Saturn entries, the 2σ uncertainty for convective heating was less than 3.7%. The major uncertainty driver depended on shock temperature/velocity, changing from boundary-layer thermal conductivity to diffusivity and then to the shock-layer ionization rate as velocity increases.
While radiative heating for Uranus entry was negligible, the nominal solution for Saturn computed up to 20% radiative heating at the highest velocity examined. The radiative heating followed a non-normal distribution, with up to a 3× variation in magnitude. This uncertainty is driven by the H2 dissociation rate, as H2 that persists in the hot non-equilibrium zone contributes significantly to radiation.
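The input-uncertainty workflow, sampling the uncertain inputs, propagating them through the model, and ranking drivers by correlation coefficient, can be sketched with a toy surrogate. The closed-form "heat flux" function and the input uncertainty levels below are invented for illustration; the real study propagates the inputs through DPLR, not through any formula like this.

```python
import math, random

random.seed(9)

def toy_heat_flux(diff, recomb, cond):
    # hypothetical surrogate, NOT DPLR: heating dominated by diffusion and recombination
    return 100.0 * diff ** 0.8 * (1.0 + 0.5 * recomb) + 2.0 * cond

def mc_uncertainty(n=5000):
    samples = []
    for _ in range(n):
        params = (random.gauss(1.0, 0.05),   # normalized diffusion coefficient, +/- 5%
                  random.gauss(1.0, 0.10),   # normalized recombination rate, +/- 10%
                  random.gauss(1.0, 0.05))   # normalized thermal conductivity, +/- 5%
        samples.append((params, toy_heat_flux(*params)))
    qs = [q for _, q in samples]
    mean = sum(qs) / n
    sd = math.sqrt(sum((q - mean) ** 2 for q in qs) / (n - 1))
    def corr(idx):
        # Pearson correlation of input idx with the output, used to rank drivers
        xs = [p[idx] for p, _ in samples]
        mx = sum(xs) / n
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
        return sum((x - mx) * (q - mean) for x, q in zip(xs, qs)) / ((n - 1) * sx * sd)
    return 2.0 * sd / mean, [corr(i) for i in range(3)]

rel_2sigma, corrs = mc_uncertainty()   # relative 2-sigma uncertainty and input correlations
```

In this toy setup the diffusion and recombination inputs dominate, and conductivity barely registers, mirroring how correlation coefficients identify the influential inputs.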
ERIC Educational Resources Information Center
Carsey, Thomas M.; Harden, Jeffrey J.
2015-01-01
Graduate students in political science come to the discipline interested in exploring important political questions, such as "What causes war?" or "What policies promote economic growth?" However, they typically do not arrive prepared to address those questions using quantitative methods. Graduate methods instructors must…
Shi, C. Y.; Xu, X. George; Stabin, Michael G.
2008-07-15
Estimates of radiation absorbed doses from radionuclides internally deposited in a pregnant woman and her fetus are very important due to elevated fetal radiosensitivity. This paper reports a set of specific absorbed fractions (SAFs) for use with the dosimetry schema developed by the Society of Nuclear Medicine's Medical Internal Radiation Dose (MIRD) Committee. The calculations were based on three newly constructed pregnant female anatomic models, called RPI-P3, RPI-P6, and RPI-P9, that represent adult females at 3-, 6-, and 9-month gestational periods, respectively. Advanced Boundary REPresentation (BREP) surface-geometry modeling methods were used to create anatomically realistic geometries and organ volumes that were carefully adjusted to agree with the latest ICRP reference values. A Monte Carlo user code, EGS4-VLSI, was used to simulate internal photon emitters ranging from 10 keV to 4 MeV. SAF values were calculated and compared with previous data derived from stylized models of simplified geometries and with a model of a 7.5-month pregnant female developed previously from partial-body CT images. The results show considerable differences between these models for low energy photons, but generally good agreement at higher energies. These differences are caused mainly by different organ shapes and positions. Other factors, such as the organ mass, the source-to-target-organ centroid distance, and the Monte Carlo code used in each study, played lesser roles in the observed differences. Since the SAF values reported in this study are based on models that are anatomically more realistic than previous models, these data are recommended for future applications as standard reference values in internal dosimetry involving pregnant females.
NASA Astrophysics Data System (ADS)
Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming
2011-02-01
High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to get reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-mismatched boundaries that follow from the MC simulation, the GPU-cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo H.; Good, Brian; Noebe, Ronald D.; Honecy, Frank; Abel, Phillip
1999-01-01
Large-scale simulations of dynamic processes at the atomic level have developed into one of the main areas of work in computational materials science. Until recently, severe computational restrictions, as well as the lack of accurate methods for calculating the energetics, resulted in slower growth in the area than that required by current alloy design programs. The Computational Materials Group at the NASA Lewis Research Center is devoted to the development of powerful, accurate, economical tools to aid in alloy design. These include the BFS (Bozzolo, Ferrante, and Smith) method for alloys (ref. 1) and the development of dedicated software for large-scale simulations based on Monte Carlo-Metropolis numerical techniques, as well as state-of-the-art visualization methods. Our previous effort linking theoretical and computational modeling resulted in the successful prediction of the microstructure of a five-element intermetallic alloy, in excellent agreement with experimental results (refs. 2 and 3). This effort also produced a complete description of the role of alloying additions in intermetallic binary, ternary, and higher order alloys (ref. 4).
NASA Astrophysics Data System (ADS)
Querol, A.; Gallardo, S.; Ródenas, J.; Verdú, G.
2015-11-01
In environmental radioactivity measurements, High Purity Germanium (HPGe) detectors are commonly used due to their excellent resolution. Efficiency calibration of detectors is essential to determine the activity of radionuclides. The Monte Carlo method has proved to be a powerful tool to complement efficiency calculations. In aged detectors, efficiency partially deteriorates due to the increase in dead-layer thickness and the consequent decrease in active volume. The characterization of radiation transport in the dead layer is essential for a realistic HPGe simulation. In this work, the MCNP5 code is used to calculate the detector efficiency. The F4MESH tally is used to determine the photon and electron fluence in the dead layer and the active volume. The energy deposited in the Ge crystal has been analyzed using the *F8 tally. The F8 tally is used to obtain spectra and to calculate the detector efficiency. When the photon fluence and the energy deposition in the crystal are known, unfolding methods can be used to estimate the activity of a given source. In this way, the efficiency is obtained and serves to verify the value obtained by other methods.
Novel phase-space Monte-Carlo method for quench dynamics in 1D and 2D spin models
NASA Astrophysics Data System (ADS)
Pikovski, Alexander; Schachenmayer, Johannes; Rey, Ana Maria
2015-05-01
An important outstanding problem is the efficient numerical computation of quench dynamics in large spin systems. We propose a semiclassical method to study many-body spin dynamics in generic spin lattice models. The method, named DTWA, is based on a novel type of discrete Monte Carlo sampling in phase space. We demonstrate the power of the technique by comparisons with analytical and numerically exact calculations. It is shown that DTWA captures the dynamics of one- and two-point correlations in 1D systems. We also use DTWA to study the dynamics of correlations in 2D systems with many spins and different types of long-range couplings, in regimes where other numerical methods are generally unreliable. Computing spatial and time-dependent correlations, we find a sharp change in the speed of propagation of correlations at a critical range of interactions determined by the system dimension. The investigations are relevant for a broad range of systems including solids, atom-photon systems and ultracold gases of polar molecules, trapped ions, Rydberg, and magnetic atoms. This work has been financially supported by JILA-NSF-PFC-1125844, NSF-PIF-1211914, ARO, AFOSR, AFOSR-MURI.
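For a single spin precessing about a magnetic field, where the semiclassical treatment happens to be exact, the discrete phase-space sampling idea behind DTWA can be sketched as follows. This is a minimal illustrative Python fragment (not the authors' code): a spin initially along +x is represented by discrete transverse components of magnitude 1, each trajectory is evolved classically, and observables come from the sample average.

```python
import math
import random

def dtwa_single_spin(b_field, t, n_samples=4000, seed=1):
    """Discrete-sampling sketch: spin initially along +x precesses about z.
    Each sample draws a discrete phase-space point (s_x = 1, s_y = +/-1),
    evolves it classically, and <s_x(t)> is the sample average."""
    rng = random.Random(seed)
    total_sx = 0.0
    for _ in range(n_samples):
        sx, sy = 1.0, rng.choice((-1.0, 1.0))  # discrete initial condition
        # exact classical precession about the z axis for time t
        total_sx += sx * math.cos(b_field * t) - sy * math.sin(b_field * t)
    return total_sx / n_samples

val = dtwa_single_spin(1.0, 0.5)  # should approach cos(0.5)
```

The random +/-1 components average to zero, so the estimate converges to the exact expectation cos(Bt); interactions between many spins make the classical trajectories nontrivial, which is where the method earns its keep.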
NASA Astrophysics Data System (ADS)
Moradkhani, Hamid; Dechant, Caleb M.; Sorooshian, Soroosh
2012-12-01
Particle filters (PFs) have become popular for the assimilation of a wide range of hydrologic variables in recent years. With this increased use, it has become necessary to extend the applicability of this technique to complex hydrologic/land surface models and to make these methods more viable for operational probabilistic prediction. To make the PF a more suitable option in these scenarios, it is necessary to improve the reliability of these techniques. Improved reliability in the PF is achieved in this work through an improved parameter search, with the use of variable variance multipliers and Markov chain Monte Carlo methods. Application of these methods to the PF allows for a more thorough exploration of the posterior distribution, leading to more complete characterization of the posterior distribution and reducing the risk of sample impoverishment. This leads to a PF that is more efficient and provides more reliable predictions. This study introduces the theory behind the proposed algorithm, with application to a hydrologic model. Results from both real and synthetic studies suggest that the proposed filter significantly increases the effectiveness of the PF, with a marginal increase in the computational demand for hydrologic prediction.
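The baseline that the paper improves on is the bootstrap particle filter. The Python sketch below shows one generic propagate-weight-resample update (not the paper's variable-variance/MCMC-augmented algorithm); the random-walk state model, the Gaussian likelihood, and all numbers are invented for illustration.

```python
import math
import random

def particle_filter_step(particles, observation, obs_std, rng):
    """One bootstrap particle-filter update: propagate with random-walk
    noise, weight by a Gaussian likelihood, then resample (the step whose
    degeneracy the paper's MCMC moves are designed to mitigate)."""
    # propagate each particle through a trivial random-walk "model"
    particles = [p + rng.gauss(0.0, 0.1) for p in particles]
    # weight by the likelihood of the observation given each particle
    weights = [math.exp(-0.5 * ((observation - p) / obs_std) ** 2)
               for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # multinomial resampling to concentrate particles in likely regions
    return rng.choices(particles, weights=weights, k=len(particles))

rng = random.Random(0)
parts = [rng.uniform(-5.0, 5.0) for _ in range(500)]
for obs in (1.0, 1.0, 1.0):
    parts = particle_filter_step(parts, obs, 0.5, rng)
estimate = sum(parts) / len(parts)  # posterior mean, should approach 1.0
```

Repeated resampling of the same few particles is exactly the sample-impoverishment risk mentioned above; MCMC move steps after resampling rejuvenate the particle set.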
Biotic indices have been used to assess biological condition by dividing index scores into condition categories. Historically the number of categories has been based on professional judgement. Alternatively, statistical methods such as power analysis can be used to determine the ...
Çatli, Serap
2015-01-01
The high atomic number and density of dental implants lead to major problems in radiotherapy of head and neck tumors: implant-induced artifacts hamper accurate dose calculation and the contouring of tumors and organs. The limits and deficiencies of the algorithms used in treatment planning systems can lead to large errors in dose calculation, and this may adversely affect the patient's treatment. In the present study, four commercial dental implant materials were used: pure titanium, titanium alloy (Ti-6Al-4V), amalgam, and crown. The effects of dental implants on dose distribution were determined with two methods: the pencil beam convolution (PBC) algorithm and a Monte Carlo code for a 6 MV photon beam. The central axis depth doses were calculated on the phantom for a source-skin distance (SSD) of 100 cm and a 10 × 10 cm2 field using both algorithms. The results of the Monte Carlo method and the Eclipse TPS were compared to each other and to those previously reported. In the present study, dose increases in tissue at a distance of 2 mm in front of the dental implants were seen at 6 MV with the Monte Carlo method, due to the backscatter of electrons. The Eclipse treatment planning system (TPS) could not precisely account for the backscatter radiation caused by the dental prostheses: it underestimated the backscatter dose and overestimated the dose beyond the dental implants. The large errors found for the TPS in this study are due to the limits and deficiencies of its algorithms. The accuracy of the PBC algorithm of the Eclipse TPS was evaluated in comparison to Monte Carlo calculations in consideration of the recommendations of the American Association of Physicists in Medicine Radiation Therapy Committee Task Group 65. From the comparisons of the TPS and Monte Carlo calculations, it is verified that Monte Carlo simulation is a good approach to derive the dose distribution in heterogeneous media. PMID:26699323
Turner, Adam C.; Zhang Di; Kim, Hyun J.; DeMarco, John J.; Cagnon, Chris H.; Angel, Erin; Cody, Dianna D.; Stevens, Donna M.; Primak, Andrew N.; McCollough, Cynthia H.; McNitt-Gray, Michael F.
2009-06-15
The purpose of this study was to present a method for generating x-ray source models for performing Monte Carlo (MC) radiation dosimetry simulations of multidetector row CT (MDCT) scanners. These so-called ''equivalent'' source models consist of an energy spectrum and filtration description that are generated based wholly on the measured values and can be used in place of proprietary manufacturer's data for scanner-specific MDCT MC simulations. Required measurements include the half value layers (HVL1 and HVL2) and the bowtie profile (exposure values across the fan beam) for the MDCT scanner of interest. Using these measured values, a method was described (a) to numerically construct a spectrum with the calculated HVLs approximately equal to those measured (equivalent spectrum) and then (b) to determine a filtration scheme (equivalent filter) that attenuates the equivalent spectrum in a similar fashion as the actual filtration attenuates the actual x-ray beam, as measured by the bowtie profile measurements. Using this method, two types of equivalent source models were generated: One using a spectrum based on both HVL1 and HVL2 measurements and its corresponding filtration scheme and the second consisting of a spectrum based only on the measured HVL1 and its corresponding filtration scheme. Finally, a third type of source model was built based on the spectrum and filtration data provided by the scanner's manufacturer. MC simulations using each of these three source model types were evaluated by comparing the accuracy of multiple CT dose index (CTDI) simulations to measured CTDI values for 64-slice scanners from the four major MDCT manufacturers. Comprehensive evaluations were carried out for each scanner using each kVp and bowtie filter combination available. CTDI experiments were performed for both head (16 cm in diameter) and body (32 cm in diameter) CTDI phantoms using both central and peripheral measurement positions.
Both equivalent source model types result in simulations with an average root mean square (RMS) error between the measured and simulated values of approximately 5% across all scanner and bowtie filter combinations, all kVps, both phantom sizes, and both measurement positions, while data provided from the manufacturers gave an average RMS error of approximately 12% pooled across all conditions. While there was no statistically significant difference between the two types of equivalent source models, both of these model types were shown to be statistically significantly different from the source model based on manufacturer's data. These results demonstrate that an equivalent source model based only on measured values can be used in place of manufacturer's data for Monte Carlo simulations for MDCT dosimetry.
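The HVL-matching step in (a) reduces to a root-finding problem: find the attenuator thickness that halves the transmitted intensity of a candidate spectrum. The Python sketch below illustrates this with an invented two-bin spectrum and invented attenuation coefficients (not the study's data); a real calculation would be exposure-weighted and use tabulated coefficients.

```python
import math

def transmitted_fraction(spectrum, mu, thickness_mm):
    """Intensity-weighted transmission through an attenuating slab.
    `spectrum` maps energy bin -> relative intensity; `mu` maps the same
    bins -> linear attenuation coefficient in 1/mm."""
    total = sum(spectrum.values())
    return sum(i * math.exp(-mu[e] * thickness_mm)
               for e, i in spectrum.items()) / total

def hvl(spectrum, mu, tol=1e-9):
    """Bisection for the thickness that halves the transmitted intensity."""
    lo, hi = 0.0, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if transmitted_fraction(spectrum, mu, mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

spec = {30: 1.0, 60: 1.0}   # toy two-bin spectrum (keV: relative weight)
mu = {30: 0.30, 60: 0.075}  # illustrative attenuation coefficients, 1/mm
h1 = hvl(spec, mu)          # first HVL of the toy spectrum
```

HVL2 would be obtained the same way, behind an additional slab of thickness h1; matching a candidate spectrum then means adjusting its shape until both computed HVLs agree with the measured ones.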
NASA Astrophysics Data System (ADS)
Cepeda, Jose; Luna, Byron Quan; Nadim, Farrokh
2013-04-01
An essential component of a quantitative landslide hazard assessment is establishing the extent of the endangered area. This task requires accurate prediction of the run-out behaviour of a landslide, which includes the estimation of the run-out distance, run-out width, velocities, pressures, and depth of the moving mass and the final configuration of the deposits. One approach to run-out modelling is to reproduce accurately the dynamics of the propagation processes. A number of dynamic numerical models are able to compute the movement of the flow over irregular topographic terrains (3-D) controlled by a complex interaction between mechanical properties that may vary in space and time. Given the number of unknown parameters and the fact that most of the rheological parameters cannot be measured in the laboratory or field, the parametrization of run-out models is very difficult in practice. For this reason, run-out models are mostly used for back-analysis of past events, and very few studies have attempted to achieve forward predictions. Consequently, all models are based on simplified descriptions that attempt to reproduce the general features of the failed mass motion through the use of parameters (mostly controlling shear stresses at the base of the moving mass) which account for aspects not explicitly described or oversimplified. The uncertainties involved in the run-out process have to be approached in a stochastic manner. It is of significant importance to develop methods for quantifying and properly handling the uncertainties in dynamic run-out models, in order to allow a more comprehensive approach to quantitative risk assessment. A method was developed to compute the variation in run-out intensities by using a dynamic run-out model (MassMov2D) and a probabilistic framework based on a Monte Carlo simulation in order to analyze the effect of the uncertainty of input parameters.
The probability density functions of the rheological parameters were generated and sampled leading to a large number of run-out scenarios. In the application of the Monte Carlo method, random samples were generated from the input probability distributions that fitted a Gaussian copula distribution. Each set of samples was used as input to model simulation and the resulting outcome was a spatially displayed intensity map. These maps were created with the results of the probability density functions at each point of the flow track and the deposition zone, having as an output a confidence probability map for the various intensity measures. The goal of this methodology is that the results (in terms of intensity characteristics) can be linked directly to vulnerability curves associated to the elements at risk.
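The Gaussian-copula sampling step can be sketched in a few lines: draw correlated standard normals, map them to uniforms, then push the uniforms through the marginal inverse CDFs. In the Python fragment below the marginals (a uniform friction coefficient and an exponential turbulence coefficient) and the correlation value are invented stand-ins, not the paper's fitted distributions.

```python
import math
import random

def gaussian_copula_samples(n, rho, seed=0):
    """Draw n pairs of dependent rheological parameters whose dependence
    is a 2-D Gaussian copula with correlation rho (illustrative marginals)."""
    rng = random.Random(seed)
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # N(0,1) CDF
    out = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        u1, u2 = phi(z1), phi(z2)            # correlated uniforms
        friction = 0.05 + 0.10 * u1          # uniform on [0.05, 0.15]
        turbulence = -500.0 * math.log(1.0 - u2)  # exponential, mean 500
        out.append((friction, turbulence))
    return out

samples = gaussian_copula_samples(20000, rho=0.7)
```

Each sampled pair would then drive one run-out simulation, and the ensemble of outputs yields the intensity and confidence maps described above.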
Shimkin, Nahum
6 Markov Chain Monte Carlo (MCMC) The underlying idea in MCMC is to replace the iid samples of basic MC methods with dependent samples from an ergodic Markov chain whose limiting (stationary) distribution is the desired target distribution, which we describe next, after a short reminder on Markov chains. 6.1 Markov Chain Basics MCMC applies
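A minimal concrete instance of this idea is the random-walk Metropolis sampler. The sketch below (illustrative, targeting a standard normal) shows how time averages along the dependent chain replace iid-sample averages:

```python
import math
import random

def metropolis_chain(n_steps, seed=0):
    """Random-walk Metropolis sampler whose stationary distribution is the
    standard normal N(0,1): propose a uniform step, accept with probability
    min(1, pi(proposal)/pi(x))."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        proposal = x + rng.uniform(-1.0, 1.0)
        # log pi(x) = -x^2/2 up to a constant, so the acceptance ratio is
        # exp((x^2 - proposal^2)/2), clipped at 1
        if rng.random() < math.exp(min(0.0, 0.5 * (x * x - proposal * proposal))):
            x = proposal
        samples.append(x)
    return samples

chain = metropolis_chain(50000)
mean = sum(chain) / len(chain)
var = sum((s - mean) ** 2 for s in chain) / len(chain)
```

The samples are correlated, but ergodicity guarantees that the empirical mean and variance converge to those of the target, here 0 and 1.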
NASA Astrophysics Data System (ADS)
Díaz, N. Cornejo; Vargas, M. Jurado
2008-02-01
We present the new improved version of our Monte Carlo program DETEFF for detector efficiency calibration in gamma-ray spectrometry. It can be applied to a wide range of sample geometries commonly used for measurements with coaxial gamma-ray detectors: point, rectangular, disk, cylindrical, and Marinelli sources (the last being newly included in this version). The program is a dedicated code, designed specifically for computation of gamma-ray detector efficiency. Therefore, it is more user-friendly and less time-consuming than most multi-purpose programs that are intended for a wide range of applications. The comparison of efficiency values obtained with DETEFF and MCNP4C for a typical HPGe detector and for energies between 40 and 1800 keV for point, cylindrical, and Marinelli geometries gave acceptable results, with relative deviations <2% for most energies. The validity of the program was also tested by comparing the DETEFF-calculated efficiency values with those obtained experimentally using a coaxial HPGe detector for different sources (point, disk, and 250 mL Marinelli beaker) containing 241Am, 109Cd, 57Co, 139Ce, 85Sr, 137Cs, 88Y, and 60Co. The calculated values were in good agreement with the experimental efficiencies for the three geometries considered, with the relative deviations generally being below 3.0%. These results and those obtained during the application of the previous versions indicate the program's suitability as a tool for the efficiency calibration of coaxial gamma-ray detectors, especially in routine measurements such as environmental monitoring.
A New Monte Carlo Filtering Method for the Diagnosis of Mission-Critical Failures
NASA Technical Reports Server (NTRS)
Gay, Gregory; Menzies, Tim; Davies, Misty; Gundy-Burlet, Karen
2009-01-01
Testing large-scale systems is expensive in terms of both time and money. Running simulations early in the process is a proven method of finding the design faults likely to lead to critical system failures, but determining the exact cause of those errors is still time-consuming and requires access to a limited number of domain experts. It is desirable to find an automated method that explores the large number of combinations and is able to isolate likely fault points. Treatment learning is a subset of minimal contrast-set learning that, rather than classifying data into distinct categories, focuses on finding the unique factors that lead to a particular classification. That is, it finds the smallest change to the data that causes the largest change in the class distribution. These treatments, when imposed, are able to identify the settings most likely to cause a mission-critical failure. This research benchmarks two treatment learning methods against standard optimization techniques across three complex systems, including two projects from the Robust Software Engineering (RSE) group within the National Aeronautics and Space Administration (NASA) Ames Research Center. It is shown that these treatment learners are both faster than traditional methods and show demonstrably better results.
NASA Astrophysics Data System (ADS)
Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun
2015-05-01
Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 HU to 3 HU and from 78 HU to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced.
With all the techniques employed, we achieved a computation time of less than 30 s, including the time for both the scatter estimation and CBCT reconstruction steps. The efficacy of our method and its high computational efficiency make it attractive for clinical use.
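One of the speed-up strategies mentioned, interpolation along the angular direction, can be illustrated simply: simulate scatter at a few projection angles only, interpolate the estimates to every acquired angle, and subtract them from the raw projections. The Python fragment below is a toy sketch of that idea, not the authors' GPU implementation; all angles and intensities are invented.

```python
def interpolate_scatter(sparse_angles, sparse_scatter, all_angles):
    """Linearly interpolate MC scatter estimates, simulated at a few
    projection angles only, to every acquired angle (assumes sorted
    sparse_angles covering the full angular range)."""
    out = []
    for a in all_angles:
        for i in range(len(sparse_angles) - 1):
            lo, hi = sparse_angles[i], sparse_angles[i + 1]
            if lo <= a <= hi:
                w = (a - lo) / (hi - lo)
                out.append((1.0 - w) * sparse_scatter[i]
                           + w * sparse_scatter[i + 1])
                break
    return out

def scatter_corrected(raw, scatter):
    """Subtract the scatter estimate from raw projections, clamped so the
    log-transform used in reconstruction stays defined."""
    return [max(r - s, 1e-6) for r, s in zip(raw, scatter)]

sparse = [0.0, 90.0, 180.0]                       # simulated angles (deg)
est = interpolate_scatter(sparse, [10.0, 20.0, 10.0],
                          [0.0, 45.0, 90.0, 135.0, 180.0])
corr = scatter_corrected([100.0] * 5, est)
```

Because scatter varies slowly with gantry angle, a handful of MC simulations suffices, which is a large share of the reported speed-up.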
NASA Astrophysics Data System (ADS)
Kholodtsova, Maria N.; Loschenov, Victor B.; Daul, Christian; Blondel, Walter
2014-05-01
Determining the optical properties of biological tissues in vivo from spectral intensity measurements performed at their surface is still a challenge. Based on the acquired spectroscopic data, the aim is to solve an inverse problem, where the optical parameter values of a forward model are estimated through an optimization procedure on some cost function. In many cases it is an ill-posed problem because of the small number of measurements, errors on the experimental data, and the nature of the forward model output, which may be affected by statistical noise in the case of Monte Carlo (MC) simulation or by approximated values at short inter-fibre distances (for the Diffusion Equation Approximation (DEA)). In the case of optical biopsy, spatially resolved diffuse reflectance spectroscopy is one simple technique that uses various excitation-to-emission fibre distances to probe tissue in depth. The aim of the present contribution is to study the characteristics of some classically used cost functions and optimization methods (the Levenberg-Marquardt algorithm) and how they reach the global minimum when using MC and/or DEA approaches. Several smoothing filters and fitting methods were tested on the reflectance curves, I(r), gathered from MC simulations. We found that smoothing the initial data with local regression using a weighted second-degree polynomial and then fitting the data with a double exponential decay function decreases the probability of the inverse algorithm converging to local minima close to the initial first-guess point.
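The smooth-then-fit step can be illustrated on synthetic data. In the sketch below (Python with NumPy/SciPy), a plain moving average stands in for the local weighted polynomial regression, and the curve parameters and noise level are invented, not taken from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(r, a1, k1, a2, k2):
    """Double-exponential-decay model for a reflectance curve I(r)."""
    return a1 * np.exp(-k1 * r) + a2 * np.exp(-k2 * r)

# Synthetic noisy curve standing in for MC-simulated reflectance
rng = np.random.default_rng(0)
r = np.linspace(0.1, 5.0, 200)
truth = double_exp(r, 1.0, 2.0, 0.3, 0.5)
noisy = truth * (1.0 + 0.02 * rng.standard_normal(r.size))

# Light smoothing before fitting (moving average as a simple stand-in
# for the paper's weighted second-degree local regression)
kernel = np.ones(5) / 5.0
smoothed = np.convolve(noisy, kernel, mode="valid")
r_s = r[2:-2]  # drop the edge points lost to the "valid" convolution

popt, _ = curve_fit(double_exp, r_s, smoothed, p0=[1.0, 1.0, 0.1, 0.1])
fitted = double_exp(r_s, *popt)
```

The fitted smooth curve, rather than the raw noisy one, is what the inverse algorithm would then compare against forward-model predictions, which is what reduces the attraction of spurious local minima.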
NASA Astrophysics Data System (ADS)
Yeh, C. Y.; Lee, C. C.; Chao, T. C.; Lin, M. H.; Lai, P. A.; Liu, F. H.; Tung, C. J.
2014-02-01
This study aims to utilize a measurement-based Monte Carlo (MBMC) method to evaluate the accuracy of dose distributions calculated using the Eclipse radiotherapy treatment planning system (TPS) based on the anisotropic analytical algorithm. Dose distributions were calculated for the nasopharyngeal carcinoma (NPC) patients treated with the intensity modulated radiotherapy (IMRT). Ten NPC IMRT plans were evaluated by comparing their dose distributions with those obtained from the in-house MBMC programs for the same CT images and beam geometry. To reconstruct the fluence distribution of the IMRT field, an efficiency map was obtained by dividing the energy fluence of the intensity modulated field by that of the open field, both acquired from an aS1000 electronic portal imaging device. The integrated image of the non-gated mode was used to acquire the full dose distribution delivered during the IMRT treatment. This efficiency map redistributed the particle weightings of the open field phase-space file for IMRT applications. Dose differences were observed in the tumor and air cavity boundary. The mean difference between MBMC and TPS in terms of the planning target volume coverage was 0.6% (range: 0.0-2.3%). The mean difference for the conformity index was 0.01 (range: 0.0-0.01). In conclusion, the MBMC method serves as an independent IMRT dose verification tool in a clinical setting.
Peter, Silvia; Modregger, Peter; Fix, Michael K.; Volken, Werner; Frei, Daniel; Manser, Peter; Stampanoni, Marco
2014-01-01
Phase-sensitive X-ray imaging shows a high sensitivity towards electron density variations, making it well suited for imaging of soft tissue matter. However, there are still open questions about the details of the image formation process. Here, a framework for numerical simulations of phase-sensitive X-ray imaging is presented, which takes both particle- and wave-like properties of X-rays into consideration. A split approach is presented where we combine a Monte Carlo method (MC) based sample part with a wave optics simulation based propagation part, leading to a framework that takes both particle- and wave-like properties into account. The framework can be adapted to different phase-sensitive imaging methods and has been validated through comparisons with experiments for grating interferometry and propagation-based imaging. The validation of the framework shows that the combination of wave optics and MC has been successfully implemented and yields good agreement between measurements and simulations. This demonstrates that the physical processes relevant for developing a deeper understanding of scattering in the context of phase-sensitive imaging are modelled in a sufficiently accurate manner. The framework can be used for the simulation of phase-sensitive X-ray imaging, for instance for the simulation of grating interferometry or propagation-based imaging. PMID:24763652
NASA Astrophysics Data System (ADS)
Wang, Lei; Iazzi, Mauro; Corboz, Philippe; Troyer, Matthias
2015-03-01
Quantum phase transition (QPT) of Dirac fermions is a fascinating topic both in condensed matter and in high energy physics. Besides its immediate connection to fundamental problems like mass generation and exotic phases of matter, it provides a common playground where state-of-the-art numerical simulations can be crosschecked with various effective field theory predictions, thus deepening our understanding of both fields. The universality class of the QPT is fundamentally different from the usual bosonic field theory because of the coupling to the gapless fermionic mode at the critical point. We study lattice models with spinless and multi-flavor Dirac fermions using the newly developed efficient continuous-time projector quantum Monte Carlo method. Besides eliminating the Trotter error, the method also enables us to directly calculate derivative observables in a continuous range of interaction strengths, thus greatly enhancing the resolution of the quantum critical region. Compatible results are also obtained from infinite projected entangled-pair states calculations. We compare these numerical results with predictions of the Gross-Neveu theory and discuss their physical implications.
NASA Astrophysics Data System (ADS)
Bodammer, N. C.; Kaufmann, J.; Kanowski, M.; Tempelmann, C.
2009-02-01
Diffusion tensor tractography (DTT) allows one to explore axonal connectivity patterns in neuronal tissue by linking local predominant diffusion directions determined by diffusion tensor imaging (DTI). The majority of existing tractography approaches use continuous coordinates for calculating single trajectories through the diffusion tensor field. The tractography algorithm we propose is characterized by (1) a trajectory propagation rule that uses voxel centres as vertices and (2) orientation probabilities for the calculated steps in a trajectory that are obtained from the diffusion tensors of either two or three voxels. These voxels include the last voxel of each previous step and one or two candidate successor voxels. The precision and the accuracy of the suggested method are explored with synthetic data. Results clearly favour probabilities based on two consecutive successor voxels. Evidence is also provided that in any voxel-centre-based tractography approach, there is a need for a probability correction that takes into account the geometry of the acquisition grid. Finally, we provide examples in which the proposed fibre-tracking method is applied to the human optic radiation, the cortico-spinal tracts and to connections between Broca's and Wernicke's areas to demonstrate the performance of the proposed method on measured data.
Zhai, Peng-Wang; Kattawar, George W; Yang, Ping
2008-03-10
We have developed a powerful 3D Monte Carlo code, as part of the Radiance in a Dynamic Ocean (RaDyO) project, which can compute the complete effective Mueller matrix at any detector position in a completely inhomogeneous turbid medium, in particular, a coupled atmosphere-ocean system. The light source can be either passive or active. If the light source is a beam of light, the effective Mueller matrix can be viewed as the complete impulse response Green matrix for the turbid medium. The impulse response Green matrix gives us an insightful way to see how each region of a turbid medium affects every other region. The present code is validated with the multicomponent approach for a plane-parallel system and the spherical harmonic discrete ordinate method for the 3D scalar radiative transfer system. Furthermore, the impulse response relation for a box-type cloud model is studied. This 3D Monte Carlo code will be used to generate impulse response Green matrices for the atmosphere and ocean, which act as inputs to a hybrid matrix operator-Monte Carlo method. The hybrid matrix operator-Monte Carlo method will be presented in part II of this paper. PMID:18327274
Physics-based Predictive Time Propagation Method for Monte Carlo Coupled Depletion Simulations
Johns, Jesse Merlin
2014-12-18
, t_{i+1}), and recomputes the depletion from t_i to t_{i+1}. The resulting EOS atomic density can be computed with a variety of strategies. One such method is to compute the corrected atomic density N_C(r, t_{i+1}). The temporal change in neutron flux ... is "canceled out" over the time interval by calculating the "final" EOS atomic density as N(r, t_{i+1}) = (1/2)[N_P(r, t_{i+1}) + N_C(r, t_{i+1})] (1.18). The methodology can be better visualized in the following pseudo-code: for i := 0 to Nsteps do solve ...
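The averaged predictor-corrector scheme of Eq. (1.18) can be sketched on a one-nuclide toy problem. In the Python fragment below, the time-dependent reaction rate is an invented stand-in for the flux solve of a coupled MC depletion code; the averaging step itself follows the equation.

```python
import math

def depletion_step(n0, loss_rate, dt):
    """Analytic one-nuclide burn at a constant rate: dN/dt = -loss_rate*N."""
    return n0 * math.exp(-loss_rate * dt)

def predictor_corrector_step(n0, rate_at, t0, dt):
    """Eq. (1.18) in miniature: burn with the BOS rate (predictor N_P),
    re-evaluate the rate at EOS using the predicted density, burn again
    (corrector N_C), and average the two EOS densities."""
    n_pred = depletion_step(n0, rate_at(t0, n0), dt)            # N_P
    n_corr = depletion_step(n0, rate_at(t0 + dt, n_pred), dt)   # N_C
    return 0.5 * (n_pred + n_corr)

# Toy "flux" whose loss rate grows linearly in time (ignores the density)
rate = lambda t, n: 0.1 * (1.0 + t)
n_end = predictor_corrector_step(1.0, rate, 0.0, 1.0)
# exact answer for this toy problem: exp(-0.1*(t + t^2/2)) at t=1
```

For this linearly varying rate the averaged result lands much closer to the exact exp(-0.15) than the predictor alone, which is the point of the correction.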
MO-E-18C-02: Hands-On Monte Carlo Project Assignment as a Method to Teach Radiation Physics
Pater, P; Vallieres, M; Seuntjens, J
2014-06-15
Purpose: To present a hands-on project on Monte Carlo methods (MC) recently added to the curriculum and to discuss the students' appreciation. Methods: Since 2012, a 1.5 hour lecture dedicated to MC fundamentals follows the detailed presentation of photon and electron interactions. Students also program all sampling steps (interaction length and type, scattering angle, energy deposit) of a MC photon transport code. A handout structured in a step-by-step fashion guides students in conducting consistency checks. For extra points, students can code a fully working MC simulation that simulates a dose distribution for 50 keV photons. A kerma approximation to dose deposition is assumed. A survey was conducted to which 10 out of the 14 attending students responded. It compared MC knowledge prior to and after the project, questioned the usefulness of teaching radiation physics through MC and surveyed possible project improvements. Results: According to the survey, 76% of students had no or only basic knowledge of MC methods before the class and 65% estimated they had a good to very good understanding of MC methods after attending the class. 80% of students felt that the MC project helped them significantly to understand simulations of dose distributions. On average, students dedicated 12.5 hours to the project and appreciated the balance between hand-holding and open questions/implications. Conclusion: A lecture on MC methods with a hands-on MC programming project requiring about 14 hours has been part of the graduate study curriculum since 2012. MC methods produce "gold standard" dose distributions and are slowly entering routine clinical work, so a fundamental understanding of MC methods should be a requirement for future students. Overall, the lecture and project helped students relate cross-sections to dose deposition and presented the numerical sampling methods behind the simulation of these dose distributions. Research funding from governments of Canada and Quebec.
PP acknowledges partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290)
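The first sampling steps the students implement, interaction length and interaction type, start from inverting the exponential attenuation law. The Python fragment below is an illustrative sketch with an invented attenuation coefficient, not the course handout's code:

```python
import math
import random

def sample_free_path(mu_total, rng):
    """Interaction length: invert the exponential attenuation CDF,
    s = -ln(u)/mu with u uniform on (0, 1]."""
    return -math.log(1.0 - rng.random()) / mu_total

def sample_interaction(mu_photo, mu_compton, rng):
    """Interaction type: choose in proportion to the partial coefficients."""
    if rng.random() < mu_photo / (mu_photo + mu_compton):
        return "photo"
    return "compton"

rng = random.Random(42)
mu = 0.2  # total linear attenuation coefficient, 1/cm (illustrative)
paths = [sample_free_path(mu, rng) for _ in range(20000)]
mean_path = sum(paths) / len(paths)  # should approach 1/mu = 5 cm
```

A consistency check of exactly this kind, comparing the sampled mean free path to 1/mu, is the sort of step-by-step verification the handout walks students through.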
NASA Astrophysics Data System (ADS)
Mizuno, T.; Kanai, Y.; Kataoka, J.; Kiss, M.; Kurita, K.; Pearce, M.; Tajima, H.; Takahashi, H.; Tanaka, T.; Ueno, M.; Umeki, Y.; Yoshida, H.; Arimoto, M.; Axelsson, M.; Marini Bettolo, C.; Bogaert, G.; Chen, P.; Craig, W.; Fukazawa, Y.; Gunji, S.; Kamae, T.; Katsuta, J.; Kawai, N.; Kishimoto, S.; Klamra, W.; Larsson, S.; Madejski, G.; Ng, J. S. T.; Ryde, F.; Rydström, S.; Takahashi, T.; Thurston, T. S.; Varner, G.
2009-03-01
The energy response of plastic scintillators (Eljen Technology EJ-204) to polarized soft gamma-ray photons below 100 keV has been studied, primarily for the balloon-borne polarimeter, PoGOLite. The response calculation includes quenching effects due to low-energy recoil electrons and the position dependence of the light collection efficiency in a 20 cm long scintillator rod. The broadening of the pulse-height spectrum, presumably caused by light transportation processes inside the scintillator, as well as the generation and multiplication of photoelectrons in the photomultiplier tube, were studied experimentally and have also been taken into account. A Monte Carlo simulation based on the Geant4 toolkit was used to model photon interactions in the scintillators. When using the polarized Compton/Rayleigh scattering processes previously corrected by the authors, scintillator spectra and angular distributions of scattered polarized photons could clearly be reproduced, in agreement with the results obtained at a synchrotron beam test conducted at the KEK Photon Factory. Our simulation successfully reproduces the modulation factor, defined as the ratio of the amplitude to the mean of the distribution of the azimuthal scattering angles, within ~5% (relative). Although primarily developed for the PoGOLite mission, the method presented here is also relevant for other missions aiming to measure polarization from astronomical objects using plastic scintillator scatterers.
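The modulation factor defined above, the ratio of the amplitude to the mean of the azimuthal scattering-angle distribution, can be extracted from a binned distribution via its second Fourier harmonic (one common way to evaluate the cos(2*phi) fit; other estimators exist). The Python sketch below checks this on a synthetic histogram with a known 20% modulation:

```python
import math

def modulation_factor(counts):
    """Amplitude/mean of a counts-vs-azimuth histogram, assuming the shape
    C*(1 + M*cos(2*(phi - phi0))); the amplitude is taken from the second
    discrete Fourier component."""
    n = len(counts)
    mean = sum(counts) / n
    c = sum(cnt * math.cos(2 * (2 * math.pi * i / n))
            for i, cnt in enumerate(counts))
    s = sum(cnt * math.sin(2 * (2 * math.pi * i / n))
            for i, cnt in enumerate(counts))
    amplitude = 2.0 * math.hypot(c, s) / n
    return amplitude / mean

# Synthetic azimuthal histogram with a known 20% modulation
nbins = 36
hist = [100.0 * (1.0 + 0.2 * math.cos(2 * (2 * math.pi * i / nbins)))
        for i in range(nbins)]
m = modulation_factor(hist)
```

Applying the same estimator to the simulated and the beam-test histograms is how a "within ~5% (relative)" agreement on the modulation factor would be quantified.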
Error propagation in the computation of volumes in 3D city models with the Monte Carlo method
NASA Astrophysics Data System (ADS)
Biljecki, F.; Ledoux, H.; Stoter, J.
2014-11-01
This paper describes the analysis of how positional uncertainty in 3D city models propagates into uncertainty in the computation of their volumes. Current work on error propagation in GIS is limited to 2D data and 2D GIS operations, especially for rasters. In this research we have (1) developed two engines, one that generates random 3D buildings in CityGML in multiple LODs, and one that applies simulated acquisition errors to the geometry; (2) performed an error propagation analysis on volume computation based on the Monte Carlo method; and (3) worked towards establishing a framework for investigating error propagation in 3D GIS. The results of the experiments show that a comparatively small error in the geometry of a 3D city model may cause significant discrepancies in the computation of its volume. This has consequences for several applications, such as the estimation of energy demand and property taxes. The contribution of this work is twofold: this is the first error propagation analysis in 3D city modelling, and the novel approach and the engines that we have created can be used for analysing most 3D GIS operations, supporting related research efforts in the future.
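The Monte Carlo step in (2) can be sketched with a deliberately simple stand-in for the paper's CityGML engines: perturb each dimension of a box-shaped building with zero-mean Gaussian acquisition noise, recompute the volume, and read the spread off the sample. The box geometry, noise level, and run count below are illustrative assumptions, not values from the paper.

```python
import random
import statistics

def box_volume(length, width, height):
    """Volume of an idealized box-shaped building (a stand-in for the
    paper's CityGML geometries)."""
    return length * width * height

def perturbed_volume(length, width, height, sigma, rng):
    """Apply simulated acquisition error (zero-mean Gaussian noise with
    standard deviation sigma, in metres) to each dimension."""
    return box_volume(length + rng.gauss(0.0, sigma),
                      width + rng.gauss(0.0, sigma),
                      height + rng.gauss(0.0, sigma))

def propagate(length, width, height, sigma, n_runs=10_000, seed=42):
    """Monte Carlo error propagation: mean and spread of the volume."""
    rng = random.Random(seed)
    samples = [perturbed_volume(length, width, height, sigma, rng)
               for _ in range(n_runs)]
    return statistics.mean(samples), statistics.stdev(samples)

mean_v, std_v = propagate(10.0, 8.0, 6.0, sigma=0.3)
print(f"error-free volume: {box_volume(10.0, 8.0, 6.0):.1f} m^3")
print(f"Monte Carlo      : {mean_v:.1f} +/- {std_v:.1f} m^3")
```

Even a 0.3 m positional error per dimension produces a volume spread of several percent, which is the qualitative effect the paper reports.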
Bykov, A V; Priezzhev, A V; Myllylä, Risto A
2011-06-30
Two-dimensional spatial intensity distributions of near-infrared laser radiation diffusely scattered from a strongly scattering medium, whose optical properties are close to those of skin, are obtained using Monte Carlo simulation. The medium contains a cylindrical inhomogeneity with optical properties close to those of blood. It is shown that the stronger absorption and scattering of light by blood, compared to the surrounding medium, cause the intensity of radiation diffusely reflected from the surface of the medium under study and registered at its surface to have a local minimum directly above the cylindrical inhomogeneity. This specific feature makes the method of spatially resolved reflectometry potentially applicable for imaging blood vessels and determining their sizes. It is also shown that blurring of the vessel image increases almost linearly with increasing vessel embedment depth. This relation may be used to determine the embedment depth provided that the optical properties of the scattering media are known. The optimal position of the radiation sources and detectors, providing the best imaging of the vessel under study, is determined.
Calculation of Nonlinear Thermoelectric Coefficients of InAs1-xSbx Using Monte Carlo Method
Sadeghian, RB; Bahk, JH; Bian, ZX; Shakouri, A
2011-12-28
It was found that the nonlinear Peltier effect could take place and increase the cooling power density when a lightly doped thermoelectric material is under a large electrical field. This effect is due to the Seebeck coefficient enhancement from an electron distribution far from equilibrium. In the nonequilibrium transport regime, the solution of the Boltzmann transport equation in the relaxation-time approximation ceases to apply. The Monte Carlo method, on the other hand, proves to be a capable tool for simulation of semiconductor devices at small scales as well as thermoelectric effects with local nonequilibrium charge distribution. InAs1-xSbx is a favorable thermoelectric material for nonlinear operation owing to its high mobility inherited from the binary compounds InSb and InAs. In this work we report simulation results on the nonlinear Peltier power of InAs1-xSbx at low doping levels, at room temperature and at low temperatures. The thermoelectric power factor in nonlinear operation is compared with the maximum value that can be achieved with optimal doping in the linear transport regime.
Mallory, Joel D; Mandelshtam, Vladimir A
2015-01-01
The Diffusion Monte Carlo (DMC) method is applied to the water monomer, dimer, and hexamer, using q-TIP4P/F, one of the simplest empirical water models with flexible monomers. The bias in the time step ($\Delta\tau$) and population size ($N_w$) is investigated. For the binding energies, the bias in $\Delta\tau$ cancels nearly completely, while a noticeable bias in $N_w$ still remains. However, for the isotope shift (e.g., in the dimer binding energies between (H$_2$O)$_2$ and (D$_2$O)$_2$), the systematic errors in $N_w$ do cancel. Consequently, very accurate results for the latter (within $\sim 0.01$ kcal/mol) are obtained with relatively moderate numerical effort ($N_w\sim 10^3$). For the water hexamer and its (D$_2$O)$_6$ isotopomer the DMC results as a function of $N_w$ are examined for the cage and prism isomers. For a given isomer, the issue of the walker population leaking out of the corresponding basin of attraction is addressed by using appropriate geometric constraints. The population size bias f...
Kuss, M.; Markel, T.; Kramer, W.
2011-01-01
Concentrated purchasing patterns of plug-in vehicles may result in localized distribution transformer overload scenarios. Prolonged periods of transformer overloading cause service-life decrements and, in worst-case scenarios, result in tripped thermal relays and residential service outages. This analysis will review distribution transformer load models developed in the IEC 60076 standard, and apply the model to a neighborhood with plug-in hybrids. Residential distribution transformers are sized such that night-time cooling provides thermal recovery from heavy load conditions during the daytime utility peak. It is expected that PHEVs will primarily be charged at night in a residential setting. If not managed properly, some distribution transformers could become overloaded, leading to a reduction in transformer life expectancy, thus increasing costs to utilities and consumers. A Monte Carlo scheme simulated each day of the year, evaluating 100 load scenarios as it swept through the following variables: number of vehicles per transformer, transformer size, and charging rate. A general method for determining expected transformer aging rate will be developed, based on the energy needs of plug-in vehicles loading a residential transformer.
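The scenario sweep can be sketched as follows. The relative ageing rate doubling for every 6 K of hot-spot temperature above 98 °C is the IEC 60076-7 relation for non-thermally-upgraded paper; the hot-spot temperature model, charger ratings, plug-in times, and transformer size below are illustrative assumptions, not the paper's models.

```python
import random

def relative_aging_rate(hot_spot_temp_c):
    """IEC 60076-7 relative ageing rate for non-thermally-upgraded paper:
    the rate doubles for every 6 K above the 98 C reference temperature."""
    return 2.0 ** ((hot_spot_temp_c - 98.0) / 6.0)

def hot_spot_temp(load_kw, rating_kva, ambient_c=25.0):
    """Toy hot-spot model (illustrative, not the IEC thermal model):
    temperature rise grows with the square of the per-unit load."""
    per_unit = load_kw / rating_kva
    return ambient_c + 55.0 * per_unit ** 2

def simulate_day(n_vehicles, charger_kw, base_load_kw, rating_kva, rng):
    """One Monte Carlo scenario: each vehicle plugs in at a random evening
    hour and charges for 4 hours; return the day's mean ageing rate."""
    starts = [rng.randint(17, 22) for _ in range(n_vehicles)]
    total = 0.0
    for hour in range(24):
        ev_load = charger_kw * sum(1 for s in starts if s <= hour < s + 4)
        theta = hot_spot_temp(base_load_kw + ev_load, rating_kva)
        total += relative_aging_rate(theta)
    return total / 24.0

rng = random.Random(1)
scenarios = [simulate_day(4, 6.6, 15.0, 25.0, rng) for _ in range(100)]
mean_rate = sum(scenarios) / len(scenarios)
print(f"mean relative ageing rate over 100 scenarios: {mean_rate:.2f}")
```

A mean rate above 1.0 indicates the transformer is, on average, ageing faster than at its rated hot-spot temperature, which is the overload effect the analysis quantifies.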
Percolation of the site random-cluster model by Monte Carlo method
NASA Astrophysics Data System (ADS)
Wang, Songsong; Zhang, Wanzhou; Ding, Chengxiang
2015-08-01
We propose a site random-cluster model by introducing an additional cluster weight in the partition function of traditional site percolation. To simulate the model on a square lattice, we combine the color-assignation and the Swendsen-Wang methods to design a highly efficient cluster algorithm with a small critical slowing-down phenomenon. To verify whether or not it is consistent with the bond random-cluster model, we measure several quantities, such as the wrapping probability Re, the percolating cluster density P∞, and the magnetic susceptibility per site χp, as well as two exponents, the thermal exponent yt and the fractal dimension yh of the percolating cluster. We find that for the cluster weights q = 1.5, 2, 2.5, 3, 3.5, and 4, the numerical estimates of the exponents yt and yh are consistent with the theoretical values. The universalities of the site random-cluster model and the bond random-cluster model are completely identical. For larger values of q, we find obvious signatures of a first-order percolation transition in the histograms and the hysteresis loops of the percolating cluster density and the energy per site. Our results are helpful for the understanding of the percolation of traditional statistical models.
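The paper's color-assignation/Swendsen-Wang algorithm is beyond a short sketch, but its q = 1 special case is ordinary site percolation, where percolation quantities can be estimated by direct sampling. The lattice size, occupation probabilities, and trial count below are illustrative; a top-to-bottom spanning probability stands in for the wrapping probability measured in the paper.

```python
import random
from collections import deque

def spans(grid, size):
    """Breadth-first search from occupied top-row sites; True if an
    occupied nearest-neighbour path reaches the bottom row."""
    queue = deque((0, c) for c in range(size) if grid[0][c])
    seen = set(queue)
    while queue:
        r, c = queue.popleft()
        if r == size - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < size and 0 <= nc < size and grid[nr][nc]
                    and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def spanning_probability(size, p, trials=400, seed=7):
    """Monte Carlo estimate of the spanning probability at occupation p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(size)] for _ in range(size)]
        hits += spans(grid, size)
    return hits / trials

# The site-percolation threshold on the square lattice is p_c ~ 0.5927.
print(spanning_probability(16, 0.30))  # deep in the non-percolating phase
print(spanning_probability(16, 0.80))  # deep in the percolating phase
```

Single-spin-style sampling like this suffers the critical slowing down near p_c that cluster algorithms such as Swendsen-Wang are designed to avoid.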
NASA Astrophysics Data System (ADS)
Zapoměl, J.; Stachiv, I.; Ferfecki, P.
2016-01-01
In this paper, a novel procedure for simultaneous measurement of the volumetric density and Young's modulus of an ultrathin film is proposed and analyzed. It combines the Monte Carlo probabilistic method with the finite-element method (FEM) and experiments carried out on a suspended micro-/nanomechanical resonator with a deposited thin film under different but controllable axial prestresses. Since the procedure requires detection of only two fundamental bending resonant frequencies of a beam under different axial prestress forces, the impacts of noise and damping on the accuracy of the results are minimized, which substantially improves reliability. The volumetric mass density and Young's modulus of the thin film are then evaluated by means of FEM-based computational simulations, and the accuracies of the determined values are estimated using the Monte Carlo probabilistic method, which has been incorporated into the computational procedure.
NASA Astrophysics Data System (ADS)
Chen, X.; Rubin, Y.; Baldocchi, D. D.
2005-12-01
Understanding the interactions between soil, plant, and the atmosphere under water-stressed conditions is important for ecosystems where water availability is limited. In such ecosystems, the amount of water transferred from the soil to the atmosphere is controlled not only by weather conditions and vegetation type but also by soil water availability. Although researchers have proposed different approaches to model the impact of soil moisture on plant activities, the parameters involved are difficult to measure. However, using measurements of observed latent heat and carbon fluxes, as well as soil moisture data, Bayesian inversion methods can be employed to estimate the various model parameters. In our study, the actual evapotranspiration (ET) of an ecosystem is approximated by the Priestley-Taylor relationship, with the Priestley-Taylor coefficient modeled as a function of soil moisture content. Soil moisture limitation on root uptake is characterized in a manner similar to the Feddes model. Bayesian inference is carried out within a graphical-model framework. Because exact inference is difficult to obtain, the Markov chain Monte Carlo (MCMC) method is implemented using the free software package BUGS (Bayesian inference Using Gibbs Sampling). The proposed methodology is applied to a Mediterranean Oak-Savanna FLUXNET site in California, where continuous measurements of actual ET are obtained with the eddy-covariance technique and soil moisture contents are monitored by several time domain reflectometry probes located within the footprint of the flux tower. After the Bayesian inversion, the posterior distributions of all the parameters exhibit enhanced information content compared to the prior distributions. The samples generated from data for year 2003 are used to predict the actual ET in year 2004, and the prediction uncertainties are assessed in terms of confidence intervals.
Our tests also reveal the usefulness of various types of soil moisture data in parameter estimation, which could be used to guide analyses of available data and planning of field data collection activities.
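The kind of Bayesian inversion described above can be illustrated with a minimal Metropolis sampler (the paper uses Gibbs sampling via BUGS; Metropolis is the simpler textbook stand-in). The linear ET model, the flat prior range, the noise level, and the synthetic data are all illustrative assumptions.

```python
import math
import random

def log_posterior(alpha, eq_et, obs_et, sigma=0.5):
    """Gaussian likelihood for ET_obs = alpha * ET_eq + noise, with a flat
    prior on alpha over [0, 3] (model and numbers are illustrative)."""
    if not 0.0 <= alpha <= 3.0:
        return -math.inf
    return -sum((o - alpha * e) ** 2
                for e, o in zip(eq_et, obs_et)) / (2 * sigma ** 2)

def metropolis(eq_et, obs_et, n_steps=5000, step=0.05, seed=3):
    """Random-walk Metropolis sampling of the posterior over alpha."""
    rng = random.Random(seed)
    alpha = 1.0
    lp = log_posterior(alpha, eq_et, obs_et)
    chain = []
    for _ in range(n_steps):
        prop = alpha + rng.gauss(0.0, step)
        lp_prop = log_posterior(prop, eq_et, obs_et)
        # Metropolis acceptance rule
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            alpha, lp = prop, lp_prop
        chain.append(alpha)
    return chain

# Synthetic data generated with alpha = 1.26 (the classic Priestley-Taylor value).
rng = random.Random(0)
eq = [rng.uniform(2.0, 8.0) for _ in range(50)]
obs = [1.26 * e + rng.gauss(0.0, 0.5) for e in eq]
chain = metropolis(eq, obs)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])
print(f"posterior mean of alpha: {posterior_mean:.3f}")
```

Discarding the first 1000 samples as burn-in, the posterior concentrates near the value used to generate the data, mirroring the information gain over the prior reported in the abstract.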
NASA Astrophysics Data System (ADS)
Wen, Xiulan; Xu, Youxiong; Li, Hongsheng; Wang, Fenglin; Sheng, Danghong
2012-09-01
Straightness error is an important parameter in measuring high-precision shafts. The new-generation geometrical product specification (GPS) requires that the measurement uncertainty characterizing the reliability of the result be given together with the measurement result. Most existing research on straightness focuses on error calculation, and only a few projects evaluate the measurement uncertainty based on the Guide to the Expression of Uncertainty in Measurement (GUM). In order to compute spatial straightness error (SSE) accurately and rapidly and to overcome the limitations of GUM, a quasi particle swarm optimization (QPSO) is proposed to solve the minimum zone SSE, and the Monte Carlo method (MCM) is developed to estimate the measurement uncertainty. The mathematical model of the minimum zone SSE is formulated. In QPSO, quasi-random sequences are applied to the generation of the initial position and velocity of particles, and their velocities are modified by the constriction factor approach. A flow of measurement uncertainty evaluation based on MCM is proposed, the core of which is repeated sampling from the probability density function (PDF) of every input quantity and evaluation of the model in each case. The minimum zone SSE of a shaft measured on a coordinate measuring machine (CMM) is calculated by QPSO, and the measurement uncertainty is evaluated by MCM on the basis of an analysis of the uncertainty contributors. The results show that the uncertainty directly influences the product judgment result. It is therefore scientific and reasonable to consider the influence of the uncertainty in judging whether parts are accepted or rejected, especially for those located in the uncertainty zone. The proposed method is especially suitable when the PDF of the measurand cannot adequately be approximated by a Gaussian distribution or a scaled and shifted t-distribution and the measurement model is non-linear.
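The MCM evaluation loop described here, sample each input quantity from its PDF, evaluate the measurement model, and read the estimate, standard uncertainty, and coverage interval off the empirical distribution, can be sketched generically. The measurement model below (a distance from two Gaussian coordinate readings) is a hypothetical stand-in for the paper's SSE model.

```python
import math
import random

def mcm_uncertainty(model, samplers, n=50_000, seed=11):
    """GUM Supplement 1-style Monte Carlo: draw from each input PDF,
    evaluate the model, and report the estimate, standard uncertainty,
    and a 95% coverage interval from the empirical distribution."""
    rng = random.Random(seed)
    ys = sorted(model(*(s(rng) for s in samplers)) for _ in range(n))
    mean = sum(ys) / n
    u = math.sqrt(sum((y - mean) ** 2 for y in ys) / (n - 1))
    return mean, u, (ys[int(0.025 * n)], ys[int(0.975 * n)])

# Hypothetical measurement model (not the paper's SSE model): a distance
# derived from two Gaussian-distributed coordinate readings.
model = lambda x, y: math.hypot(x, y)
samplers = [lambda r: r.gauss(3.0, 0.02), lambda r: r.gauss(4.0, 0.02)]
mean, u, ci = mcm_uncertainty(model, samplers)
print(f"y = {mean:.4f}, u(y) = {u:.4f}, "
      f"95% interval = [{ci[0]:.4f}, {ci[1]:.4f}]")
```

Because the interval is taken from the empirical distribution rather than from a Gaussian assumption, the same loop works unchanged for non-linear models and non-Gaussian measurands, which is precisely the case the abstract highlights.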
NASA Astrophysics Data System (ADS)
Chung, T.; Rachman, A.; Yoshimoto, K.
2013-12-01
For the separation of intrinsic (Qi-1) and scattering (Qs-1) attenuation in South Korea, multiple lapse time window analysis using the direct simulation Monte Carlo (DSMC) method (Yoshimoto, 2000) showed that a depth-dependent velocity model divided into crust and mantle fits better than a uniform velocity model (Chung et al., 2010). Among the several S-wave velocity models, the smallest residuals were observed for a discontinuous Moho model at 32 km with crustal velocity increasing from 3.5 to 3.8 km/s. Chung and Yoshimoto (2013), however, reported that DSMC modeling with a 10 km source depth, corresponding to the average focal depth of the data set, yielded the smallest residuals, and showed the effect of the source depth to be greater than that of the Moho model. This study therefore collected 330 ray paths originating from 39 events with source depths of around 10 km in South Korea (Fig. 1) and analyzed them using the DSMC method in the same way as Chung et al. (2010). The substantial reduction in residuals obtained by changing the source depth indicates an advantage of the DSMC model over the analytic model. As in the previous study, we confirmed that the residual difference due to the Moho model is very small compared to that due to the source-depth change. Based on these data, we will examine the focal mechanism effect, which we previously failed to observe (Chung and Yoshimoto, 2012). References: Chung, T.W., K. Yoshimoto, and S. Yun, 2010, BSSA, 3183-3193. Chung, T.W., and K. Yoshimoto, 2012, J.M.M.T., 85-91 (in Korean). Chung, T.W., and K. Yoshimoto, 2013, Geosciences J., submitted. Yoshimoto, K., 2000, JGR, 6153-6161. Fig. 1. Ray paths of this study.
NASA Astrophysics Data System (ADS)
Esler, Kenneth Paul
Path integral Monte Carlo (PIMC) is a quantum-level simulation method based on a stochastic sampling of the many-body thermal density matrix. Utilizing the imaginary-time formulation of Feynman's sum-over-histories, it includes thermal fluctuations and particle correlations in a natural way. Over the past two decades, PIMC has been applied to the study of the electron gas, hydrogen under extreme pressure, and superfluid helium with great success. However, the computational demand scales with a high power of the atomic number, preventing its application to systems containing heavier elements. In this dissertation, we present the methodological developments necessary to apply this powerful tool to these systems. We begin by introducing the PIMC method. We then explain how effective potentials with position-dependent electron masses can be used to significantly reduce the computational demand of the method for heavier elements, while retaining high accuracy. We explain how these pseudohamiltonians can be integrated into the PIMC simulation by computing the density matrix for the electron-ion pair. We then address the difficulties associated with the long-range behavior of the Coulomb potential, and improve a method to optimally partition particle interactions into real-space and reciprocal-space summations. We discuss the use of twist-averaged boundary conditions to reduce the finite-size effects in our simulations and the fixed-phase method needed to enforce the boundary conditions. Finally, we explain how a PIMC simulation of the electrons can be coupled to a classical Langevin dynamics simulation of the ions to achieve an efficient sampling of all degrees of freedom. After describing these advancements in methodology, we apply our new technology to fluid sodium near its liquid-vapor critical point. 
In particular, we explore the microscopic mechanisms which drive the continuous change from a dense metallic liquid to an expanded insulating vapor above the critical temperature. We show that the dynamic aggregation and dissociation of clusters of atoms play a significant role in determining the conductivity and that the formation of these clusters is highly density and temperature dependent. Finally, we suggest several avenues for research to further improve our simulations.
Li, Jun; Calo, Victor M.
2013-09-15
We present a single-particle Lennard–Jones (L-J) model for CO₂ and N₂. Simplified L-J models for other small polyatomic molecules can be obtained following the methodology described herein. The phase-coexistence diagrams of single-component systems computed using the proposed single-particle models for CO₂ and N₂ agree well with experimental data over a wide range of temperatures. These diagrams are computed using the Markov chain Monte Carlo method based on the Gibbs-NVT ensemble. This good agreement validates the proposed simplified models. That is, with properly selected parameters, the single-particle models have similar accuracy in predicting gas-phase properties as more complex, state-of-the-art molecular models. To further test these single-particle models, three binary mixtures of CH₄, CO₂ and N₂ are studied using a Gibbs-NPT ensemble. These results are compared against experimental data over a wide range of pressures. The single-particle model has similar accuracy in the gas phase as traditional models, although its deviation in the liquid phase is greater. Since the single-particle model reduces the particle number and avoids the time-consuming Ewald summation used to evaluate Coulomb interactions, the proposed model improves the computational efficiency significantly, particularly in the case of high liquid density, where the acceptance rate of the particle-swap trial move increases. We compare, at constant temperature and pressure, the Gibbs-NPT and Gibbs-NVT ensembles to analyze their performance differences and the consistency of their results. As theoretically predicted, the agreement between the simulations implies that Gibbs-NVT can be used to validate Gibbs-NPT predictions when experimental data are not available.
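The Gibbs-ensemble machinery is too large for a short sketch, but its core ingredients, the L-J pair energy under the minimum-image convention and NVT Metropolis displacement moves, can be shown compactly. The system size, box length, temperature, and step size below are illustrative, and the O(N²) energy recomputation per move is kept for clarity rather than efficiency.

```python
import math
import random

def lj(r2, eps=1.0, sig=1.0):
    """Lennard-Jones pair energy as a function of the squared separation."""
    s6 = (sig * sig / r2) ** 3
    return 4.0 * eps * (s6 * s6 - s6)

def total_energy(pos, box):
    """Total energy with the minimum-image convention (O(N^2); sketch only)."""
    e = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r2 = sum(min(abs(a - b), box - abs(a - b)) ** 2
                     for a, b in zip(pos[i], pos[j]))
            e += lj(r2)
    return e

def metropolis_sweep(pos, box, beta, delta, rng):
    """One NVT Metropolis sweep: one attempted displacement per particle."""
    for i in range(len(pos)):
        old, e_old = pos[i], total_energy(pos, box)
        pos[i] = tuple((x + rng.uniform(-delta, delta)) % box for x in old)
        d_e = total_energy(pos, box) - e_old
        if d_e > 0.0 and rng.random() >= math.exp(-beta * d_e):
            pos[i] = old  # reject the move

# 27 particles on a cubic lattice in a periodic box (illustrative parameters).
rng = random.Random(5)
pos = [(2.0 * i, 2.0 * j, 2.0 * k)
       for i in range(3) for j in range(3) for k in range(3)]
for _ in range(5):
    metropolis_sweep(pos, 6.0, beta=1.0, delta=0.2, rng=rng)
print(f"energy after 5 sweeps: {total_energy(pos, 6.0):.3f}")
```

A full Gibbs-NVT simulation adds volume-exchange and particle-swap trial moves between two boxes on top of these displacement moves; the swap-move acceptance rate is the quantity the abstract notes improves at high liquid density for the single-particle model.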
NASA Astrophysics Data System (ADS)
Su, Lin; Du, Xining; Liu, Tianyu; Xu, X. George
2014-06-01
An electron-photon coupled Monte Carlo code ARCHER -
Li, Dong; Chen, Bin; Ran, Wei Yu; Wang, Guo Xiang; Wu, Wen Juan
2015-09-01
The voxel-based Monte Carlo method (VMC) is now a gold standard in the simulation of light propagation in turbid media. For complex tissue structures, however, the computational cost will be higher when small voxels are used to improve smoothness of tissue interface and a large number of photons are used to obtain accurate results. To reduce computational cost, criteria were proposed to determine the voxel size and photon number in 3-dimensional VMC simulations with acceptable accuracy and computation time. The selection of the voxel size can be expressed as a function of tissue geometry and optical properties. The photon number should be at least 5 times the total voxel number. These criteria are further applied in developing a photon ray splitting scheme of local grid refinement technique to reduce computational cost of a nonuniform tissue structure with significantly varying optical properties. In the proposed technique, a nonuniform refined grid system is used, where fine grids are used for the tissue with high absorption and complex geometry, and coarse grids are used for the other part. In this technique, the total photon number is selected based on the voxel size of the coarse grid. Furthermore, the photon-splitting scheme is developed to satisfy the statistical accuracy requirement for the dense grid area. Results show that the local grid refinement technique with the photon ray splitting scheme can accelerate the computation by a factor of 7.6 (reducing time consumption from 17.5 to 2.3 h) in the simulation of laser light energy deposition in skin tissue that contains port wine stain lesions. PMID:26417866
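The photon-number criterion quoted above (at least 5 times the total voxel count) is simple enough to encode directly as a planning check. The grid dimensions and planned photon number below are hypothetical examples, not values from the paper.

```python
def required_photons(nx, ny, nz, factor=5):
    """Photon-number criterion quoted in the abstract: at least `factor`
    (= 5) times the total voxel count."""
    return factor * nx * ny * nz

def check_plan(nx, ny, nz, n_photons):
    """Return whether a planned photon number meets the criterion,
    together with the minimum required number."""
    need = required_photons(nx, ny, nz)
    return n_photons >= need, need

# Hypothetical grid: 100 x 100 x 60 voxels with 2 million photons planned.
ok, need = check_plan(100, 100, 60, 2_000_000)
print(ok, need)  # the plan falls short of the 3,000,000-photon minimum
```

In the proposed refinement scheme this check would be applied against the coarse-grid voxel count, with photon splitting then covering the fine-grid regions.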
NASA Astrophysics Data System (ADS)
Gheorghe, Munteanu Bogdan; Alexei, Leahu; Sergiu, Cataranciuc
2013-09-01
We prove a limit theorem for the lifetime distribution of reliability systems whose lifetime is a Pascal convolution of independent and identically distributed random variables. We show that, under some conditions, such distributions may be approximated by Erlang distributions. As a consequence, survival functions for such systems may, respectively, be approximated by Erlang survival functions. Using the Monte Carlo method, we experimentally confirm the theoretical results of our theorem.
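The Monte Carlo confirmation can be sketched for the special case of exponential summands, where the Erlang form is in fact exact (a geometric sum of i.i.d. exponentials is again exponential); for general summands it is the limiting approximation the theorem describes. The parameter values below are illustrative.

```python
import math
import random

def pascal(r, p, rng):
    """Pascal (negative binomial) variate: the number of Bernoulli(p)
    trials needed to obtain r successes."""
    trials, successes = 0, 0
    while successes < r:
        trials += 1
        if rng.random() < p:
            successes += 1
    return trials

def erlang_survival(t, k, rate):
    """Survival function of an Erlang(k, rate) distribution."""
    return math.exp(-rate * t) * sum((rate * t) ** i / math.factorial(i)
                                     for i in range(k))

def mc_survival(t, r, p, lam, trials=20_000, seed=9):
    """Empirical survival of a Pascal convolution of Exp(lam) lifetimes."""
    rng = random.Random(seed)
    hits = sum(sum(rng.expovariate(lam) for _ in range(pascal(r, p, rng))) > t
               for _ in range(trials))
    return hits / trials

t, r, p, lam = 30.0, 2, 0.05, 1.0
print(f"Monte Carlo survival : {mc_survival(t, r, p, lam):.3f}")
print(f"Erlang survival      : {erlang_survival(t, r, p * lam):.3f}")
```

The empirical survival agrees with the Erlang(r, p·lam) survival to within Monte Carlo noise, which is the kind of experimental confirmation the abstract reports.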
Wang, L; Fourkal, E; Hayes, S; Jin, L; Ma, C
2014-06-01
Purpose: To study the dosimetric differences resulting from using the pencil beam algorithm instead of Monte Carlo (MC) methods for tumors adjacent to the skull. Methods: We retrospectively calculated the dosimetric differences between the ray-tracing (RT) and MC algorithms for brain tumors treated with CyberKnife located adjacent to the skull for 18 patients (27 tumors in total). The median tumor size was 0.53 cc (range 0.018 cc to 26.2 cc). The absolute mean distance from the tumor to the skull was 2.11 mm (range -17.0 mm to 9.2 mm). The dosimetric variables examined include the mean, maximum, and minimum doses to the target, the target coverage (TC), and the conformality index. The MC calculation used the same MUs as the RT dose calculation without further normalization, with 1% statistical uncertainty. The differences were analyzed by tumor size and distance from the skull. Results: The TC was generally reduced with the MC calculation (24 out of 27 cases). The average difference in TC between RT and MC was 3.3% (range 0.0% to 23.5%). When the TC was deemed unacceptable, the plans were re-normalized in order to increase the TC to 99%. This resulted in a 6.9% maximum change in the prescription isodose line. The maximum changes in the mean, maximum, and minimum doses were 5.4%, 7.7%, and 8.4%, respectively, before re-normalization. When the TC was analyzed with regard to target size, the worst coverage occurred with the smallest targets (0.018 cc). When the TC was analyzed with regard to the distance to the skull, there was no correlation between proximity to the skull and the TC difference between the RT and MC plans. Conclusions: For smaller targets (< 4.0 cc), MC should be used to re-evaluate the dose coverage after RT is used for the initial dose calculation, in order to ensure target coverage.
F. -J. Jiang
2010-09-30
Motivated by the so-called cubical regime in magnon chiral perturbation theory, we propose a new method to calculate a low-energy constant, namely the spin-wave velocity $c$, of spin-1/2 antiferromagnets with $O(N)$ symmetry in a Monte Carlo simulation. Specifically, we suggest that $c$ can be determined by $c = L/\beta$ when the squares of the spatial and temporal winding numbers are tuned to be the same in the Monte Carlo calculations. Here $\beta$ and $L$ are the inverse temperature and the box size used in the simulations when this condition is met. We verify the validity of this idea by simulating the quantum spin-1/2 XY model. The $c$ obtained by using the squares of winding numbers is $c = 1.1348(5)Ja$, which is consistent with the known values of $c$ in the literature. Unlike other conventional approaches, our new idea provides a direct method to measure $c$. Further, by simultaneously fitting our Monte Carlo data for the susceptibilities $\chi_{11}$ and spin susceptibilities $\chi$ to their theoretical predictions from magnon chiral perturbation theory, we find that $c$ is given by $c = 1.1347(2)Ja$, which agrees with the value obtained by the new method of using the squares of winding numbers. The low-energy constants, namely the magnetization density ${\cal M}$ and spin stiffness $\rho$, of the quantum spin-1/2 XY model are determined as well and are given by ${\cal M} = 0.43561(1)/a^2$ and $\rho = 0.26974(5)J$, respectively. Thanks to the predictive power of magnon chiral perturbation theory, which places a very restrictive constraint among the low-energy constants for the model considered here, the ${\cal M}$ we present in this study is much more precise than previous Monte Carlo results.
arXiv:cond-mat/9804288v2 (3 Aug 1998). Monte Carlo Eigenvalue Methods in Quantum Mechanics and
Nightingale, Peter
[Front matter of a chapter in the Advances in Chemical Physics series (David M. Ferguson, J. Ilja Siepmann, and Donald G. Truhlar, series editors; I. Prigogine and Stuart A. Rice); the table of contents (I Introduction, p. 2, through VII Closing Comments, p. 38) is omitted. A surviving fragment of the abstract reads: "... states can be used to reduce the errors of Monte Carlo estimates."] I. INTRODUCTION: Many important problems in computational physics
Asllanaj, Fatmir; Contassot-Vivier, Sylvain; Liemert, André; Kienle, Alwin
2014-01-01
We examine the accuracy of a modified finite volume method compared to analytical and Monte Carlo solutions for solving the radiative transfer equation. The model is used for predicting light propagation within a two-dimensional absorbing and highly forward-scattering medium such as biological tissue subjected to a collimated light beam. Numerical simulations for the spatially resolved reflectance and transmittance are presented considering refractive index mismatch with Fresnel reflection at the interface, for homogeneous and two-layered media. Time-dependent as well as steady-state cases are considered. In the steady state, the modified finite volume method is found to be in good agreement with the other two methods. The relative differences between the solutions decrease with spatial mesh refinement applied to the modified finite volume method, falling below 2.4%. In the time domain, the fourth-order Runge-Kutta method is used for the time semi-discretization of the radiative transfer equation. Agreement among the modified finite volume method, Runge-Kutta method, and Monte Carlo solutions is shown, but with relative differences higher than in the steady state. PMID:24390371
NASA Astrophysics Data System (ADS)
Ródenas, José; Gallardo, Sergio; Ballester, Silvia; Primault, Virginie; Ortiz, Josefina
2007-10-01
A gamma spectrometer including an HP Ge detector is commonly used for environmental radioactivity measurements. The efficiency of the detector should be calibrated for each geometry considered. Simulation of the calibration procedure with a validated computer program is an important auxiliary tool for environmental radioactivity laboratories. The MCNP code based on the Monte Carlo method has been applied to simulate the detection process in order to obtain spectrum peaks and determine the efficiency curve for each modelled geometry. The source used for measurements was a calibration mixed radionuclide gamma reference solution, covering a wide energy range (50-2000 keV). Two measurement geometries - Marinelli beaker and Petri boxes - as well as different materials - water, charcoal, sand - containing the source have been considered. Results obtained from the Monte Carlo model have been compared with experimental measurements in the laboratory in order to validate the model.
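The efficiency calibration described here reduces, for each geometry and gamma line, to the ratio of full-energy-peak counts to emitted photons. The sketch below replaces the MCNP transport step with a fixed detection probability per photon, and the line energies and efficiencies are assumed values for illustration, not results from the paper.

```python
import math
import random

def simulate_peak_counts(n_emitted, true_eff, rng):
    """Toy stand-in for the MCNP transport step: each emitted photon lands
    in the full-energy peak with a fixed probability."""
    return sum(1 for _ in range(n_emitted) if rng.random() < true_eff)

def efficiency_with_uncertainty(counts, n_emitted):
    """Full-energy-peak efficiency and its binomial standard uncertainty."""
    eff = counts / n_emitted
    return eff, math.sqrt(eff * (1.0 - eff) / n_emitted)

# Assumed (hypothetical) peak efficiencies for three lines of a mixed source.
lines = {59.5: 0.050, 661.7: 0.012, 1332.5: 0.007}  # keV -> efficiency
rng = random.Random(2)
for energy, true_eff in lines.items():
    counts = simulate_peak_counts(100_000, true_eff, rng)
    eff, u = efficiency_with_uncertainty(counts, 100_000)
    print(f"{energy:7.1f} keV: efficiency = {eff:.4f} +/- {u:.4f}")
```

Repeating this per geometry (Marinelli beaker, Petri box) and per filling material yields the efficiency curves that are then compared against laboratory measurements.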
Wong, Un-Hong; Wu, Yunzhao; Wong, Hon-Cheng; Liang, Yanyan; Tang, Zesheng
2014-01-01
In this paper, we model the reflectance of the lunar regolith by a new method combining Monte Carlo ray tracing and Hapke's model. The existing modeling methods exploit either a radiative transfer model or a geometric optical model. However, the measured data from an Interference Imaging spectrometer (IIM) on an orbiter were affected not only by the composition of minerals but also by the environmental factors. These factors cannot be well addressed by a single model alone. Our method implemented Monte Carlo ray tracing for simulating the large-scale effects such as the reflection of topography of the lunar soil and Hapke's model for calculating the reflection intensity of the internal scattering effects of particles of the lunar soil. Therefore, both the large-scale and microscale effects are considered in our method, providing a more accurate modeling of the reflectance of the lunar regolith. Simulation results using the Lunar Soil Characterization Consortium (LSCC) data and Chang'E-1 elevation map show that our method is effective and useful. We have also applied our method to Chang'E-1 IIM data for removing the influence of lunar topography to the reflectance of the lunar soil and to generate more realistic visualizations of the lunar surface. PMID:24526892
NASA Astrophysics Data System (ADS)
Määttänen, Anni; Douspis, Marian
2015-04-01
In recent years, several datasets on deposition-mode ice nucleation in Martian conditions have shown that the effectiveness of mineral dust as a condensation nucleus decreases with temperature (Iraci et al., 2010; Phebus et al., 2011; Trainer et al., 2009). Previously, nucleation modelling in Martian conditions used only constant values of this so-called contact parameter, provided by the few studies previously published on the topic. The new studies paved the way for a possibly more realistic way of predicting ice crystal formation in the Martian environment. However, the caveat of these studies (Iraci et al., 2010; Phebus et al., 2011) was the limited temperature range, which prevents using the provided (linear) equations for the temperature dependence of the contact parameter in all conditions of cloud formation on Mars. One wide-temperature-range deposition-mode nucleation dataset exists (Trainer et al., 2009), but the substrate used was silicon, which cannot realistically imitate the most abundant ice nucleus on Mars, mineral dust. Nevertheless, this dataset revealed, thanks to measurements spanning from 150 to 240 K, that the behaviour of the contact parameter as a function of temperature is exponential rather than linear, as suggested by previous work. We have combined the previous findings to provide realistic and practical formulae for application in nucleation and atmospheric models. We have analysed the three cited datasets using a Markov chain Monte Carlo (MCMC) method. This method allows us to test and evaluate different functional forms for the temperature dependence of the contact parameter. We perform a data inversion by finding the best fit to the measured data simultaneously at all points for different functional forms of the temperature dependence of the contact angle m(T). The method uses a full nucleation model (Määttänen et al., 2005; Vehkamäki et al., 2007) to calculate the observables at each data point. 
We suggest one new m(T) dependence and test several others. Two of these may be used to avoid unphysical behaviour (m > 1) when m(T) is implemented in heterogeneous nucleation and cloud models. However, more measurements are required to fully constrain the m(T) dependencies. We show the importance of wide-temperature-range datasets for constraining the asymptotic behaviour of m(T), and we call for more experiments over a large temperature range with well-defined particle sizes or size distributions, for different ice nucleus (IN) types and nucleating vapours. This study (Määttänen and Douspis, 2014) provides a new framework for analysing heterogeneous nucleation datasets. The results provide, within the limits of the available datasets, well-behaved m(T) formulations for nucleation and cloud modelling. Iraci, L. T., et al. (2010). Icarus 210, 985-991. Määttänen, A., et al. (2005). J. Geophys. Res. 110, E02002. Määttänen, A. and Douspis, M. (2014). GeoResJ 3-4, 46-55. Phebus, B. D., et al. (2011). J. Geophys. Res. 116, 4009. Trainer, M. G., et al. (2009). J. Phys. Chem. C 113, 2036-2040. Vehkamäki, H., et al. (2007). Atmos. Chem. Phys. 7, 309-313.
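The MCMC analysis described above can be sketched in miniature: a Metropolis random walk over the parameters of an assumed exponential contact-parameter law, fitted to synthetic data. The functional form m(T) = 1 − a·exp(−b(T − 150)), the coefficients, the pseudo-data, and the measurement uncertainty below are all invented for illustration and are not taken from the cited datasets.

```python
import math
import random

random.seed(2)

# Illustrative exponential form for the contact parameter.
# Coefficients and pseudo-data are made up for demonstration only.
def model(T, a, b):
    return 1.0 - a * math.exp(-b * (T - 150.0))

truth = (0.3, 0.02)
data = [(T, model(T, *truth)) for T in range(150, 241, 10)]
sigma = 0.01  # assumed measurement uncertainty

def chi2(a, b):
    return sum(((m - model(T, a, b)) / sigma) ** 2 for T, m in data)

# Metropolis random walk over (a, b)
a, b = 0.5, 0.01
cost = chi2(a, b)
chain = []
for _ in range(20000):
    a_new = a + random.gauss(0, 0.01)
    b_new = b + random.gauss(0, 0.001)
    cost_new = chi2(a_new, b_new)
    if cost_new < cost or random.random() < math.exp(-(cost_new - cost) / 2):
        a, b, cost = a_new, b_new, cost_new
    chain.append((a, b))

burn = chain[len(chain) // 2:]          # discard first half as burn-in
a_hat = sum(s[0] for s in burn) / len(burn)
b_hat = sum(s[1] for s in burn) / len(burn)
```

A real analysis would, as in the abstract, evaluate a full nucleation model at each data point rather than this closed-form m(T).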
NASA Astrophysics Data System (ADS)
Macedonia, Michael D.; Maginn, Edward J.
Configurational-bias Monte Carlo sampling techniques have been developed which overcome the difficulties of efficiently sampling configuration space for all-atom molecular models and for branched species represented with united-atom models. Implementation details of this sampling scheme are discussed. The accuracy of a united-atom force field with nonbonded parameters optimized for zeolite adsorption and of a widely used all-atom force field is evaluated by comparison with experimental sorption isotherms of linear and branched hydrocarbons.
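The core selection step of configurational-bias Monte Carlo can be sketched as follows: one of k trial configurations is picked with probability proportional to its Boltzmann weight, and the Rosenbluth weight W is accumulated for use in the acceptance rule. The trial energies and β below are arbitrary illustrative numbers, not the authors' zeolite implementation.

```python
import math
import random

random.seed(1)

def boltzmann_select(energies, beta=1.0):
    """Configurational-bias step: pick one of k trial energies with
    probability exp(-beta*U_i)/W, where W is the Rosenbluth weight."""
    weights = [math.exp(-beta * u) for u in energies]
    W = sum(weights)
    r = random.random() * W
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i, W
    return len(weights) - 1, W

# k = 5 hypothetical trial insertion energies (arbitrary units)
idx, W = boltzmann_select([0.1, 2.0, 0.5, 5.0, 0.3])
```

Low-energy trials are selected far more often than high-energy ones, which is what makes the growth of long or branched chains efficient.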
NASA Astrophysics Data System (ADS)
Whitmore, Alexander Jason
Concentrating solar power systems are currently the predominant solar power technology for generating electricity at the utility scale. The central receiver system, a type of concentrating solar power system, uses a field of mirrors to concentrate solar radiation onto a receiver where a working fluid is heated to drive a turbine. Current central receiver systems operate on a Rankine cycle, which has a large demand for cooling water. This demand presents a challenge because the ideal locations for solar power plants have arid climates. An alternative to the current receiver technology is the small particle receiver, which has the potential to produce working fluid temperatures suitable for use in a Brayton cycle; such a cycle can be more efficient when pressurized to 0.5 MPa. Using a fused quartz window allows solar energy into the receiver while maintaining the pressurized interior. In this thesis, a detailed spectral, three-dimensional numerical investigation of a cylindrical glass window for a small particle receiver was performed. The window is 1.7 meters in diameter and 0.0254 meters thick. Three Monte Carlo Ray Trace (MCRT) codes are used within this research. The first, MIRVAL, was developed by Sandia National Laboratory and modified by a fellow San Diego State University colleague, Murat Mecit; it produces the solar rays on the exterior surface of the window. The second, developed by Steve Ruther and Pablo Del Campo, models the small particle receiver and provides the infrared spectral directional flux on the interior surface of the window used in this work. The third MCRT code, developed for this work, models radiation heat transfer within the window itself and is coupled to an energy equation solver to produce a temperature distribution. The MCRT program provides a source term to the energy equation.
This, in turn, produces a new temperature field for the MCRT program; together the equations are solved iteratively. These iterations repeat until convergence to a steady-state temperature field is reached. The energy equation was solved using a finite volume method. The window's thermal conductivity is modeled as a function of temperature. This thermal model is used to investigate the effects of different materials, receiver geometries, and interior and exterior convection coefficients. To prevent devitrification and ultimate failure, the window needs to stay below the devitrification temperature of the material. In addition, the temperature gradients within the window need to be kept to a minimum to prevent thermal stresses. A San Diego State University colleague, E-Fann Saung, uses these temperature maps to ensure that the mounting of the window does not produce thermal stresses which can cause cracking in the brittle fused quartz. The simulations in this thesis show that window temperatures are below the devitrification temperature when there are cooling jets on both surfaces of the window. Natural convection on the exterior window surface was explored and does not provide adequate cooling; therefore forced convection is required. Due to the low thermal conductivity of the window, the edge-mounting thermal boundary condition has little effect on the maximum temperature of the window. The simulations also showed that the window absorbed less than 1% of the incoming solar flux but closer to 20% of the infrared radiation emitted by the receiver. The main source of absorbed power is located directly on the interior surface of the window, where the infrared radiation is absorbed.
The geometry of the receiver has a large impact on the amount of emitted power which reached the interior surface of the window, and using a conical shaped receiver dramatically reduced the receiver's infrared flux on the window. The importance of internal emission is explored within this research. Internal emission produces a mo
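The iterative MCRT/energy-equation coupling described above can be illustrated with a toy 1-D analogue: a uniform absorbed power density stands in for the MCRT source tally, and a hypothetical temperature-dependent conductivity makes the problem nonlinear, so the finite-volume conduction solve is repeated until the temperature field stops changing. All numbers are illustrative, not the actual window model.

```python
# Toy 1-D analogue of the coupled MCRT / energy-equation iteration.
N = 21                          # finite-volume nodes through the thickness
L = 0.0254                      # window thickness, m
dx = L / (N - 1)
q = 1.0e5                       # absorbed power density, W/m^3 (MCRT stand-in)
T_surf = 300.0                  # both faces held at the coolant temperature

def conductivity(T):            # assumed k(T) for fused quartz, W/(m K)
    return 1.3 + 1.0e-3 * (T - 300.0)

T = [T_surf] * N
for outer in range(200):        # outer loop: refresh k(T), re-solve conduction
    T_old = T[:]
    for _ in range(500):        # Gauss-Seidel sweeps with k frozen at T_old
        for i in range(1, N - 1):
            k = conductivity(T_old[i])
            T[i] = 0.5 * (T[i - 1] + T[i + 1]) + q * dx * dx / (2.0 * k)
    if max(abs(a - b) for a, b in zip(T, T_old)) < 1e-8:
        break

T_max = max(T)                  # peak temperature at the window midplane
```

For a constant source and near-constant k the peak rise approaches the analytic value qL²/(8k), a useful sanity check on the discretization.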
Park, H.; Densmore, J. D.; Wollaber, A. B.; Knoll, D. A.; Rauenzahn, R. M.
2013-07-01
We have developed a moment-based scale-bridging algorithm for thermal radiative transfer problems. The algorithm takes the form of well-known nonlinear-diffusion acceleration which utilizes a low-order (LO) continuum problem to accelerate the solution of a high-order (HO) kinetic problem. The coupled nonlinear equations that form the LO problem are efficiently solved using a preconditioned Jacobian-free Newton-Krylov method. This work demonstrates the applicability of the scale-bridging algorithm with a Monte Carlo HO solver and reports the computational efficiency of the algorithm in comparison to the well-known Fleck-Cummings algorithm. (authors)
NASA Astrophysics Data System (ADS)
Kryuchkov, S. V.; Kukhar', E. I.; Zav'yalov, D. V.
2015-06-01
It has been shown that the linewidth of cyclotron absorption in band-gap graphene is nonzero even in the absence of electron scattering. The functional temperature dependence of the cyclotron absorption linewidth, which is applicable to band-gap graphene in the absence of collisions, has been analytically determined. The power of the elliptically polarized electromagnetic wave absorbed by graphene in the presence of a dc magnetic field has been numerically calculated. The Monte Carlo numerical experiment has confirmed the analytical calculations based on the Boltzmann equation.
Self-consistent electro-thermal simulations of AlGaN/GaN diodes by means of Monte Carlo method
NASA Astrophysics Data System (ADS)
García, S.; Íñiguez-de-la-Torre, I.; García-Pérez, O.; Mateos, J.; González, T.; Sangaré, P.; Gaquière, C.; Pérez, S.
2015-03-01
In this contribution we present the results from the simulation of an AlGaN/GaN heterostructure diode by means of a Monte Carlo tool in which thermal effects have been included. Two techniques are investigated: (i) a thermal resistance method (TRM) and (ii) an advanced electro-thermal model (ETM) including the solution of the steady-state heat diffusion equation. Initially, a systematic study at constant temperature is performed in order to calibrate the electronic model. Once this task is completed, the electro-thermal methods are coupled with the Monte Carlo electronic simulations. For the TRM, several values of thermal resistance are employed, and for the ETM, the dependence on the thermal conductivity, thickness and die length is analyzed. It is found that the TRM with well-calibrated values of thermal resistance provides behavior similar to ETM simulations under the hypothesis of constant thermal conductivity. Our results are validated against experimental measurements, with the best agreement found when the ETM is used with a temperature-dependent thermal conductivity.
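A minimal sketch of the thermal resistance method: the lattice temperature is updated from the dissipated power through a lumped thermal resistance, while a stand-in expression (hypothetical, not a real GaN transport model) plays the role of the Monte Carlo electronic simulation, with the current dropping as the lattice heats up. All numbers are illustrative.

```python
# TRM self-consistency loop: T = T_amb + R_th * P(T), iterated to a fixed point.
T_amb = 300.0     # ambient temperature, K
R_th = 8.0        # assumed lumped thermal resistance, K/W
V = 10.0          # applied bias, V

def current(T):   # hypothetical mobility-limited current, A
    return 0.5 * (T / 300.0) ** -1.5

T = T_amb
for _ in range(100):              # self-consistency loop
    P = V * current(T)            # dissipated power from the "MC" step
    T_new = T_amb + R_th * P      # TRM temperature update
    if abs(T_new - T) < 1e-9:
        break
    T = T_new
```

The ETM would replace the single R_th by a solve of the steady-state heat diffusion equation over the device geometry, but the outer self-consistency loop has the same shape.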
NASA Astrophysics Data System (ADS)
Tsai, Hui-Yu; Lin, Yung-Chieh; Tyan, Yeu-Sheng
2014-11-01
The purpose of this study was to evaluate organ doses for individual patients undergoing interventional transcatheter arterial embolization (TAE) for hepatocellular carcinoma (HCC) using measurement-based Monte Carlo simulation and adaptive organ segmentation. Five patients were enrolled in this study after institutional ethical approval and informed consent. Gafchromic XR-RV3 films were used to measure entrance surface dose to reconstruct the nonuniform fluence distribution field as the input data in the Monte Carlo simulation. XR-RV3 films were used to measure entrance surface doses due to their lower energy dependence compared with that of XR-RV2 films. To calculate organ doses, each patient's three-dimensional dose distribution was incorporated into CT DICOM images with image segmentation using thresholding and k-means clustering. Organ doses for all patients were estimated. Our dose evaluation system not only evaluated entrance surface doses based on measurements, but also evaluated the 3D dose distribution within patients using simulations. When film measurements were unavailable, the peak skin dose (between 0.68 and 0.82 of a fraction of the cumulative dose) can be calculated from the cumulative dose obtained from TAE dose reports. Successful implementation of this dose evaluation system will aid radiologists and technologists in determining the actual dose distributions within patients undergoing TAE.
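The thresholding/k-means segmentation step mentioned above can be illustrated with a toy 1-D k-means on synthetic CT numbers; the three Gaussian clusters (lung-, soft-tissue- and bone-like HU values) and the initial centers are invented for the demonstration.

```python
import random

random.seed(0)

# Synthetic CT numbers (HU): three tissue-like clusters, values invented.
hu = ([random.gauss(-700, 30) for _ in range(200)]    # lung-like
      + [random.gauss(40, 15) for _ in range(200)]    # soft-tissue-like
      + [random.gauss(700, 50) for _ in range(200)])  # bone-like

centers = [-1000.0, 0.0, 1000.0]   # initial guesses
for _ in range(50):
    # assign each value to the nearest center
    clusters = [[] for _ in centers]
    for v in hu:
        i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
        clusters[i].append(v)
    # recompute centers as cluster means
    new = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
    if new == centers:             # assignments stable: converged
        break
    centers = new
```

In the study itself this clustering is applied to 3-D CT DICOM data to delineate organs before summing the simulated dose in each region.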
Han, Tao; Mikell, Justin K.; Salehpour, Mohammad; Mourtada, Firas
2011-01-01
Purpose: The deterministic Acuros XB (AXB) algorithm was recently implemented in the Eclipse treatment planning system. The goal of this study was to compare AXB performance to Monte Carlo (MC) and two standard clinical convolution methods: the anisotropic analytical algorithm (AAA) and the collapsed-cone convolution (CCC) method. Methods: Homogeneous water and multilayer slab virtual phantoms were used for this study. The multilayer slab phantom had three different materials, representing soft tissue, bone, and lung. Depth dose and lateral dose profiles from AXB v10 in Eclipse were compared to AAA v10 in Eclipse, CCC in Pinnacle3, and EGSnrc MC simulations for 6 and 18 MV photon beams with open fields for both phantoms. In order to further reveal the dosimetric differences between AXB and AAA or CCC, three-dimensional (3D) gamma index analyses were conducted in slab regions and subregions defined by AAPM Task Group 53. Results: The AXB calculations were found to be closer to MC than both AAA and CCC for all the investigated plans, especially in bone and lung regions. The average differences of depth dose profiles between MC and AXB, AAA, or CCC were within 1.1, 4.4, and 2.2%, respectively, for all fields and energies. More specifically, those differences in the bone region were up to 1.1, 6.4, and 1.6%, and in the lung region up to 0.9, 11.6, and 4.5% for AXB, AAA, and CCC, respectively. AXB was also found to have better dose predictions than AAA and CCC at the tissue interfaces where backscatter occurs. 3D gamma index analyses (percent of dose voxels passing a 2%/2 mm criterion) showed that the dose differences between AAA and AXB are significant (under 60% passed) in the bone region for all field sizes at 6 MV and in the lung region for most field sizes at both energies.
The difference between AXB and CCC was generally small (over 90% passed) except in the lung region for the 18 MV 10 × 10 cm² field (over 26% passed) and in the bone region for the 5 × 5 and 10 × 10 cm² fields (over 64% passed). With the criterion relaxed to 5%/2 mm, the pass rates were over 90% for both AAA and CCC relative to AXB for all energies and fields, with the exception of the AAA 18 MV 2.5 × 2.5 cm² field, which still did not pass. Conclusions: In heterogeneous media, AXB dose prediction ability appears to be comparable to MC and superior to current clinical convolution methods. The dose differences between AXB and AAA or CCC are mainly in the bone, lung, and interface regions. The spatial distributions of these differences depend on the field sizes and energies. PMID:21776802
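The gamma-index pass rates quoted above can be illustrated with a minimal 1-D gamma computation under a 2%/2 mm criterion; the Gaussian dose profiles are invented stand-ins for the beam data, not the paper's profiles.

```python
import math

def gamma_1d(ref, evl, dx, dd=0.02, dta=2.0):
    """1-D gamma index: ref/evl are dose samples on a grid with spacing dx (mm);
    dd is the dose criterion (fraction of max), dta the distance criterion (mm).
    Returns the fraction of reference points with gamma <= 1."""
    dmax = max(ref)
    passed = 0
    for i, d_r in enumerate(ref):
        g2_min = float("inf")
        for j, d_e in enumerate(evl):
            dist2 = ((i - j) * dx / dta) ** 2
            diff2 = ((d_e - d_r) / (dd * dmax)) ** 2
            g2_min = min(g2_min, dist2 + diff2)
        passed += g2_min <= 1.0
    return passed / len(ref)

x = [i * 0.5 for i in range(81)]                             # 0..40 mm grid
ref = [math.exp(-((xi - 20) / 8.0) ** 2) for xi in x]
shifted = [math.exp(-((xi - 20.5) / 8.0) ** 2) for xi in x]  # 0.5 mm shift
rate = gamma_1d(ref, shifted, dx=0.5)
```

A 0.5 mm shift passes everywhere under 2%/2 mm, while a shift larger than the DTA fails in the gradient regions, which is the behavior the pass-rate percentages above summarize in 3D.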
Quantum Gibbs ensemble Monte Carlo
Fantoni, Riccardo; Moroni, Saverio
2014-09-21
We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.
1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO
T. Evans; et al.
2000-08-01
We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.
Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin
2014-08-21
A new method has been developed to generate bending-angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function so that each geometry generated is likely to be accepted. In practice, due to the complexity of this probability density function, a numerical representation of the distribution is required. This numerical table can be generated a priori from the distribution function. The method has been tested on united-atom models of alkanes including propane, 2-methylpropane, and 2,2-dimethylpropane, which are good representatives of both linear and branched molecules. These test cases show that reasonable approximations can be made, especially for the highly branched molecules, to drastically reduce the dimensionality and correspondingly the amount of tabulated data that needs to be stored. Despite these approximations, the dependencies between the various geometrical variables are still well accounted for, as evident from a nearly perfect acceptance rate. For all cases, the bending angles were shown to be sampled correctly by this method, with an acceptance rate of at least 96% for 2,2-dimethylpropane and more than 99% for propane. Since only one trial needs to be generated for each bending angle (instead of the thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. Profiling of our Monte Carlo simulation code shows that trial generation, which used to be the most time-consuming process, is no longer the dominant component of the simulation time.
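The idea of drawing bending-angle trials from the target distribution itself can be sketched as follows: tabulate p(θ) ∝ sin θ · exp(−βU(θ)) once, then sample by inverse-CDF lookup, so essentially every trial would be accepted. The harmonic bending potential and its parameters below are hypothetical stand-ins for a united-atom force field, not the authors' tabulation.

```python
import bisect
import math
import random

random.seed(3)

N = 2000
beta = 1.0
k_bend, theta0 = 50.0, math.radians(114.0)   # hypothetical force-field values

def density(theta):
    # target bending-angle density: Jacobian sin(theta) times Boltzmann factor
    return math.sin(theta) * math.exp(-0.5 * beta * k_bend * (theta - theta0) ** 2)

# tabulate the normalized cumulative distribution once, a priori
thetas = [math.pi * (i + 0.5) / N for i in range(N)]
cdf, acc = [], 0.0
for t in thetas:
    acc += density(t)
    cdf.append(acc)
cdf = [c / acc for c in cdf]

def sample_angle():
    """Draw a bending angle by inverse-CDF table lookup."""
    return thetas[bisect.bisect_left(cdf, random.random())]

draws = [sample_angle() for _ in range(20000)]
mean = sum(draws) / len(draws)
```

One table lookup replaces the thousands of uniformly generated trials of the conventional scheme; the multidimensional case then reduces to how much of the coupling between geometric variables the table retains.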
Chen, X; Xing, L; Luxton, G; Bush, K; Azcona, J
2014-06-01
Purpose: Patient-specific QA for VMAT is incapable of providing full 3D dosimetric information and is labor intensive in the case of severe heterogeneities or small-aperture beams. The cloud-based Monte Carlo dose reconstruction method described here can perform the evaluation over the entire 3D space and rapidly reveal the source of discrepancies between measured and planned dose. Methods: This QA technique consists of two integral parts: measurement using a phantom containing an array of dosimeters, and a cloud-based voxel Monte Carlo algorithm (cVMC). After a VMAT plan was approved by a physician, a dose verification plan was created and delivered to the phantom using our Varian Trilogy or TrueBeam system. Actual delivery parameters (i.e., dose fraction, gantry angle, and MLC positions at control points) were extracted from Dynalog or trajectory files. Based on the delivery parameters, the 3D dose distribution in the phantom containing the detector array was recomputed using Eclipse dose calculation algorithms (AAA and AXB) and cVMC. Comparison and gamma analysis were then conducted to evaluate the agreement between measured, recomputed, and planned dose distributions. To test the robustness of this method, we examined several representative VMAT treatments. Results: (1) The accuracy of cVMC dose calculation was validated via comparative studies. For cases that passed patient-specific QA using commercial dosimetry systems such as Delta-4, MapCHECK, and PTW Seven29 arrays, agreement between cVMC-recomputed, Eclipse-planned, and measured doses was obtained with >90% of the points satisfying the 3%-and-3-mm gamma index criteria. (2) The cVMC method incorporating Dynalog files was effective in revealing the root causes of the dosimetric discrepancies between Eclipse-planned and measured doses and provides a basis for solutions.
Conclusion: The proposed method offers a highly robust and streamlined patient specific QA tool and provides a feasible solution for the rapidly increasing use of VMAT treatments in the clinic.
Domin, D.; Braida, Benoit; Lester Jr., William A.
2008-05-30
This study explores the use of breathing orbital valence bond (BOVB) trial wave functions for diffusion Monte Carlo (DMC). The approach is applied to the computation of the carbon-hydrogen (C-H) bond dissociation energy (BDE) of acetylene. DMC with BOVB trial wave functions yields a C-H BDE of 132.4 ± 0.9 kcal/mol, which is in excellent accord with the recommended experimental value of 132.8 ± 0.7 kcal/mol. These values are to be compared with DMC results obtained with single-determinant trial wave functions, using Hartree-Fock orbitals (137.5 ± 0.5 kcal/mol) and local spin density approximation (LDA) Kohn-Sham orbitals (135.6 ± 0.5 kcal/mol).
NASA Astrophysics Data System (ADS)
Kai, Takeshi; Yokoya, Akinari; Ukai, Masatoshi; Fujii, Kentaro; Watanabe, Ritsuko
2015-10-01
The thermalization length and spatial distribution of electrons in liquid water were simulated for initial electron energies ranging from 0.1 eV to 100 keV using a dynamic Monte Carlo code. The results showed that electrons were decelerated for thermalization over a longer time period than was previously predicted. This long thermalization time significantly contributed to the series of processes from initial ionization to hydration. We further studied the particular deceleration process of electrons at an incident energy of 1 eV, focusing on the temporal evolution of total track length, mean traveling distance, and energy distributions of decelerating electrons. The initial prehydration time and thermalization periods were estimated to be approximately 50 and 220 fs, respectively, indicating that the initial prehydration began before or contemporaneously with the thermal equilibrium. Based on these results, the prehydrated electrons were suggested to play an important role during multiple DNA damage induction.
NASA Astrophysics Data System (ADS)
Hofierka, Jaroslav; Knutová, Monika
2015-04-01
This paper focuses on flash flood assessment using a spatially distributed hydrological model based on the Monte Carlo simulation method. The model is implemented as the r.sim.water module in GRASS GIS and was applied to the Malá Svinka Basin in Eastern Slovakia, where heavy rainfall (100 mm/h) caused a flash flood event with deadly consequences in July 1998. The event was simulated using standard datasets representing elevation, soils and land cover. The results were captured in time series of water depth maps showing gradual changes in water depths across the basin. The hydrological effects of roads in the study area were simulated using the preferential flow feature of the model. This simulation helped to identify source areas contributing to flooding in built-up areas. The implementation in a GIS environment simplifies the data preparation and its modification for various scenarios and flood protection measures. The simulation confirmed the excellent robustness and flexibility of the method.
NASA Astrophysics Data System (ADS)
Truchet, G.; Leconte, P.; Peneliau, Y.; Santamarina, A.; Malvagi, F.
2014-06-01
Pile-oscillation experiments are performed in the MINERVE reactor at CEA Cadarache to improve nuclear data accuracy. In order to precisely calculate the small reactivity variations (<10 pcm) obtained in these experiments, a reference calculation needs to be performed. This calculation may be accomplished with the continuous-energy Monte Carlo code TRIPOLI-4® using the eigenvalue difference method. This "direct" method has shown limitations in the evaluation of very small reactivity effects because it requires reaching a very small variance on the reactivity of both states. To address this problem, it was decided to implement the exact perturbation theory in TRIPOLI-4® and, consequently, to calculate a continuous-energy adjoint flux. The Iterated Fission Probability (IFP) method was chosen because it has shown great results in other Monte Carlo codes. The IFP method uses a forward calculation to compute the adjoint flux; consequently, it does not rely on complex code modifications but on the physical definition of the adjoint flux as a phase-space neutron importance. In the first part of this paper, the IFP method implemented in TRIPOLI-4® is described. To illustrate the efficiency of the method, several adjoint fluxes are calculated and compared with their equivalents obtained with the deterministic code APOLLO-2. The new implementation can also calculate the angular adjoint flux. In the second part, a procedure to carry out an exact perturbation calculation is described. A single-cell benchmark has been used to test the accuracy of the method against the "direct" estimation of the perturbation. Once again the IFP-based method shows good agreement, for a calculation time far shorter than the "direct" method. The main advantage of the method is that the relative accuracy of the reactivity variation does not depend on the magnitude of the variation itself, which allows us to calculate very small reactivity perturbations with high precision.
Other applications of this perturbation method are presented and tested, such as the calculation of exact kinetic parameters (βeff, Λeff) and sensitivity parameters.
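For reference, the exact perturbation expression underlying this approach can be written in standard textbook notation (not necessarily that of TRIPOLI-4®). With the unperturbed and perturbed balance equations, and taking the inner product of the perturbed equation with the unperturbed adjoint flux, the reactivity change follows without any first-order approximation, because the perturbed forward flux appears explicitly:

```latex
A\phi = \lambda F\phi, \qquad A'\phi' = \lambda' F'\phi', \qquad \lambda = \frac{1}{k},
\qquad \Delta A = A' - A, \quad \Delta F = F' - F,
\]
\[
\Delta\rho = \lambda - \lambda'
  = \frac{\left\langle \phi^{\dagger},\,\bigl(\lambda\,\Delta F - \Delta A\bigr)\,\phi' \right\rangle}
         {\left\langle \phi^{\dagger},\, F'\phi' \right\rangle}.
```

This is why only two ingredients are needed in practice: the adjoint (importance) function φ† from the IFP forward calculation, and the perturbed forward flux φ′; the relative accuracy of Δρ is then set by the variance of the inner products, not by the magnitude of Δρ itself.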
NASA Astrophysics Data System (ADS)
Cortés-Giraldo, M. A.; Carabe, A.
2015-04-01
We compare unrestricted dose-averaged linear energy transfer (LET) maps calculated with three different Monte Carlo scoring methods in voxelized geometries irradiated with proton therapy beams. Simulations were done with the Geant4 (Geometry ANd Tracking) toolkit. The first method corresponds to a step-by-step computation of LET which has been reported previously in the literature. We found that this scoring strategy is influenced by spurious high-LET components, whose relative contribution to the dose-averaged LET significantly increases as the voxel size becomes smaller. Dose-averaged LET values calculated for primary protons in water with a voxel size of 0.2 mm were a factor of ~1.8 higher than those obtained with a size of 2.0 mm at the plateau region for a 160 MeV beam. Such high-LET components are a consequence of proton steps in which the condensed-history algorithm determines an energy transfer to an electron of the material close to the maximum value, while the step length remains limited due to voxel boundary crossing. Two alternative methods were derived to overcome this problem. The second scores LET along the entire path described by each proton within the voxel. The third follows the same approach as the first method, but the LET is evaluated at each step from stopping power tables according to the proton kinetic energy. We carried out microdosimetry calculations with the aim of deriving reference dose-averaged LET values from microdosimetric quantities. Significant differences between the methods were found for both pristine and spread-out Bragg peaks (SOBPs). The first method reported values systematically higher than the other two at depths proximal to the SOBP, by about 15% for a 5.9 cm wide SOBP and about 30% for an 11.0 cm one. At the distal SOBP, the second method gave values about 15% lower than the others.
Overall, we found that the third method gave the most consistent performance since it returned stable dose average LET values against simulation parameter changes and gave the best agreement with dose average LET estimations from microdosimetry calculations.
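The voxel-size artifact described above follows directly from the definition of dose-averaged LET as the energy-deposit-weighted mean of the step LET values; the step samples below are invented numbers chosen only to mimic the effect of one boundary-limited, high-LET step in a small voxel.

```python
# Dose-averaged LET: LET_d = sum(eps_i * L_i) / sum(eps_i) over the steps
# contributing to one voxel (eps_i = energy deposited, L_i = step LET).
def dose_averaged_let(steps):
    """steps: list of (energy_deposited, LET) pairs within one voxel."""
    total = sum(e for e, _ in steps)
    return sum(e * L for e, L in steps) / total

normal = [(0.8, 1.0), (0.7, 1.2)]         # ordinary condensed-history steps
spurious = normal + [(0.1, 30.0)]         # one boundary-limited high-LET step

print(dose_averaged_let(normal))    # ~1.09
print(dose_averaged_let(spurious))  # ~2.90
```

A single short step carrying a near-maximal energy transfer nearly triples the voxel's dose-averaged LET, which is why evaluating L from stopping-power tables (the third method) stabilizes the score.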
Cortes-Giraldo, M A; Carabe-Fernandez, A
2014-06-01
Purpose: To evaluate the differences in dose-averaged linear energy transfer (LETd) maps calculated in water by means of different strategies found in the literature for proton therapy Monte Carlo simulations, and to compare their values with dose-mean lineal energy microdosimetry calculations. Methods: The Geant4 toolkit (version 9.6.2) was used. Dose and LETd maps in water were scored for primary protons with cylindrical voxels defined around the beam axis. Three LETd calculation methods were implemented. First, the LETd values were computed by calculating the unrestricted linear energy transfer (LET) associated with each single step, weighted by the energy deposition (including delta rays) along the step. Second, the LETd was obtained for each voxel by computing the LET along all the steps simulated for each proton track within the voxel, weighted by the energy deposition of those steps. Third, the LETd was scored as the quotient of the second moment of the LET distribution, calculated per proton track, over the first moment. These calculations were made with various voxel thicknesses (0.2-2.0 mm) for a 160 MeV proton beamlet and spread-out Bragg peaks (SOBPs). The dose-mean lineal energy was calculated in a uniformly irradiated water sphere of 0.005 mm radius. Results: The value of the LETd changed systematically with the voxel thickness due to delta-ray emission and the enlargement of the LET distribution spread, especially at shallow depths. Differences of up to a factor of 1.8 were found at the depth of maximum dose, leading to similar differences at the central and distal depths of the SOBPs. The third LETd calculation method gave better agreement with microdosimetry calculations around the Bragg peak. Conclusion: Significant differences were found between LETd map Monte Carlo calculations due to both the calculation strategy and the voxel thickness used. This could have a significant impact on radiobiologically optimized proton therapy treatments.
A. Putze; L. Derome; D. Maurin; L. Perotto; R. Taillet
2009-01-21
Propagation of charged cosmic rays in the Galaxy depends on the transport parameters, whose number can be large depending on the propagation model under scrutiny. A standard approach for determining these parameters is a manual scan, leading to an inefficient and incomplete coverage of the parameter space. We implement a Markov Chain Monte Carlo (MCMC) method, which is well suited to multi-parameter determination. Its specificities (burn-in length, acceptance, and correlation length) are discussed in the phenomenologically well-understood Leaky-Box Model. From a technical point of view, a trial function based on binary-space partitioning is found to be extremely efficient, allowing a simultaneous determination of up to nine parameters, including transport and source parameters such as slope and abundances. Our best-fit model includes both a low-energy cut-off and reacceleration, whose values are consistent with those found in diffusion models. A Kolmogorov spectrum for the diffusion slope (delta = 1/3) is excluded. The marginalised probability-density functions for delta and alpha (the slope of the source spectra) give delta ~ 0.55-0.60 and alpha ~ 2.14-2.17, depending on the dataset used and the number of free parameters in the fit. All source-spectrum parameters (slope and abundances) are positively correlated among themselves and with the reacceleration strength, but are negatively correlated with the other propagation parameters. A forthcoming study will extend our analysis to more physical diffusion models.
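The chain diagnostics discussed above (burn-in length, acceptance, correlation length) can be illustrated on a 1-D standard-normal toy target; the target, proposal width, starting point, and burn-in cut are arbitrary choices for the demonstration, unrelated to the cosmic-ray propagation posterior.

```python
import math
import random

random.seed(4)

def log_target(x):
    # standard normal log-density (up to a constant)
    return -0.5 * x * x

x, accepted, chain = 5.0, 0, []        # deliberately bad starting point
for _ in range(20000):
    y = x + random.gauss(0.0, 1.0)     # random-walk proposal
    if math.log(1.0 - random.random()) < log_target(y) - log_target(x):
        x, accepted = y, accepted + 1
    chain.append(x)

acceptance = accepted / len(chain)
samples = chain[2000:]                 # discard burn-in
mean = sum(samples) / len(samples)

def autocorr(s, lag):
    m = sum(s) / len(s)
    num = sum((s[i] - m) * (s[i + lag] - m) for i in range(len(s) - lag))
    den = sum((v - m) ** 2 for v in s)
    return num / den

# correlation length: first lag where the autocorrelation drops below 1/e
corr_len = next(l for l in range(1, 200) if autocorr(samples, l) < math.exp(-1))
```

The same three numbers (acceptance, burn-in, correlation length) are what make a binary-space-partitioning trial function attractive: a better-adapted proposal raises acceptance and shortens the correlation length, so fewer samples are needed per effective draw.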
NASA Astrophysics Data System (ADS)
Perez, J. A.; Olson, R. E.
1999-06-01
We have developed a classical trajectory Monte Carlo code for use in the study of collisions between highly charged ions and systems with multiple targets, such as surfaces. We have simulated a collision between the bare ions C6+, Kr36+, Ne10+, Ar18+, and Xe54+, and a configuration of approximately 400 individual atoms. The projectile has an initial energy of 0.25 keV/u with the velocity perpendicular to the surface. To simulate a simplified surface, the target atoms are held in a simple cubic lattice arrangement by the use of Morse potentials between target nuclei. Each target nucleus has one electron with a binding energy of 12 eV initially localized about it. Initial conditions of the electrons are restricted to represent the 2p electrons of LiF anions. The forces between all particles are calculated at each step in the simulation and the trajectory of every particle is followed. Results for the critical capture radius and the principal quantum numbers are shown. Details of the capture of the first three electrons by Ar18+ as it approaches the surface are given.
NASA Astrophysics Data System (ADS)
De Backer, A.; Adjanor, G.; Domain, C.; Lescoat, M. L.; Jublot-Leclerc, S.; Fortuna, F.; Gentils, A.; Ortiz, C. J.; Souidi, A.; Becquart, C. S.
2015-06-01
Implantation of 10 keV helium in 316L steel thin foils was performed in the JANNuS-Orsay facility and modeled using a multiscale approach. Density Functional Theory (DFT) atomistic calculations [1] were used to obtain the properties of He and He-vacancy clusters, and the Binary Collision Approximation based code MARLOWE was applied to determine the damage and He-ion depth profiles as in [2,3]. The processes involved in homogeneous He bubble nucleation and growth were defined and implemented in the Object Kinetic Monte Carlo code LAKIMOCA [4]. In particular, as the He to dpa ratio was high, self-trapping of He clusters and the trap mutation of He-vacancy clusters had to be taken into account. With this multiscale approach, the formation of bubbles was modeled up to nanometer-scale sizes, where bubbles can be observed by Transmission Electron Microscopy. Their densities and sizes were studied as functions of fluence (up to 5 × 1019 He/m2) at two temperatures (473 and 723 K) and for different sample thicknesses (25-250 nm). It appears that the damage is not only due to the collision cascades but is also strongly controlled by the He accumulation in pressurized bubbles. Comparison with experimental data is discussed and reasonable agreement is achieved.
Ganesh, Panchapakesan; Kim, Jeongnim; Park, Changwon; Yoon, Mina; Reboredo, Fernando A; Kent, Paul R
2014-01-01
Highly accurate diffusion quantum Monte Carlo (QMC) studies of the adsorption and diffusion of atomic lithium in AA-stacked graphite are compared with van der Waals-including density functional theory (DFT) calculations. Predicted QMC lattice constants for pure AA graphite agree with experiment. Pure AA-stacked graphite is shown to challenge many van der Waals methods even when they are accurate for conventional AB graphite. Highest overall DFT accuracy, considering pure AA-stacked graphite as well as lithium binding and diffusion, is obtained by the self-consistent van der Waals functional vdW-DF2, although errors in binding energies remain. Empirical approaches based on point charges such as DFT-D are inaccurate unless the local charge transfer is assessed. The results demonstrate that the lithium carbon system requires a simultaneous highly accurate description of both charge transfer and van der Waals interactions, favoring self-consistent approaches.
NASA Astrophysics Data System (ADS)
Groves, Chris; Kimber, Robin G. E.; Walker, Alison B.
2010-10-01
In this letter we evaluate the accuracy of the first reaction method (FRM) as commonly used to reduce the computational complexity of mesoscale Monte Carlo simulations of geminate recombination and of the performance of organic photovoltaic devices. A wide range of carrier mobilities, degrees of energetic disorder, and applied electric fields are considered. For the ranges of energetic disorder relevant to most polyfluorene, polythiophene, and alkoxy poly(phenylene vinylene) materials used in organic photovoltaics, the geminate separation efficiency predicted by the FRM agrees with the exact model to better than 2%. We additionally comment on the effects of equilibration on low-field geminate separation efficiency, and in doing so emphasize the influence that the energy at which geminate carriers are created has on their subsequent behavior.
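The first reaction method discussed in this abstract is a standard kinetic Monte Carlo scheme: draw an exponential waiting time for every candidate event and execute the one that occurs first. A minimal Python sketch (the two-event setup and rate values below are illustrative, not taken from the letter):

```python
import random

def first_reaction_step(rates, rng):
    """First reaction method: draw an exponential waiting time for each
    candidate event (time ~ Exp(rate)) and execute the earliest one.
    Returns (elapsed_time, index_of_event_that_fired)."""
    waits = [(rng.expovariate(k), i) for i, k in enumerate(rates) if k > 0]
    t, event = min(waits)
    return t, event
```

Over many steps, each event fires with probability proportional to its rate (rate divided by the total rate), which is why the FRM reproduces the exact dynamics when waiting times are re-drawn correctly after each event.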
NASA Astrophysics Data System (ADS)
Mazoochi, Alireza; Rahmani, Faezeh; Abbasi Davani, Freydoun; Ghaderi, Ruhollah
2014-11-01
One of the methods for material inspection is the dual-energy X-ray technique. Although this method is useful for distinguishing materials, the signal intensities still depend on the thicknesses of the materials in front of the detector, so the material identification results may be affected. In this paper, a new technique based on the composite Simpson numerical integration method is introduced to eliminate this confounding effect of material thickness in the image. The method has been evaluated for materials such as aluminum and plastic. Calculations were performed using the MCNP4C code to obtain the X-ray intensity received at the detectors. MATLAB was also used for the calculations that remove the effect of thickness and optimize the system performance. Results show good performance in identifying materials independently of their thicknesses. The standard deviation of the R parameter, a common identification parameter, improved from 0.613 to 0.0557 for aluminum and from 0.3043 to 0.0288 for plastic. This method provides an approximation of the X-ray attenuation at two X-ray energies instead of two energy spectra, which greatly improves the material identification.
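Composite Simpson's rule itself is standard numerical quadrature; a minimal Python sketch (a generic implementation, not the authors' MATLAB code):

```python
import math

def composite_simpson(f, a, b, n):
    """Approximate the integral of f over [a, b] using composite
    Simpson's rule with n (even) subintervals."""
    if n % 2 or n < 2:
        raise ValueError("n must be a positive even integer")
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))  # odd nodes
    total += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior nodes
    return total * h / 3.0
```

In the context described here, f would be an attenuation integrand (for example an exponential transmission term integrated over an energy band); the quadrature itself is independent of that choice.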
NASA Astrophysics Data System (ADS)
Zhang, Shixun; Yamagia, Shinichi; Yunoki, Seiji
2013-08-01
Models of fermions interacting with classical degrees of freedom are applied to a large variety of systems in condensed matter physics. For this class of models, Weiße [Phys. Rev. Lett. 102, 150604 (2009)] has recently proposed a very efficient numerical method, called the O(N) Green-Function-Based Monte Carlo (GFMC) method, in which a kernel polynomial expansion technique is used to avoid the full numerical diagonalization of the fermion Hamiltonian matrix of size N, which usually costs O(N³) operations. Motivated by this background, in this paper we apply the GFMC method to the double exchange model in three spatial dimensions. We mainly focus on the implementation of the GFMC method using both MPI on a CPU-based cluster and Nvidia's Compute Unified Device Architecture (CUDA) programming techniques on a GPU-based (Graphics Processing Unit based) cluster. The time complexity of the algorithm and the parallel implementation details on the clusters are discussed. We also show the performance scaling for increasing Hamiltonian matrix size and for increasing number of nodes. The performance evaluation indicates that for a 32³ Hamiltonian a single GPU achieves performance equivalent to more than 30 CPU cores parallelized using MPI.
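The kernel polynomial idea underlying GFMC replaces diagonalization by Chebyshev moments of the (rescaled) Hamiltonian, estimated stochastically with matrix-vector products. A rough NumPy sketch of the moment estimation only (the Hamiltonian is assumed already rescaled so its spectrum lies in [-1, 1]; names and defaults are illustrative):

```python
import numpy as np

def chebyshev_moments(H, n_moments, n_vec=10, rng=None):
    """Stochastically estimate mu_n = Tr[T_n(H)] / N using random
    +/-1 vectors and the Chebyshev recurrence
    T_{n+1}(H) r = 2 H T_n(H) r - T_{n-1}(H) r."""
    rng = rng if rng is not None else np.random.default_rng(0)
    N = H.shape[0]
    mu = np.zeros(n_moments)
    for _ in range(n_vec):
        r = rng.choice([-1.0, 1.0], size=N)   # random +/-1 probe vector
        t0, t1 = r, H @ r                     # T_0(H) r and T_1(H) r
        mu[0] += r @ t0
        mu[1] += r @ t1
        for n in range(2, n_moments):
            t0, t1 = t1, 2.0 * (H @ t1) - t0  # Chebyshev recurrence step
            mu[n] += r @ t1
    return mu / (n_vec * N)
```

Each moment costs one sparse matrix-vector product per probe vector, which is what reduces the O(N³) diagonalization to roughly linear scaling and maps naturally onto MPI or CUDA parallelism.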
Yang, Y M; Bednarz, B; Zankowski, C; Svatos, M
2014-06-15
Purpose: The advent of on-line/off-line adaptive and biologically-conformal radiation therapy has led to a need for treatment planning solutions that utilize voxel-specific penalties, perform optimization over a large solution space quickly, and calculate the dose in each voxel accurately. This work proposes a “passive” optimization framework that executes concurrently with Monte Carlo dose calculation, evaluates the cost/benefit of each history during transport, and creates a passively optimized fluence map. Methods: The Monte Carlo code Geant4 v9.6 was used for this study. The standard voxel geometry implementation was modified to support the passive optimization framework, with voxel-specific optimization parameters. Dose-benefit functions, which increase a particle history’s weight upon dose deposition, were defined in a central collection of voxels to effectively create target structures. Histories that deposit energy in voxels are reweighted based on the voxel’s dose multiplied by its cost/benefit value. Upon full termination of each history, the dose contributions of that history are reweighted to reflect a contribution proportional to the history’s final weight. A parallel-planar 1.25 MeV photon fluence is transported through the geometry and reweighted at each dose deposition step. The resulting weight is tallied with the incident spatial and directional coordinates in a phase-space distribution. Results: A uniform incident fluence was reweighted during MC dose calculations to create an optimized fluence map that generates dose profiles in target volumes exhibiting the same dose characteristics as the prescribed optimization parameters. An optimized dose profile, calculated concurrently with the phase-space, reflects the resulting dose distribution. Conclusion: This study demonstrated the feasibility of passively optimizing an incident fluence map during Monte Carlo dose calculations.
The flexibility of the voxel-specific optimization framework allows a variety and combination of optimization parameters to be calculated for each voxel at every transport step. This work is partially supported by Varian and by the NIH.
NASA Astrophysics Data System (ADS)
Brochart, David; Andréassian, Vazken
2015-04-01
Precipitation is known to exhibit a high spatial variability. For this reason, raingage measurements, which provide only local information about rainfall, may not be appropriate to estimate areal rainfall. On the other hand, catchments have the ability to aggregate rainfall over their area and route it to a unique point - the outlet - where it can be easily measured. A catchment can thus be viewed as a large raingage, with the difference that what is measured at the outlet is a complex transformation of the rainfall. In this communication, we propose to use a model of this transformation (a so-called rainfall-runoff model) and to infer rainfall from an observed streamflow using a Monte Carlo method. We apply the method to 202 catchments in France and compare the inferred rainfall with areal raingage-based rainfall measurements. We show that the accuracy of the inferred rainfall directly depends on the accuracy of the rainfall-runoff model. Potential applications of this method include rainfall estimation in poorly gaged areas, correction of uncertain rainfall estimates (e.g. satellite-based rainfall estimates), as well as historical reconstruction of rainfall based on streamflow measurements.
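The inference step can be illustrated with a toy Monte Carlo rejection sampler: propose rainfall values, run them through a rainfall-runoff model, and keep those whose simulated streamflow matches the observation. The linear runoff model and every number below are illustrative assumptions, not the actual model applied to the 202 French catchments:

```python
import random

def runoff(rain, coeff=0.6):
    """Toy rainfall-runoff model (illustrative): streamflow = coeff * rainfall."""
    return coeff * rain

def infer_rainfall(q_obs, n_samples=100000, tol=0.1, seed=0):
    """Monte Carlo rejection: propose rainfall values uniformly, keep those
    whose modeled streamflow falls within tol of the observation, and
    return the mean of the accepted proposals as the rainfall estimate."""
    rng = random.Random(seed)
    accepted = [r for r in (rng.uniform(0.0, 100.0) for _ in range(n_samples))
                if abs(runoff(r) - q_obs) < tol]
    return sum(accepted) / len(accepted)
```

With a realistic rainfall-runoff model in place of the linear stand-in, the spread of the accepted proposals also quantifies the uncertainty of the inferred rainfall, which is the point made in the abstract about model accuracy.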
Bernardin, Frederick E
2007-01-01
Understanding the structure of materials, and how this structure affects their properties, is an important step towards the understanding that is necessary in order to apply computational methods to the end of designing ...
Bergstrom, Paul M. (Livermore, CA); Daly, Thomas P. (Livermore, CA); Moses, Edward I. (Livermore, CA); Patterson, Jr., Ralph W. (Livermore, CA); Schach von Wittenau, Alexis E. (Livermore, CA); Garrett, Dewey N. (Livermore, CA); House, Ronald K. (Tracy, CA); Hartmann-Siantar, Christine L. (Livermore, CA); Cox, Lawrence J. (Los Alamos, NM); Fujino, Donald H. (San Leandro, CA)
2000-01-01
A system and method are disclosed for radiation dose calculation within sub-volumes of a particle transport grid. In a first step of the method, voxel volumes enclosing a first portion of the target mass are received. A second step in the method defines dosel volumes which enclose a second portion of the target mass and overlap the first portion. A third step in the method calculates common volumes between the dosel volumes and the voxel volumes. A fourth step in the method identifies locations in the target mass of energy deposits. And a fifth step in the method calculates radiation doses received by the target mass within the dosel volumes. A common volume calculation module inputs voxel volumes enclosing a first portion of the target mass, inputs voxel mass densities corresponding to a density of the target mass within each of the voxel volumes, defines dosel volumes which enclose a second portion of the target mass and overlap the first portion, and calculates common volumes between the dosel volumes and the voxel volumes. A dosel mass module multiplies the common volumes by corresponding voxel mass densities to obtain incremental dosel masses, and adds the incremental dosel masses corresponding to the dosel volumes to obtain dosel masses. A radiation transport module identifies locations in the target mass of energy deposits. And a dose calculation module, coupled to the common volume calculation module and the radiation transport module, calculates radiation doses received by the target mass within the dosel volumes.
Energy Science and Technology Software Center (ESTSC)
2010-10-20
The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying of particle information, and particle destruction. Particles are also traded among processors using MPI calls.
Baptista, A M; Martel, P J; Soares, C M
1999-01-01
A new method is presented for simulating the simultaneous binding equilibrium of electrons and protons on protein molecules, which makes it possible to study the full equilibrium thermodynamics of redox and protonation processes, including electron-proton coupling. The simulations using this method directly reflect the pH and electrostatic potential of the environment, thus providing a much closer and more realistic connection with experimental parameters than do usual methods. By ignoring the full binding equilibrium, calculations usually overlook the twofold effect that binding fluctuations have on the behavior of redox proteins: first, they affect the energy of the system by creating partially occupied sites; second, they affect its entropy by introducing an additional empty/occupied site disorder (here named occupational entropy). The proposed method is applied to cytochrome c3 of Desulfovibrio vulgaris Hildenborough to study its redox properties and electron-proton coupling (redox-Bohr effect), using a continuum electrostatic method based on the linear Poisson-Boltzmann equation. Unlike previous studies using other methods, the full reduction order of the four hemes at physiological pH is successfully predicted. The sites most strongly involved in the redox-Bohr effect are identified by analysis of their titration curves/surfaces and the shifts of their midpoint redox potentials and pKa values. Site-site couplings are analyzed using statistical correlations, a method much more realistic than the usual analysis based on direct interactions. The site found to be most strongly involved in the redox-Bohr effect is propionate D of heme I, in agreement with previous studies; other likely candidates are His67, the N-terminus, and propionate D of heme IV. Even though the present study is limited to equilibrium conditions, the possible role of binding fluctuations in the concerted transfer of protons and electrons under nonequilibrium conditions is also discussed.
The occupational entropy contributions to midpoint redox potentials and pKa values are computed and shown to be significant. PMID:10354425
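The occupational entropy named in this abstract can be illustrated, for a single independent site, by the usual two-state mixing-entropy formula; this is a generic sketch of the concept, not the authors' continuum-electrostatics code (real sites are coupled, so the paper computes the entropy from the full binding simulation):

```python
import math

def occupational_entropy(p, k_b=1.0):
    """Mixing entropy of one empty/occupied site with mean occupancy p:
    S = -k_B [p ln p + (1 - p) ln(1 - p)]."""
    if p <= 0.0 or p >= 1.0:
        return 0.0  # a site that is always empty or always occupied carries no disorder
    return -k_b * (p * math.log(p) + (1.0 - p) * math.log(1.0 - p))
```

The entropy is maximal (k_B ln 2) at half occupancy and vanishes for fully empty or fully occupied sites, which is why partially titrating sites dominate the occupational-entropy contributions to midpoint potentials and pKa values.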
Present status of vectorized Monte Carlo
Brown, F.B.
1987-01-01
Monte Carlo applications have traditionally been limited by the large amounts of computer time required to produce acceptably small statistical uncertainties, so the immediate benefit of vectorization is an increase in either the number of jobs completed or the number of particles processed per job, typically by one order of magnitude or more. This results directly in improved engineering design analyses, since Monte Carlo methods are used as standards for correcting more approximate methods. The relatively small number of vectorized programs is a consequence of the newness of vectorized Monte Carlo, the difficulties of nonportability, and the very large development effort required to rewrite or restructure Monte Carlo codes for vectorization. Based on the successful efforts to date, it may be concluded that Monte Carlo vectorization will spread to increasing numbers of codes and applications. The possibility of multitasking provides even further motivation for vectorizing Monte Carlo, since the step from vector to multitasked vector is relatively straightforward.
Khosravi, H.; Hashemi, B.; Mahdavi, S. R.; Hejazi, P.
2015-01-01
Background: Gel polymers are considered as new dosimeters for determining radiotherapy dose distributions in three dimensions. Objective: The ability of a new formulation of MAGIC-f polymer gel was assessed by experimental measurement and the Monte Carlo (MC) method for studying the effect of gold nanoparticles (GNPs) on prostate dose distributions under internal Ir-192 and external 18 MV radiotherapy practices. Method: A Plexiglas phantom was made representing the human pelvis. The GNPs, having 15 nm diameter and 0.1 mM concentration, were synthesized using the chemical reduction method. Then, a new formulation of MAGIC-f gel was synthesized. The fabricated gel was poured into the tubes located at the prostate (with and without the GNPs) and bladder locations of the phantom. The phantom was irradiated with an Ir-192 source and an 18 MV beam of a Varian linac separately, based on common radiotherapy procedures used for prostate cancer. After 24 hours, the irradiated gels were read using a Siemens 1.5 Tesla MRI scanner. The absolute doses at the reference points and the isodose curves resulting from the experimental measurements of the gels and the MC simulations following the internal and external radiotherapy practices were compared. Results: The mean absorbed doses measured with the gel in the presence of the GNPs in the prostate were 15% and 8% higher than the corresponding values without the GNPs under the internal and external radiation therapies, respectively. MC simulations also indicated dose increases of 14% and 7% due to the presence of the GNPs for the same internal and external radiotherapy practices, respectively. Conclusion: There was good agreement between the dose enhancement factors (DEFs) estimated with MC simulations and experimental gel measurements due to the GNPs.
The results indicated that the polymer gel dosimetry method, as developed and used in this study, can be recommended as a reliable method for investigating the DEF of GNPs in internal and external radiotherapy practices. PMID:25973406
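The dose enhancement factor compared here is, by the conventional definition (which the abstract does not spell out), simply the ratio of dose with nanoparticles to dose without:

```python
def dose_enhancement_factor(dose_with_gnp, dose_without_gnp):
    """DEF = dose measured with nanoparticles / dose measured without
    (conventional definition, assumed here)."""
    return dose_with_gnp / dose_without_gnp
```

On this definition, the 15% internal-therapy dose increase reported above corresponds to a DEF of 1.15.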
Raftery, Adrian
SOCIOLOGICAL METHODS & RESEARCH (Lewis, Raftery / Fertility Decline). This article describes an analysis of marital fertility decline. Data collected during the World Fertility Study in Iran are analyzed, enabling the authors to conclude that Iran's fertility decline was primarily a period effect.
NASA Astrophysics Data System (ADS)
Roé-Vellvé, N.; Pino, F.; Falcon, C.; Cot, A.; Gispert, J. D.; Marin, C.; Pavía, J.; Ros, D.
2014-08-01
SPECT studies with 123I-ioflupane facilitate the diagnosis of Parkinson’s disease (PD). The effect of image degradations on quantification has been extensively evaluated in human studies, but their impact on studies of experimental PD models is still unclear. The aim of this work was to assess the effect of compensating for the degrading phenomena on the quantification of small animal SPECT studies using 123I-ioflupane. This assessment enabled us to evaluate the feasibility of quantitatively detecting small pathological changes using different reconstruction methods and levels of compensation for the image degrading phenomena. Monte Carlo simulated studies of a rat phantom were reconstructed and quantified. Compensations for point spread function (PSF), scattering, attenuation and partial volume effect were progressively included in the quantification protocol. A linear relationship was found between calculated and simulated specific uptake ratio (SUR) in all cases. In order to significantly distinguish disease stages, noise reduction during the reconstruction process was the most relevant factor, followed by PSF compensation. The smallest detectable SUR interval was determined by biological variability rather than by image degradations or coregistration errors. The quantification methods that gave the best results allowed us to distinguish PD stages with SUR values differing by as little as 0.5, using groups of six rats to represent each stage.
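The specific uptake ratio used for quantification in ioflupane studies is conventionally the background-corrected target-to-reference activity ratio; a one-line sketch assuming that standard definition (which the abstract does not state explicitly):

```python
def specific_uptake_ratio(c_target, c_reference):
    """SUR = (C_target - C_reference) / C_reference, with C the mean
    activity concentration in the target (striatal) and reference regions."""
    return (c_target - c_reference) / c_reference
```

On this definition, distinguishing stages whose SUR values differ by 0.5 means resolving, for example, a target region with 3.0 units of activity from one with 2.0 against a reference of 2.0.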
Persliden, J; Carlsson, G A
1997-01-01
The air gap technique is an old method for scatter rejection. It is still used in lung examinations and may be reconsidered for use in digital radiography. With magnification techniques, for example in mammography, the air gap thereby introduced simultaneously yields scatter rejection. A Monte Carlo collision density method is exploited to investigate the physical parameters relevant to this technique. Radiation quantities of scattered photons at points behind a water slab, both on and laterally displaced from the central axis, are calculated and their dependence on field area, slab thickness, air gap length and detector type is derived. Values of 'scatter-to-primary' ratios of the plane energy fluence (the energy imparted to a totally absorbing detector) are given for perpendicularly incident 30, 70 and 130 kV energy spectra, slab thicknesses of 0.05 and 0.2 m (30 kV: 0.05 m), air gaps of length 0.002-1.0 m and field areas from 8 × 10⁻⁵ to 0.3 m². Contrast degradation factors are derived for both totally absorbing and thin detectors. The influence on the scatter-to-primary ratios of using divergent instead of parallel beams and of neglecting molecular interference in coherent scattering is analysed. PMID:9015816
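For the totally absorbing detector case, the contrast degradation factor follows directly from the scatter-to-primary ratio via the standard relation; a minimal sketch (this is the textbook formula, not the paper's full derivation, which also covers thin detectors):

```python
def contrast_degradation_factor(scatter_to_primary):
    """CDF = 1 / (1 + S/P): the contrast measured in the presence of
    scatter relative to the scatter-free contrast."""
    return 1.0 / (1.0 + scatter_to_primary)
```

For example, a scatter-to-primary ratio of 1.0 (equal scattered and primary energy fluence at the detector) halves the available contrast, which is why longer air gaps, by reducing S/P, restore contrast.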
Matuttis, Hans-Georg; Wang, Xiaoxing
2015-03-10
Decomposition methods of the Suzuki-Trotter type of various orders have been derived in different fields. Applying them both to classical ordinary differential equations (ODEs) and to quantum systems makes it possible to judge their effectiveness and gives new insights for many-body quantum mechanics, where reference data are scarce. Further, based on data for a 6 × 6 system, we conclude that sampling with sign (the minus-sign problem) is probably detrimental to the accuracy of fermionic simulations with determinant algorithms.
Chetty, Indrin J; Rosu, Mihaela; McShan, Daniel L; Fraass, Benedick A; Balter, James M; Ten Haken, Randall K
2004-04-01
We have applied convolution methods to account for some of the effects of respiratory induced motion in clinical treatment planning of the lung. The 3-D displacement of the GTV center-of-mass (COM) as determined from breath-hold exhale and inhale CT scans was used to approximate the breathing induced motion. The time-course of the GTV-COM was estimated using a probability distribution function (PDF) previously derived from diaphragmatic motion [Med. Phys. 26, 715-720 (1990)] but also used by others for treatment planning in the lung [Int. J. Radiat. Oncol., Biol., Phys. 53, 822-834 (2002); Med. Phys. 30, 1086-1095 (2003)]. We have implemented fluence and dose convolution methods within a Monte Carlo based dose calculation system with the intent of comparing these approaches for planning in the lung. All treatment plans in this study have been calculated with Monte Carlo using the breath-hold exhale CT data sets. An analysis of treatment plans for 3 patients showed substantial differences (hot and cold spots consistently greater than ±15%) between the motion convolved and static treatment plans. As fluence convolution accounts for the spatial variance of the dose distribution in the presence of tissue inhomogeneities, the doses were approximately 5% greater than those calculated with dose convolution in the vicinity of the lung. DVH differences between the static, fluence and dose convolved distributions for the CTV were relatively small; however, larger differences were observed for the PTV. An investigation of the effect of the breathing PDF asymmetry on the motion convolved dose distributions showed that reducing the asymmetry resulted in increased hot and cold spots in the motion convolved distributions relative to the static cases. In particular, changing from an asymmetric breathing function to one that is symmetric results in an increase in the hot/cold spots of ±15% relative to the static plan.
This increase is not unexpected considering that the target spends relatively more time at inhale as the asymmetry decreases (note that the treatment plans were generated using the exhale CT scans). PMID:15125011
D.L. Henderson; S. Yoo; M. Kowalok; T.R. Mackie; B.R. Thomadsen
2001-10-30
The goal of this project is to investigate the use of the adjoint method, commonly used in the reactor physics community, for the optimization of radiation therapy patient treatment plans. Two different types of radiation therapy are being examined: interstitial brachytherapy and external beam radiotherapy. In brachytherapy, radioactive sources are surgically implanted within the diseased organ, such as the prostate, to treat the cancerous tissue. With external beam radiotherapy, the x-ray source is usually located at a distance of about 1 metre from the patient and focused on the treatment area. For brachytherapy, the optimization phase of the treatment plan consists of determining the optimal placement of the radioactive sources, which delivers the prescribed dose to the diseased tissue while simultaneously sparing (reducing) the dose to sensitive tissue and organs. For external beam radiation therapy, the optimization phase of the treatment plan consists of determining the optimal direction and intensity of the beam, which provides complete coverage of the tumor region with the prescribed dose while simultaneously avoiding sensitive tissue areas. For both therapy methods, the optimal treatment plan is one in which the diseased tissue has been treated with the prescribed dose and the dose to the sensitive tissue and organs has been kept to a minimum.
Fang Yuan; Badal, Andreu; Allec, Nicholas; Karim, Karim S.; Badano, Aldo
2012-01-15
Purpose: The authors describe a detailed Monte Carlo (MC) method for the coupled transport of ionizing particles and charge carriers in amorphous selenium (a-Se) semiconductor x-ray detectors, and model the effect of statistical variations on the detected signal. Methods: A detailed transport code was developed for modeling the signal formation process in semiconductor x-ray detectors. The charge transport routines include three-dimensional spatial and temporal models of electron-hole pair transport taking into account recombination and trapping. Many electron-hole pairs are created simultaneously in bursts from energy deposition events. Carrier transport processes include drift due to the external field and Coulombic interactions, and diffusion due to Brownian motion. Results: Pulse-height spectra (PHS) have been simulated with different transport conditions for a range of monoenergetic incident x-ray energies and mammography radiation beam qualities. Two methods for calculating Swank factors from simulated PHS are shown, one using the entire PHS distribution and the other using the photopeak. The latter ignores contributions from Compton scattering and K-fluorescence. Simulations and experimental measurements differ by approximately 2%. Conclusions: The a-Se x-ray detector PHS responses simulated in this work include three-dimensional spatial and temporal transport of electron-hole pairs. These PHS were used to calculate the Swank factor and compare it with experimental measurements. The Swank factor was shown to be a function of x-ray energy and applied electric field. Trapping and recombination models are both shown to affect the Swank factor.
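The Swank factor is conventionally computed from the moments of the pulse-height spectrum; a sketch of that standard moment formula for a binned spectrum (the binning and variable names are illustrative):

```python
import numpy as np

def swank_factor(counts, energies):
    """Swank factor I = M1^2 / (M0 * M2), where M_n is the n-th moment
    of the pulse-height spectrum: M_n = sum_i counts_i * E_i^n."""
    counts = np.asarray(counts, dtype=float)
    energies = np.asarray(energies, dtype=float)
    m0 = counts.sum()
    m1 = (counts * energies).sum()
    m2 = (counts * energies ** 2).sum()
    return m1 ** 2 / (m0 * m2)
```

A monoenergetic (delta-function) detector response gives I = 1; any spectral spread, such as the Compton and K-fluorescence tails mentioned above, pushes the factor below 1.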
Thorn, Graeme J; King, John R
2016-01-01
The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters) and profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the number of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. PMID:26561777
Wang, Huiyuan; Mo, H. J.; Yang, Xiaohu; Lin, W. P.; Jing, Y. P.
2014-10-10
Simulating the evolution of the local universe is important for studying galaxies and the intergalactic medium in a way free of cosmic variance. Here we present a method to reconstruct the initial linear density field from an input nonlinear density field, employing the Hamiltonian Markov Chain Monte Carlo (HMC) algorithm combined with Particle-Mesh (PM) dynamics. The HMC+PM method is applied to cosmological simulations, and the reconstructed linear density fields are then evolved to the present day with N-body simulations. These constrained simulations accurately reproduce both the amplitudes and phases of the input simulations at various z. Using a PM model with a grid cell size of 0.75 h⁻¹ Mpc and 40 time steps in the HMC can recover more than half of the phase information down to a scale k ≈ 0.85 h Mpc⁻¹ at high z and to k ≈ 3.4 h Mpc⁻¹ at z = 0, which represents a significant improvement over similar reconstruction models in the literature and indicates that our model can reconstruct the formation histories of cosmic structures over a large dynamical range. Adopting PM models with higher spatial and temporal resolutions yields even better reconstructions, suggesting that our method is limited more by the availability of computing resources than by principle. Dynamic models of structure evolution adopted in many earlier investigations can induce non-Gaussianity in the reconstructed linear density field, which in turn can cause large systematic deviations in the predicted halo mass function. Such deviations are greatly reduced or absent in our reconstruction.
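The HMC core referenced here is leapfrog integration of Hamiltonian dynamics followed by a Metropolis accept/reject step. A minimal sketch on a toy Gaussian target (the paper couples this machinery to particle-mesh dynamics, which is not reproduced here; step size, trajectory length and the target are illustrative):

```python
import numpy as np

def leapfrog(q, p, grad_u, eps, n_steps):
    """Leapfrog-integrate Hamiltonian dynamics for potential energy U(q)."""
    p = p - 0.5 * eps * grad_u(q)          # initial half-step for momentum
    for _ in range(n_steps - 1):
        q = q + eps * p
        p = p - eps * grad_u(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_u(q)          # final half-step for momentum
    return q, p

def hmc(u, grad_u, q0, n_samples=4000, eps=0.2, n_steps=10, seed=0):
    """Hamiltonian Monte Carlo with a Metropolis correction."""
    rng = np.random.default_rng(seed)
    q = np.asarray(q0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(q.shape)   # fresh Gaussian momentum
        h_old = u(q) + 0.5 * p @ p
        q_new, p_new = leapfrog(q, p, grad_u, eps, n_steps)
        h_new = u(q_new) + 0.5 * p_new @ p_new
        if rng.random() < np.exp(min(0.0, h_old - h_new)):  # Metropolis test
            q = q_new
        samples.append(q.copy())
    return np.array(samples)
```

Because the leapfrog integrator nearly conserves the Hamiltonian, proposals are distant yet accepted with high probability, which is what lets HMC explore the very high-dimensional density-field parameter space efficiently.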
NASA Astrophysics Data System (ADS)
Liu, Tianyu; Du, Xining; Ji, Wei; Xu, X. George; Brown, Forrest B.
2014-06-01
For nuclear reactor analysis, such as neutron eigenvalue calculations, the time-consuming Monte Carlo (MC) simulations can be accelerated by using graphics processing units (GPUs). However, traditional MC methods are often history-based, and their performance on GPUs is affected significantly by the thread divergence problem. In this paper we describe the development of a newly designed event-based vectorized MC algorithm for solving the neutron eigenvalue problem. The code was implemented using NVIDIA's Compute Unified Device Architecture (CUDA) and tested on an NVIDIA Tesla M2090 GPU card. We found that although the vectorized MC algorithm greatly reduces the occurrence of thread divergence, thus enhancing the warp execution efficiency, the overall simulation speed is roughly ten times slower than the history-based MC code on GPUs. Profiling results suggest that the slow speed is probably due to the memory access latency caused by the large amount of global memory transactions. Possible solutions to improve the code efficiency are discussed.
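The contrast between history-based and event-based transport can be sketched with a toy absorb-or-scatter model: instead of an inner loop following one particle to termination, the event-based version advances every live particle by one event per iteration, operating on whole vectors. All the physics below is a stand-in (a single absorption probability), not the neutron eigenvalue solver described in the paper:

```python
import numpy as np

def event_based_collisions(n_particles, p_absorb=0.3, seed=0):
    """Event-based toy transport: each iteration processes one collision
    for every live particle at once. Returns, per particle, the number
    of scatters survived before absorption."""
    rng = np.random.default_rng(seed)
    alive = np.ones(n_particles, dtype=bool)
    scatters = np.zeros(n_particles, dtype=int)
    while alive.any():
        live_idx = np.flatnonzero(alive)
        u = rng.random(live_idx.size)
        absorbed = live_idx[u < p_absorb]    # these histories terminate
        survived = live_idx[u >= p_absorb]   # these scatter and continue
        scatters[survived] += 1
        alive[absorbed] = False
    return scatters
```

The mean scatter count should approach (1 - p)/p ≈ 2.33 for p = 0.3. In a real GPU code each iteration would additionally batch particles by event type so that threads in a warp execute the same branch, which is the divergence reduction the abstract describes.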
NASA Astrophysics Data System (ADS)
Ito, Masakazu; Mito, Masaki; Deguchi, Hiroyuki; Takeda, Kazuyoshi
1994-03-01
The measurements of magnetic heat capacity and susceptibility of the one-dimensional S=1 antiferromagnet (CH3)4NNi(NO2)3 (TMNIN) have been carried out in order to make a comparison with the theoretical results of a quantum Monte Carlo method for the Haldane system. The results for the heat capacity, which show a broad maximum around 10 K, are well reproduced by the theory with the interaction J/k_B = -12.0 ± 1.0 K in the temperature range T > 0.2|J|S(S+1)/k_B. The low-temperature heat capacity exhibits an exponential decay with gap energy Δ/k_B = 5.3 ± 0.2 K, which gives Δ = 0.44|J|, in contrast to the linear dependence on temperature as in the case of half-integer spin. The residual magnetic entropy below 0.7 K is estimated to be 0.07% of Nk_B ln 3, which excludes the possibility of three-dimensional ordering of the spin system at lower temperatures. The observed susceptibility also agrees with the theory, with J/k_B = -10.9 K and g = 2.02, in the whole temperature region when we take the effect of the finite length of the chains into consideration.
NASA Astrophysics Data System (ADS)
von Allmen, Paul; Lee, Seungwon; Gulkis, Samuel; Hofstadter, Mark; Choukroun, Mathieu; Keihm, Stephen; Janssen, Michael; Encrenaz, Pierre
2014-05-01
The line shape of molecular emission in a cometary coma is determined in part by the distribution of the molecular velocity and rotational level population. It is commonly believed that close to the cometary nucleus there is a transition region in the coma, called the Knudsen layer, in which the velocity distribution of the gas is non-Maxwellian (W. Huebner and W. Markiewicz, 2000, Icarus, 148, 594-596). Similarly, in regions where molecular collisions are rare, the rotational level distribution differs from the Boltzmann distribution and non-local thermal equilibrium effects determine the molecular emission line shape. We have used the Direct Simulation Monte Carlo method to determine the velocity and rotational level distributions for low-lying ground state transitions throughout the coma, and computed the resulting emission line shape for the isotopologues of H2O with observational parameters close to those for the MIRO instrument on the Rosetta spacecraft en route to 67P/Churyumov-Gerasimenko. We will discuss the effects of heliocentric distance, diurnal and seasonal variations, and how observations of non-Maxwellian spectral line shapes, in the Knudsen region and elsewhere, can be used to infer physical properties of the nucleus and coma.
Chen, Jinsong; Hubbard, Susan; Rubin, Yoram; Murray, Christopher J.; Roden, Eric E.; Majer, Ernest L.
2004-12-22
The paper demonstrates the use of ground-penetrating radar (GPR) tomographic data for estimating extractable Fe(II) and Fe(III) concentrations using a Markov chain Monte Carlo (MCMC) approach, based on data collected at the DOE South Oyster Bacterial Transport Site in Virginia. Analysis of multidimensional data including physical, geophysical, geochemical, and hydrogeological measurements collected at the site shows that GPR attenuation and lithofacies are most informative for the estimation. A statistical model is developed for integrating the GPR attenuation and lithofacies data. In the model, lithofacies is considered as a spatially correlated random variable and petrophysical models for linking GPR attenuation to geochemical parameters were derived from data at and near boreholes. Extractable Fe(II) and Fe(III) concentrations at each pixel between boreholes are estimated by conditioning to the co-located GPR data and the lithofacies measurements along boreholes through spatial correlation. Cross-validation results show that geophysical data, constrained by lithofacies, provided information about extractable Fe(II) and Fe(III) concentration in a minimally invasive manner and with a resolution unparalleled by other geochemical characterization methods. The developed model is effective and flexible, and should be applicable for estimating other geochemical parameters at other sites.
Jung, Jae Won; Kim, Jong Oh; Yeo, Inhwan Jason; Cho, Young-Bin; Kim, Sun Mo; DiBiase, Steven
2012-12-15
Purpose: Fast and accurate transit portal dosimetry was investigated by developing a density-scaled layer model of electronic portal imaging device (EPID) and applying it to a clinical environment. Methods: The model was developed for fast Monte Carlo dose calculation. The model was validated through comparison with measurements of dose on EPID using first open beams of varying field sizes under a 20-cm-thick flat phantom. After this basic validation, the model was further tested by applying it to transit dosimetry and dose reconstruction that employed our predetermined dose-response-based algorithm developed earlier. The application employed clinical intensity-modulated beams irradiated on a Rando phantom. The clinical beams were obtained through planning on pelvic regions of the Rando phantom simulating prostate and large pelvis intensity modulated radiation therapy. To enhance agreement between calculations and measurements of dose near penumbral regions, convolution conversion of acquired EPID images was alternatively used. In addition, thickness-dependent image-to-dose calibration factors were generated through measurements of image and calculations of dose in EPID through flat phantoms of various thicknesses. The factors were used to convert acquired images in EPID into dose. Results: For open beam measurements, the model showed agreement with measurements in dose difference better than 2% across open fields. For tests with a Rando phantom, the transit dosimetry measurements were compared with forwardly calculated doses in EPID showing gamma pass rates between 90.8% and 98.8% given 4.5 mm distance-to-agreement (DTA) and 3% dose difference (DD) for all individual beams tried in this study. The reconstructed dose in the phantom was compared with forwardly calculated doses showing pass rates between 93.3% and 100% in isocentric perpendicular planes to the beam direction given 3 mm DTA and 3% DD for all beams. 
On isocentric axial planes, the pass rates varied between 95.8% and 99.9% for all individual beams and they were 98.2% and 99.9% for the composite beams of the small and large pelvis cases, respectively. Three-dimensional gamma pass rates were 99.0% and 96.4% for the small and large pelvis cases, respectively. Conclusions: The layer model of EPID built for Monte Carlo calculations offered fast (less than 1 min) and accurate calculation for transit dosimetry and dose reconstruction.
Pacilio, M.; Lanconelli, N.; Lo Meo, S.; Betti, M.; Montani, L.; Torres Aroche, L. A.; Coca Perez, M. A.
2009-05-15
Several updated Monte Carlo (MC) codes are available to perform calculations of voxel S values for radionuclide targeted therapy. The aim of this work is to analyze the differences in the calculations obtained by different MC codes and their impact on absorbed dose evaluations performed by voxel dosimetry. Voxel S values for monoenergetic sources (electrons and photons) and different radionuclides ({sup 90}Y, {sup 131}I, and {sup 188}Re) were calculated. Simulations were performed in soft tissue. Three general-purpose MC codes were employed for simulating radiation transport: MCNP4C, EGSnrc, and GEANT4. The data published by the MIRD Committee in Pamphlet No. 17, obtained with the EGS4 MC code, were also included in the comparisons. The impact of the differences (in terms of voxel S values) among the MC codes was also studied by convolution calculations of the absorbed dose in a volume of interest. For uniform activity distribution of a given radionuclide, dose calculations were performed on spherical and elliptical volumes, varying the mass from 1 to 500 g. For simulations with monochromatic sources, differences for self-irradiation voxel S values were mostly confined within 10% for both photons and electrons, but with electron energy less than 500 keV, the voxel S values referred to the first neighbor voxels showed large differences (up to 130%, with respect to EGSnrc) among the updated MC codes. For radionuclide simulations, noticeable differences arose in voxel S values, especially in the bremsstrahlung tails, or when a high contribution from electrons with energy of less than 500 keV is involved. In particular, for {sup 90}Y the updated codes showed a remarkable divergence in the bremsstrahlung region (up to about 90% in terms of voxel S values) with respect to the EGS4 code. 
Further, variations were observed up to about 30%, for small source-target voxel distances, when low-energy electrons cover an important part of the emission spectrum of the radionuclide (in our case, for {sup 131}I). For {sup 90}Y and {sup 188}Re, the differences among the various codes have a negligible impact (within a few percent) on convolution calculations of the absorbed dose; thus any one of the MC programs is suitable to produce voxel S values for radionuclide targeted therapy dosimetry. However, if a low-energy beta-emitting radionuclide is considered, these differences can also affect dose deposition at small source-target voxel distances, leading to more conspicuous variations (about 9% for {sup 131}I) when calculating the absorbed dose in the volume of interest.
Reboredo, F A; Hood, R Q; Kent, P C
2009-01-06
We develop a formalism and present an algorithm for optimization of the trial wave function used in fixed-node diffusion quantum Monte Carlo (DMC) methods. The formalism is based on the DMC mixed estimator of the ground state probability density. We take advantage of a basic property of the walker configuration distribution generated in a DMC calculation to (i) project out a multi-determinant expansion of the fixed-node ground state wave function and (ii) define a cost function that relates the interacting fixed-node ground state and the non-interacting trial wave functions. We show that (a) locally smoothing out the kink of the fixed-node ground-state wave function at the node generates a new trial wave function with better nodal structure, and (b) the noise in the fixed-node wave function resulting from finite sampling plays a beneficial role, allowing the nodes to adjust towards those of the exact many-body ground state in a simulated-annealing-like process. Based on these principles, we propose a method to improve both single-determinant and multi-determinant expansions of the trial wave function. The method can be generalized to other wave function forms such as Pfaffians. We test the method in a model system where benchmark configuration interaction calculations can be performed and most components of the Hamiltonian are evaluated analytically. Comparing the DMC calculations with the exact solutions, we find that the trial wave function is systematically improved. The overlap of the optimized trial wave function and the exact ground state converges to 100% even when starting from wave functions orthogonal to the exact ground state. Similarly, the DMC total energy and density converge to the exact solutions for the model. In the optimization process we find an optimal non-interacting nodal potential of density-functional-like form whose existence was predicted in a previous publication [Phys. Rev. B 77, 245110 (2008)]. 
Tests of the method are extended to a model system with a conventional Coulomb interaction where we show we can obtain the exact Kohn-Sham effective potential from the DMC data.
Lujan, Paul Joseph; /UC, Berkeley /LBL, Berkeley
2009-12-01
This thesis presents a measurement of the top quark mass obtained from p{bar p} collisions at {radical}s = 1.96 TeV at the Fermilab Tevatron using the CDF II detector. The measurement uses a matrix element integration method to calculate a t{bar t} likelihood, employing a Quasi-Monte Carlo integration, which enables us to take into account effects due to finite detector angular resolution and quark mass effects. We calculate a t{bar t} likelihood as a 2-D function of the top pole mass m{sub t} and {Delta}{sub JES}, where {Delta}{sub JES} parameterizes the uncertainty in our knowledge of the jet energy scale; it is a shift applied to all jet energies in units of the jet-dependent systematic error. By introducing {Delta}{sub JES} into the likelihood, we can use the information contained in W boson decays to constrain {Delta}{sub JES} and reduce error due to this uncertainty. We use a neural network discriminant to identify events likely to be background, and apply a cut on the peak value of individual event likelihoods to reduce the effect of badly reconstructed events. This measurement uses a total of 4.3 fb{sup -1} of integrated luminosity, requiring events with a lepton, large E{sub T}, and exactly four high-energy jets in the pseudorapidity range |{eta}| < 2.0, of which at least one must be tagged as coming from a b quark. In total, we observe 738 events before and 630 events after applying the likelihood cut, and measure m{sub t} = 172.6 {+-} 0.9 (stat.) {+-} 0.7 (JES) {+-} 1.1 (syst.) GeV/c{sup 2}, or m{sub t} = 172.6 {+-} 1.6 (tot.) GeV/c{sup 2}.
Tikhonov, D B; Zhorov, B S
1998-01-01
A model of the nicotinic acetylcholine receptor ion channel was elaborated based on data from electron microscopy, affinity labeling, cysteine scanning, mutagenesis studies, and channel blockade. A restrained Monte Carlo minimization method was used for the calculations. Five identical M2 segments (the sequence EKMTLSISVL10LALTVFLLVI20V) were arranged in five-helix bundles with various geometrical profiles of the pore. For each bundle, energy profiles for chlorpromazine, QX-222, pentamethonium, and other blocking drugs pulled through the pore were calculated. An optimal model obtained allows all of the blockers free access to the pore, but retards them at the rings of residues known to contribute to the corresponding binding sites. In this model, M2 helices are necessarily kinked. They come into contact with each other at the cytoplasmic end but diverge at the synaptic end, where the N-termini of M1 segments may contribute to the pore. The kinks disengage alpha-helical H-bonds between Ala12 and Ser8. The uncoupled lone electron pairs of Ser8 carbonyl oxygens protrude into the pore, forming a hydrophilic ring that may be important for the permeation of cations. A split network of H-bonds provides flexibility to the chains Val9-Ala12, whose numerous conformations form only two or three intrasegment H-bonds. The cross-sectional dimensions of the interface between the flexible chains vary essentially at the level of Leu11. We suggest that conformational transitions in the chains Val9-Ala12 are responsible for channel gating, whereas rotations of the more stable alpha-helical parts of M2 segments may be necessary to transfer the channel into the desensitized state. PMID:9449326
Markov Chain Monte Carlo and Related Topics Department of Statistics
Liu, Jun
A review of recent developments in Markov chain Monte Carlo (MCMC) methodology, which provides an enormous scope for realistic statistical modelling.
Romano, Paul K. (Paul Kollath)
2013-01-01
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there ...
Compressible generalized hybrid Monte Carlo.
Fang, Youhan; Sanz-Serna, J M; Skeel, Robert D
2014-05-01
One of the most demanding calculations is to generate random samples from a specified probability distribution (usually with an unknown normalizing prefactor) in a high-dimensional configuration space. One often has to resort to using a Markov chain Monte Carlo method, which converges only in the limit to the prescribed distribution. Such methods typically inch through configuration space step by step, with acceptance of a step based on a Metropolis(-Hastings) criterion. An acceptance rate of 100% is possible in principle by embedding configuration space in a higher dimensional phase space and using ordinary differential equations. In practice, numerical integrators must be used, lowering the acceptance rate. This is the essence of hybrid Monte Carlo methods. Presented is a general framework for constructing such methods under relaxed conditions: the only geometric property needed is (weakened) reversibility; volume preservation is not needed. The possibilities are illustrated by deriving a couple of explicit hybrid Monte Carlo methods, one based on barrier-lowering variable-metric dynamics and another based on isokinetic dynamics. PMID:24811626
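As an illustration of the basic hybrid Monte Carlo mechanism that the paper generalizes (numerical integration of Hamiltonian dynamics in an augmented phase space, followed by a Metropolis test that corrects the integrator error), here is a minimal sketch for a one-dimensional standard normal target. The step size, trajectory length, and target are illustrative choices, not taken from the paper:

```python
import math
import random

def hmc_step(q, log_prob, grad_log_prob, eps=0.2, n_leap=10):
    """One hybrid Monte Carlo step: leapfrog integration of Hamiltonian
    dynamics in an augmented (position, momentum) phase space, followed
    by a Metropolis test that corrects the integrator's energy error."""
    p = random.gauss(0.0, 1.0)                   # fresh auxiliary momentum
    q_new, p_new = q, p
    p_new += 0.5 * eps * grad_log_prob(q_new)    # initial half kick
    for _ in range(n_leap - 1):
        q_new += eps * p_new                     # drift
        p_new += eps * grad_log_prob(q_new)      # full kick
    q_new += eps * p_new
    p_new += 0.5 * eps * grad_log_prob(q_new)    # final half kick
    h_old = -log_prob(q) + 0.5 * p * p           # total "energy" before
    h_new = -log_prob(q_new) + 0.5 * p_new * p_new
    if random.random() < math.exp(min(0.0, h_old - h_new)):
        return q_new, True
    return q, False

# target: standard normal, log pi(q) = -q^2/2 up to a constant
log_prob = lambda q: -0.5 * q * q
grad_log_prob = lambda q: -q

random.seed(1)
q, samples, accepts = 0.0, [], 0
for _ in range(5000):
    q, accepted = hmc_step(q, log_prob, grad_log_prob)
    accepts += accepted
    samples.append(q)
```

Because the leapfrog integrator nearly conserves the Hamiltonian, the acceptance rate stays high even for fairly long trajectories, which is the practical appeal of the method.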
Monte Carlo techniques in statistical physics
NASA Astrophysics Data System (ADS)
Murthy, K. P. N.
2006-11-01
In this paper we shall briefly review a few Markov Chain Monte Carlo methods for simulating closed systems described by canonical ensembles. We cover both Boltzmann and non-Boltzmann sampling techniques. The Metropolis algorithm is a typical example of a Boltzmann Monte Carlo method. We discuss the time-symmetry of the Markov chain generated by Metropolis-like algorithms that obey detailed balance. The non-Boltzmann Monte Carlo techniques reviewed include multicanonical and Wang-Landau sampling. We list what we consider as milestones in the historical development of Monte Carlo methods in statistical physics. We dedicate this article to Prof. Dr. G. Ananthakrishna and wish him the very best in the coming years.
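As a concrete instance of the Boltzmann sampling reviewed above, the following sketch runs single-spin-flip Metropolis on a small one-dimensional Ising chain and compares the mean energy against the exact free-chain result; the lattice size, temperature, and sweep counts are illustrative choices:

```python
import math
import random

def metropolis_ising_1d(n_spins=50, beta=0.5, j=1.0, sweeps=4000, seed=7):
    """Metropolis sampling of a 1D Ising chain with free boundaries.
    Single-spin-flip proposals are accepted with min(1, exp(-beta*dE)),
    which satisfies detailed balance w.r.t. the Boltzmann distribution."""
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n_spins)]

    def bond_energy(i):
        # energy of the bonds touching spin i
        e = 0.0
        if i > 0:
            e += -j * s[i] * s[i - 1]
        if i < n_spins - 1:
            e += -j * s[i] * s[i + 1]
        return e

    energies = []
    for sweep in range(sweeps):
        for _ in range(n_spins):
            i = rng.randrange(n_spins)
            de = -2.0 * bond_energy(i)        # flipping s[i] negates its bonds
            if de <= 0.0 or rng.random() < math.exp(-beta * de):
                s[i] = -s[i]
        if sweep >= sweeps // 2:              # discard the first half as burn-in
            energies.append(sum(-j * s[k] * s[k + 1] for k in range(n_spins - 1)))
    return sum(energies) / len(energies)

# exact result for the free chain: <E> = -(N-1) J tanh(beta J)
mean_energy = metropolis_ising_1d()
```

The free-boundary chain is convenient for a check because its bond variables are independent, giving the closed-form mean energy quoted in the comment.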
NASA Astrophysics Data System (ADS)
Gereben, Orsolya; Pusztai, László
2011-08-01
The invariant environment refinement technique, as applied to reverse Monte Carlo modelling [invariant environment refinement technique + reverse Monte Carlo (INVERT + RMC); M. J. Cliffe, M. T. Dove, D. A. Drabold, and A. L. Goodwin, Phys. Rev. Lett. 104, 125501 (2010), 10.1103/PhysRevLett.104.125501], is extended so that it is now applicable for interpreting the structure factor (instead of the pair distribution function). The new algorithm, called the local invariance calculation, is presented by the examples of amorphous silicon, phosphorus, and liquid argon. As a measure of the effectiveness of the new algorithm, the ratio of exactly fourfold coordinated Si atoms was larger than obtained previously by the INVERT-RMC scheme.
Importance iteration in MORSE Monte Carlo calculations
Kloosterman, J.L.; Hoogenboom, J.E. (Interfaculty Reactor Institute)
1994-05-01
An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example showing that the obtained biasing parameters lead to a more efficient Monte Carlo calculation.
Comparative Monte Carlo efficiency by Monte Carlo analysis.
Rubenstein, B M; Gubernatis, J E; Doll, J D
2010-09-01
We propose a modified power method for computing the subdominant eigenvalue λ{sub 2} of a matrix or continuous operator. While useful both deterministically and stochastically, we focus on defining simple Monte Carlo methods for its application. The methods presented use random walkers of mixed signs to represent the subdominant eigenfunction. Accordingly, the methods must cancel these signs properly in order to sample this eigenfunction faithfully. We present a simple procedure to solve this sign problem and then test our Monte Carlo methods by computing λ{sub 2} of various Markov chain transition matrices. As |λ{sub 2}| of this matrix controls the rate at which Monte Carlo sampling relaxes to a stationary condition, its computation also enabled us to compare the efficiencies of several Monte Carlo algorithms as applied to two quite different types of problems. We first computed λ{sub 2} for several one- and two-dimensional Ising models, which have a discrete phase space, and compared the relative efficiencies of the Metropolis and heat-bath algorithms as functions of temperature and applied magnetic field. Next, we computed λ{sub 2} for a model of an interacting gas trapped by a harmonic potential, which has a multidimensional continuous phase space, and studied the efficiency of the Metropolis algorithm as a function of temperature and the maximum allowable step size Δ. Based on the λ{sub 2} criterion, we found for the Ising models that small lattices appear to give an adequate picture of comparative efficiency and that the heat-bath algorithm is more efficient than the Metropolis algorithm only at low temperatures, where both algorithms are inefficient. For the harmonic trap problem, we found that the traditional rule of thumb of adjusting Δ so that the Metropolis acceptance rate is around 50% is often suboptimal. In general, as a function of temperature or Δ, λ{sub 2} for this model displayed trends defining optimal efficiency that the acceptance ratio does not. 
The cases studied also suggested that Monte Carlo simulations for a continuum model are likely more efficient than those for a discretized version of the model. PMID:21230207
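The quantity computed above can be illustrated deterministically: deflate the known dominant eigenpair of a row-stochastic matrix (eigenvalue 1, right eigenvector of all ones, left eigenvector equal to the stationary distribution) and run plain power iteration on the remainder. This is a sketch of the deterministic analogue, not the authors' signed-walker Monte Carlo scheme, and the two-state chain is an illustrative example:

```python
def subdominant_eigenvalue(p, pi, iters=200):
    """Deterministic sketch of the lambda_2 computation: remove the known
    dominant eigenpair of a row-stochastic matrix P (eigenvalue 1, right
    eigenvector of all ones, left eigenvector pi) by deflation, then run
    plain power iteration on the deflated matrix A = P - 1 pi^T."""
    n = len(p)
    a = [[p[i][k] - pi[k] for k in range(n)] for i in range(n)]
    v = [1.0] + [0.0] * (n - 1)          # arbitrary starting vector
    lam = 1.0
    for _ in range(iters):
        w = [sum(a[i][k] * v[k] for k in range(n)) for i in range(n)]
        lam = max(w, key=abs)            # normalize by the largest entry
        v = [x / lam for x in w]
    return lam

# two-state chain with eigenvalues 1 and 0.7; stationary pi = (2/3, 1/3)
p = [[0.9, 0.1], [0.2, 0.8]]
pi = [2.0 / 3.0, 1.0 / 3.0]
lam2 = subdominant_eigenvalue(p, pi)
```

For this chain the deflated matrix has eigenvalues 0.7 and 0, so the iteration converges to λ{sub 2} = 0.7, the quantity that controls the chain's relaxation rate.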
Wormhole Hamiltonian Monte Carlo
Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak
2015-01-01
In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551
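The mode-searching idea in the last sentence can be sketched in one dimension: subtract a Gaussian-mixture approximation built from the modes found so far from the target density, then search the residual for its peak, which marks an undiscovered mode. The bimodal target and the grid search below are illustrative simplifications of the paper's residual-energy optimization:

```python
import math

def normal_pdf(x, mu, sd=1.0):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def target(x):
    # hypothetical bimodal target: equal-weight modes at -3 and +3
    return 0.5 * normal_pdf(x, -3.0) + 0.5 * normal_pdf(x, 3.0)

def find_new_mode(known_modes, lo=-10.0, hi=10.0, n_grid=2001):
    """Search the residual density (target minus a Gaussian mixture built
    from the modes discovered so far) for its peak; the peak sits at a
    mode the sampler has not yet found."""
    def residual(x):
        approx = sum(0.5 * normal_pdf(x, m) for m in known_modes)
        return target(x) - approx
    step = (hi - lo) / (n_grid - 1)
    xs = [lo + i * step for i in range(n_grid)]
    return max(xs, key=residual)

new_mode = find_new_mode([-3.0])    # only the mode at -3 is known so far
```

Subtracting the known-mode mixture flattens the already-discovered peak, so the residual is dominated by the mode at +3.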
Shell model the Monte Carlo way
Ormand, W.E.
1995-03-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.
Discrete Diffusion Monte Carlo for Electron Thermal Transport
NASA Astrophysics Data System (ADS)
Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory
2014-10-01
The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. is adapted to a Discrete Diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid IMC-DDMC (Implicit Monte Carlo) method. The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratory - Albuquerque.
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
Geodesic Monte Carlo on Embedded Manifolds
Byrne, Simon; Girolami, Mark
2013-01-01
Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024
Economic Risk Analysis: Using Analytical and Monte Carlo Techniques.
ERIC Educational Resources Information Center
O'Donnell, Brendan R.; Hickner, Michael A.; Barna, Bruce A.
2002-01-01
Describes the development and instructional use of a Microsoft Excel spreadsheet template that facilitates analytical and Monte Carlo risk analysis of investment decisions. Discusses a variety of risk assessment methods followed by applications of the analytical and Monte Carlo methods. Uses a case study to illustrate use of the spreadsheet tool…
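A minimal version of the Monte Carlo branch of such an investment risk analysis can be sketched in a few lines: draw the uncertain inputs from assumed distributions, push each draw through the net-present-value model, and summarize the resulting NPV distribution. All distributions and cash-flow figures below are invented for illustration and are not taken from the spreadsheet template described above:

```python
import random
import statistics

def npv(rate, cashflows):
    """Net present value of a cash-flow stream (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def monte_carlo_npv(n_trials=20000, seed=3):
    """Monte Carlo risk analysis of an investment: uncertain inputs are
    drawn from assumed distributions and propagated through the NPV model."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        capital = rng.triangular(900.0, 1100.0, 1000.0)   # initial outlay
        revenue = rng.gauss(300.0, 40.0)                  # annual net revenue
        results.append(npv(0.10, [-capital] + [revenue] * 5))
    mean = statistics.fmean(results)
    p_loss = sum(r < 0 for r in results) / n_trials       # downside risk
    return mean, p_loss

mean_npv, p_loss = monte_carlo_npv()
```

Unlike a single-point analytical estimate, the sampled distribution directly yields risk measures such as the probability of a negative NPV.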
Monte Carlo solution of antiferromagnetic quantum Heisenberg spin systems
Lee, D.H.; Joannopoulos, J.D.; Negele, J.W.
1984-08-01
A Monte Carlo method is introduced that overcomes the problem of alternating signs in Handscomb's method of simulating antiferromagnetic quantum Heisenberg systems. The scheme is applied to both bipartite and frustrated lattices. Results of internal energy, specific heat, and uniform and staggered susceptibilities are presented suggesting that quantum antiferromagnets may now be studied as extensively as classical spin systems using conventional Monte Carlo techniques.
Improving the safety of a body composition analyser based on the PGNAA method.
Miri-Hakimabad, Hashem; Izadi-Najafabadi, Reza; Vejdani-Noghreiyan, Alireza; Panjeh, Hamed
2007-12-01
The 252Cf radioisotope and 241Am-Be are intense neutron emitters that are readily encapsulated in compact, portable and sealed sources. Some features of these sources, such as a high neutron emission flux and a reliable neutron spectrum, make them suitable for the prompt gamma neutron activation analysis (PGNAA) method. The PGNAA method can be used in medicine for neutron radiography and body chemical composition analysis. However, 252Cf and 241Am-Be sources generate not only neutrons but also intense gamma-rays. Furthermore, the sample in medical treatments is a human body, so it may be exposed to bombardment by these gamma-rays. Moreover, accumulations of these high-rate gamma-rays in the detector volume cause simultaneous pulses that can pile up and distort the spectra in the region of interest (ROI). In order to remove these disadvantages in a practical way without being concerned about losing the thermal neutron flux, a gamma-ray filter made of Pb must be employed. The paper suggests a relatively safe body chemical composition analyser (BCCA) machine that uses a spherical Pb shield enclosing the neutron source. Gamma-ray shielding effects and the optimum radius of the spherical Pb shield have been investigated, using the MCNP-4C code, and compared with the unfiltered case, the bare source. Finally, experimental results demonstrate that an optimised gamma-ray shield for the neutron source in a BCCA can effectively reduce the risk of exposure to the 252Cf and 241Am-Be sources. PMID:18268376
A quasi-Monte Carlo Metropolis algorithm
Owen, Art B.; Tribble, Seth D.
2005-01-01
This work presents a version of the Metropolis–Hastings algorithm using quasi-Monte Carlo inputs. We prove that the method yields consistent estimates in some problems with finite state spaces and completely uniformly distributed inputs. In some numerical examples, the proposed method is much more accurate than ordinary Metropolis–Hastings sampling. PMID:15956207
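The construction can be sketched with a toy example: drive a Metropolis chain on a three-state target with the full period of a small linear congruential generator instead of IID uniforms (running a small generator through its entire period is one way to approximate the completely uniformly distributed inputs the theory requires). The LCG constants and the target weights are illustrative assumptions, not values from the paper:

```python
def metropolis_with_cud_inputs(weights, n_steps):
    """Metropolis sampler on a finite state space whose driving uniforms
    come from the full period of a small LCG rather than an IID stream,
    in the spirit of quasi-Monte Carlo Metropolis sampling."""
    m, a, c = 2**20, 1029, 221        # Hull-Dobell conditions: full period 2**20
    state = 123456                    # LCG seed (position in the cycle)

    def next_u():
        nonlocal state
        state = (a * state + c) % m
        return state / m

    n = len(weights)
    x, total = 0, 0.0
    for _ in range(n_steps):
        j = int(next_u() * n)                      # uniform proposal over states
        if next_u() * weights[x] < weights[j]:     # Metropolis acceptance
            x = j
        total += x
    return total / n_steps

# target distribution proportional to [0.5, 0.3, 0.2]; exact mean is 0.7
est = metropolis_with_cud_inputs([0.5, 0.3, 0.2], 2**19)
```

Each Metropolis step consumes two driving uniforms, so 2**19 steps exhaust the generator's 2**20-element period exactly once.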
Moon, Hyun Ho; Lee, Jong Joo; Choi, Sang Yule; Cha, Jae Sang; Kang, Jang Mook; Kim, Jong Tae; Shin, Myong Chul
2011-01-01
Recently there have been many studies of power systems with a focus on "New and Renewable Energy" as part of "New Growth Engine Industry" promoted by the Korean government. "New And Renewable Energy"-especially focused on wind energy, solar energy and fuel cells that will replace conventional fossil fuels-is a part of the Power-IT Sector which is the basis of the SmartGrid. A SmartGrid is a form of highly-efficient intelligent electricity network that allows interactivity (two-way communications) between suppliers and consumers by utilizing information technology in electricity production, transmission, distribution and consumption. The New and Renewable Energy Program has been driven with a goal to develop and spread through intensive studies, by public or private institutions, new and renewable energy which, unlike conventional systems, have been operated through connections with various kinds of distributed power generation systems. Considerable research on smart grids has been pursued in the United States and Europe. In the United States, a variety of research activities on the smart power grid have been conducted within EPRI's IntelliGrid research program. The European Union (EU), which represents Europe's Smart Grid policy, has focused on an expansion of distributed generation (decentralized generation) and power trade between countries with improved environmental protection. Thus, there is current emphasis on a need for studies that assesses the economic efficiency of such distributed generation systems. In this paper, based on the cost of distributed power generation capacity, calculations of the best profits obtainable were made by a Monte Carlo simulation. Monte Carlo simulations that rely on repeated random sampling to compute their results take into account the cost of electricity production, daily loads and the cost of sales and generate a result faster than mathematical computations. 
In addition, we suggest an optimal design that considers the distribution losses associated with power distribution systems, with a focus on sensing aspects, and distributed power generation. PMID:22164047
Discrete diffusion Monte Carlo for frequency-dependent radiative transfer
Densmore, Jeffrey D; Thompson, Kelly G; Urbatsch, Todd J
2010-11-17
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.
Guan, Fada
2012-04-27
Monte Carlo method has been successfully applied in simulating the particles transport problems. Most of the Monte Carlo simulation tools are static and they can only be used to perform the static simulations for the ...
Single scatter electron Monte Carlo
Svatos, M.M.
1997-03-01
A single scatter electron Monte Carlo code (SSMC), CREEP, has been written which bridges the gap between existing transport methods and modeling real physical processes. CREEP simulates ionization, elastic and bremsstrahlung events individually. Excitation events are treated with an excitation-only stopping power. The detailed nature of these simulations allows for calculation of backscatter and transmission coefficients, backscattered energy spectra, stopping powers, energy deposits, depth dose, and a variety of other associated quantities. Although computationally intense, the code relies on relatively few mathematical assumptions, unlike other charged particle Monte Carlo methods such as the commonly-used condensed history method. CREEP relies on sampling the Lawrence Livermore Evaluated Electron Data Library (EEDL) which has data for all elements with an atomic number between 1 and 100, over an energy range from approximately several eV (or the binding energy of the material) to 100 GeV. Compounds and mixtures may also be used by combining the appropriate element data via Bragg additivity.
NASA Astrophysics Data System (ADS)
Firmani, G.; Matta, J.
2012-04-01
The expansion of mining in the Pilbara region of Western Australia is resulting in the need to develop better water strategies to make below-water-table resources accessible, manage surplus water and deal with water demands for processing ore and construction. In all these instances, understanding the local and regional hydrogeology is fundamental to allowing sustainable mining while minimising the impacts on the environment. An understanding of the uncertainties of the hydrogeology is necessary to quantify the risks and make objective decisions rather than relying on subjective judgements. The aim of this paper is to review some of the methods proposed in the published literature and find approaches that can be practically implemented to estimate model uncertainties. In particular, this paper adopts two general probabilistic approaches that address parametric uncertainty estimation and its propagation in predictive scenarios: first-order analysis and Monte Carlo simulation. A case example application of the two techniques is also presented for the dewatering strategy of a large below-water-table open cut iron ore mine in the Pilbara region of Western Australia. This study demonstrates the weakness of the deterministic approach, as the coefficients of variation of some model parameters were greater than 1.0, and suggests a review of the model calibration method and conceptualisation. The uncertainty propagation into predictive scenarios was calculated by treating parameters with a coefficient of variation higher than 0.25 as deterministic, owing to the computational difficulty of achieving an accurate result with the Monte Carlo method. The conclusion of this case study was that first-order analysis appears to be a successful and simple tool when the coefficients of variation of calibrated parameters are less than 0.25.
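The two uncertainty-propagation approaches compared above can be contrasted on a toy model: first-order analysis propagates the input standard deviation through the local derivative, while Monte Carlo samples the input distribution directly. The inverse-conductivity model and all numbers below are hypothetical, chosen only to show that the two estimates agree when the coefficient of variation is small (here 0.1, below the 0.25 threshold discussed above):

```python
import random
import statistics

def first_order_sd(x_mean, x_sd, df):
    """First-order (FOSM) propagation: sd(f(X)) ~ |f'(mean)| * sd(X)."""
    return abs(df(x_mean)) * x_sd

def monte_carlo_sd(x_mean, x_sd, f, n=100000, seed=11):
    """Monte Carlo propagation: sample the input, evaluate the model,
    and take the sample standard deviation of the outputs."""
    rng = random.Random(seed)
    ys = [f(k) for k in (rng.gauss(x_mean, x_sd) for _ in range(n)) if k > 0]
    return statistics.stdev(ys)

# hypothetical model: steady drawdown inversely proportional to conductivity
f = lambda k: 100.0 / k
df = lambda k: -100.0 / k ** 2

sd_fo = first_order_sd(10.0, 1.0, df)   # analytic: |f'(10)| * 1.0 = 1.0
sd_mc = monte_carlo_sd(10.0, 1.0, f)
```

For a larger coefficient of variation the model's nonlinearity makes the first-order estimate drift away from the sampled one, which is the behaviour motivating the 0.25 cut-off above.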
NASA Astrophysics Data System (ADS)
Grujicic, M.; Ramaswami, S.; Snipes, J. S.; Avuthu, V.; Galgalikar, R.; Zhang, Z.
2015-09-01
A thermo-mechanical finite element analysis of the friction stir welding (FSW) process is carried out and the evolution of the material state (e.g., temperature, the extent of plastic deformation, etc.) monitored. Subsequently, the finite-element results are used as input to a Monte-Carlo simulation algorithm in order to predict the evolution of the grain microstructure within different weld zones, during the FSW process and the subsequent cooling of the material within the weld to room temperature. To help delineate different weld zones, (a) temperature and deformation fields during the welding process, and during the subsequent cooling, are monitored; and (b) competition between the grain growth (driven by the reduction in the total grain-boundary surface area) and dynamic-recrystallization grain refinement (driven by the replacement of highly deformed material with an effectively "dislocation-free" material) is simulated. The results obtained clearly revealed that different weld zones form as a result of different outcomes of the competition between the grain growth and grain refinement processes.
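Monte Carlo grain-growth simulations of this kind are commonly built on a Q-state Potts model; the abstract does not specify the authors' algorithm, so the following is only an assumed minimal sketch of the grain-growth half of the competition, in the zero-temperature limit:

```python
import math
import random

def potts_grain_step(grid, T=0.0):
    """One Monte Carlo step (N^2 attempted flips) of a 2-D Q-state Potts
    grain-growth model: a site is reassigned to a randomly chosen
    neighbour's grain ID, accepted by Metropolis on the change in
    grain-boundary energy (number of unlike neighbour pairs)."""
    n = len(grid)
    for _ in range(n * n):
        i, j = random.randrange(n), random.randrange(n)
        nbrs = [grid[(i + 1) % n][j], grid[(i - 1) % n][j],
                grid[i][(j + 1) % n], grid[i][(j - 1) % n]]
        new = random.choice(nbrs)          # candidate grain orientation
        old = grid[i][j]
        if new == old:
            continue
        # exact change in boundary energy from flipping this one site
        dE = sum(new != s for s in nbrs) - sum(old != s for s in nbrs)
        if dE <= 0 or (T > 0 and random.random() < math.exp(-dE / T)):
            grid[i][j] = new
    return grid
```

Repeated steps monotonically reduce the total grain-boundary energy at T = 0, i.e. grains coarsen; a recrystallization term (nucleating new orientations in heavily deformed regions) would compete against this, as described above.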
Tang Haibin; Cheng Jiao; Liu Chang; York, Thomas M.
2012-07-15
A two-dimensional axisymmetric electromagnetic particle-in-cell code with Monte Carlo collisions has been developed for applied-field magnetoplasmadynamic thruster simulation. This theoretical approach establishes a particle acceleration model to investigate the microscopic and macroscopic characteristics of particles. This new simulation code was used to study the physical processes associated with applied magnetic fields. In this paper (I), details of the computation procedure and results of predictions of local plasma and field properties are presented. The numerical model was applied to the configuration of a NASA Lewis Research Center 100-kW magnetoplasmadynamic thruster, which has well-documented experimental results. The applied magnetic field strength was varied from 0 to 0.12 T, and the effects on thrust were calculated as a basis for verification of the theoretical approach. With this confirmation, the changes in the distributions of ion density, velocity, and temperature throughout the acceleration region related to the applied magnetic fields were investigated. Using these results, the effects of the applied field on physical processes in the thruster discharge region could be represented in detail, and those results are reported.
Case, J.B.; Buesch, D.C.
2004-01-01
Predictions of waste canister and repository driftwall temperatures as functions of space and time are important to evaluate pre-closure performance of the proposed repository for spent nuclear fuel and high-level radioactive waste at Yucca Mountain, Nevada. Variations in the lithostratigraphic features in densely welded and crystallized rocks of the 12.8-million-year-old Topopah Spring Tuff, especially the porosity resulting from lithophysal cavities, affect thermal properties. A simulated emplacement drift is based on projecting lithophysal cavity porosity values 50 to 800 m from the Enhanced Characterization of the Repository Block cross drift. Lithophysal cavity porosity varies from 0.00 to 0.05 cm3/cm3 in the middle nonlithophysal zone and from 0.03 to 0.28 cm3/cm3 in the lower lithophysal zone. A ventilation model and computer program titled "Monte Carlo Simulation of Ventilation" (MCSIMVENT), which is based on a composite thermal-pulse calculation, simulates statistical variability and uncertainty of rock-mass thermal properties and ventilation performance along a simulated emplacement drift for a pre-closure period of 50 years. Although ventilation efficiency is relatively insensitive to thermal properties, variations in lithophysal porosity along the drift can result in peak driftwall temperatures ranging from 40 to 85 °C for the pre-closure period. Copyright © 2004 by ASME.
Kinetic Monte Carlo approach to modeling dislocation mobility
Cai, Wei
A kinetic Monte Carlo (kMC) approach to modeling dislocation motion is presented, directly linking the energetics of dislocation kinks to dislocation mobility. The case of planar glide of a screw dislocation in Si, an ideal test-bed for the method, is first discussed, followed ...
Tanaka, Kenichi; Takada, Jun
2013-07-01
For the in-situ measurement of (90)Sr contamination, collimators to be combined with a β-ray survey meter were designed with MCNP-4C calculations. The designed collimators were manufactured and their characteristics were measured, in accordance with the calculations. The calculated energy deposition in the survey meter agreed within 27% with the measured counting rate in terms of their dependence on the collimator dimensions. This supports the usability of the manufactured collimators and the numerical correction of the sensitivity of the survey meter depending on the geometry. PMID:23474399
Bednarz, Bryan; Xu, X. George
2008-07-15
A Monte Carlo-based procedure to assess fetal doses from 6-MV external photon beam radiation treatments has been developed to improve upon existing techniques that are based on AAPM Task Group Report 36 published in 1995 [M. Stovall et al., Med. Phys. 22, 63-82 (1995)]. Anatomically realistic models of the pregnant patient representing 3-, 6-, and 9-month gestational stages were implemented into the MCNPX code together with a detailed accelerator model that is capable of simulating scattered and leakage radiation from the accelerator head. Absorbed doses to the fetus were calculated for six different treatment plans for sites above the fetus and one treatment plan for fibrosarcoma in the knee. For treatment plans above the fetus, the fetal doses tended to increase with increasing stage of gestation. This was due to the decrease in distance between the fetal body and field edge with increasing stage of gestation. For the treatment field below the fetus, the absorbed doses tended to decrease with increasing gestational stage of the pregnant patient, due to the increasing size of the fetus and relative constant distance between the field edge and fetal body for each stage. The absorbed doses to the fetus for all treatment plans ranged from a maximum of 30.9 cGy to the 9-month fetus to 1.53 cGy to the 3-month fetus. The study demonstrates the feasibility to accurately determine the absorbed organ doses in the mother and fetus as part of the treatment planning and eventually in risk management.
Simulated Annealing using Hybrid Monte Carlo
R. Salazar; R. Toral
1997-07-31
We propose a variant of the simulated annealing method for the optimization of multivariate differentiable functions. The method uses global updates via the Hybrid Monte Carlo algorithm, in its generalized version, for the proposal of new configurations. We show how this choice can improve upon the performance of simulated annealing methods (mainly when the number of variables is large) by allowing a more effective search scheme and a faster annealing schedule.
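The global-update idea can be sketched as follows. This is a toy implementation under assumed parameter values (the schedule, step sizes and quadratic test function are illustrative, not the authors' code): each annealing move is a full leapfrog trajectory updating all variables at once, accepted by Metropolis on the total "energy" H = beta*f(x) + p^2/2.

```python
import math
import random

def anneal_hmc(f, grad, x, beta0=1.0, beta_max=50.0, rate=1.05,
               eps=0.05, n_leap=20, seed=0):
    """Simulated annealing in which each proposal is a global Hybrid
    Monte Carlo (leapfrog) trajectory over all coordinates at once."""
    random.seed(seed)
    beta = beta0
    while beta < beta_max:
        p = [random.gauss(0.0, 1.0) for _ in x]      # fresh momenta
        xn, pn, g = list(x), list(p), grad(x)
        for _ in range(n_leap):                      # leapfrog integration
            pn = [pi - 0.5 * eps * beta * gi for pi, gi in zip(pn, g)]
            xn = [xi + eps * pi for xi, pi in zip(xn, pn)]
            g = grad(xn)
            pn = [pi - 0.5 * eps * beta * gi for pi, gi in zip(pn, g)]
        dH = (beta * (f(xn) - f(x))
              + 0.5 * (sum(q * q for q in pn) - sum(q * q for q in p)))
        if dH <= 0 or random.random() < math.exp(-dH):
            x = xn                                   # accept global move
        beta *= rate                                 # annealing schedule
    return x
```

Because the leapfrog trajectory uses gradient information, every proposal moves all variables coherently, which is the source of the advantage over single-coordinate updates when the number of variables is large.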
Quantum algorithm for exact Monte Carlo sampling
Nicolas Destainville; Bertrand Georgeot; Olivier Giraud
2010-06-23
We build a quantum algorithm which uses the Grover quantum search procedure in order to sample the exact equilibrium distribution of a wide range of classical statistical mechanics systems. The algorithm is based on recently developed exact Monte Carlo sampling methods, and yields a polynomial gain compared to classical procedures.
Monte Carlo evaluation of thermal desorption rates
Adams, J.E.; Doll, J.D.
1981-05-01
The recently reported method for computing thermal desorption rates via a Monte Carlo evaluation of the appropriate transition state theory expression (J. E. Adams and J. D. Doll, J. Chem. Phys. 74, 1467 (1980)) is extended, by the use of importance sampling, so as to generate the complete temperature dependence in a single calculation. We also describe a straightforward means of calculating the activation energy for the desorption process within the same Monte Carlo framework. The result obtained in this way represents, for the case of a simple desorptive event, an upper bound to the true value.
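The single-run temperature extrapolation rests on importance-sampling reweighting; a minimal sketch of that generic idea (the two-level toy system below is illustrative, not the desorption system of the paper):

```python
import math
import random

def reweighted_average(energies, obs, beta0, beta):
    """Reweighting estimate of <obs> at inverse temperature beta from
    samples generated in a single run at beta0: each sample carries
    weight exp(-(beta - beta0) * E)."""
    w = [math.exp(-(beta - beta0) * e) for e in energies]
    return sum(a * wi for a, wi in zip(obs, w)) / sum(w)

# Demo on a two-level toy system (E = 0 or 1), sampled directly at beta0 = 1
random.seed(0)
beta0 = 1.0
p1 = math.exp(-beta0) / (1.0 + math.exp(-beta0))
energies = [1.0 if random.random() < p1 else 0.0 for _ in range(100000)]
est = reweighted_average(energies, energies, beta0, 2.0)   # <E> at beta = 2
exact = math.exp(-2.0) / (1.0 + math.exp(-2.0))
```

One set of samples thus yields averages over a whole temperature range, as long as the target temperature stays close enough to the sampling temperature for the weights to be well behaved.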
Charpentier, Ronald R.; Klett, T.R.
2007-01-01
The U.S. Geological Survey has developed two Monte Carlo programs for assessment of undiscovered conventional oil and gas resources. EMCEE (for Energy Monte Carlo) and Emc2 (for Energy Monte Carlo program 2) are programs that calculate probabilistic estimates of undiscovered resources based on input distributions for numbers and sizes of undiscovered fields. Emc2 uses specific types of distributions for the input, whereas EMCEE allows greater flexibility of the input distribution types.
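The core Monte Carlo aggregation step common to such programs can be sketched as follows (a toy in the spirit of EMCEE/Emc2, not the USGS code; the field-count range and lognormal size distribution are assumed for illustration):

```python
import random

def assess(n_trials=50000, seed=42):
    """Toy Monte Carlo resource aggregation: draw a number of undiscovered
    fields and a size for each, accumulate the total, and report the
    conventional F95/F50/F5 fractiles of the total-resource distribution."""
    random.seed(seed)
    totals = []
    for _ in range(n_trials):
        n_fields = random.randint(5, 15)             # assumed field-count range
        total = sum(random.lognormvariate(2.0, 1.0)  # assumed size distribution
                    for _ in range(n_fields))
        totals.append(total)
    totals.sort()
    f95 = totals[int(0.05 * n_trials)]   # 95% chance of at least this much
    f50 = totals[int(0.50 * n_trials)]
    f5 = totals[int(0.95 * n_trials)]
    return f95, f50, f5
```

The flexibility difference noted above amounts to how the two input distributions (field count and field size) may be specified: fixed families in Emc2 versus arbitrary user-supplied distributions in EMCEE.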
Altman, Michael B; Jin, Jian-Yue; Kim, Sangroh; Wen, Ning; Liu, Dezhi; Siddiqui, M Salim; Ajlouni, Munther I; Movsas, Benjamin; Chetty, Indrin J
2012-01-01
Current commercially available planning systems with Monte Carlo (MC)-based final dose calculation in IMRT planning employ pencil-beam (PB) algorithms in the optimization process. Consequently, dose coverage for SBRT lung plans can feature cold spots at the interface between lung and tumor tissue. For lung wall (LW)-seated tumors, there can also be hot spots within nearby normal organs (for example, ribs). This study evaluated two practical approaches to limiting cold spots within the target and reducing high doses to surrounding normal organs in MC-based IMRT planning of LW-seated tumors. The first is "iterative reoptimization": the MC calculation (with PB-based optimization) is initially performed, the resultant cold spot is contoured and used as a simultaneous boost volume, and the MC-based dose is then recomputed. The second technique uses noncoplanar beam angles with limited path through lung tissue. Both techniques were evaluated against a conventional coplanar beam approach with a single MC calculation. In all techniques the prescription dose was normalized to cover 95% of the PTV. Fifteen SBRT lung cases with LW-seated tumors were planned. The results from iterative reoptimization showed that the conformity index (CI) and/or PTV dose uniformity (UPTV) improved in 12/15 plans; the average improvements were 13% and 24%, respectively. Non-improved plans had PTVs near the skin, the trachea, and/or very small lung involvement. The maximum dose to a 1 cc volume (D1cc) of surrounding OARs decreased in 14/15 plans (average 10%). Using noncoplanar beams showed an average improvement of 7% in 10/15 cases and 11% in 5/15 cases for CI and UPTV, respectively. The D1cc was reduced by an average of 6% in 10/15 cases to surrounding OARs. Choice of treatment planning technique did not statistically significantly change lung V5.
The results showed that the proposed practical approaches enhance dose conformity in MC-based IMRT planning of lung tumors treated with SBRT, improving target dose coverage and potentially reducing toxicities to surrounding normal organs. PMID:23149794
Monte Carlo simulations of TL and OSL in nanodosimetric materials and feldspars
Chen, Reuven
Article history: Available online 19 December 2014. Keywords: Thermoluminescence; LM-OSL; Monte Carlo simulations. [Fragment] ... carrying out Monte Carlo simulations for thermoluminescence (TL) and optically stimulated luminescence (OSL) ... Monte Carlo methods for the study of thermoluminescence (TL) were presented in the papers by Mandowski (2001 ...
Recent advances and future prospects for Monte Carlo
Brown, Forrest B
2010-01-01
The history of Monte Carlo methods is closely linked to that of computers: the first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.
Applications of Maxent to quantum Monte Carlo
Silver, R.N.; Sivia, D.S.; Gubernatis, J.E.; Jarrell, M.
1990-01-01
We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the "Analytical Continuation Problem." For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey "Kondo Universality." 24 refs., 7 figs.
Variance Reduction Techniques for Implicit Monte Carlo Simulations
Landman, Jacob Taylor
2013-09-19
The Implicit Monte Carlo (IMC) method is widely used for simulating thermal radiative transfer and solving the radiation transport equation. During an IMC run a grid network is constructed and particles are sourced into the problem to simulate...
Bayesian inverse problems with Monte Carlo forward models
Bal, Guillaume
The full application of Bayesian inference to inverse problems requires exploration of a posterior distribution that typically does not possess a standard form. In this context, Markov chain Monte Carlo (MCMC) methods are ...
Parallel Fission Bank Algorithms in Monte Carlo Criticality Calculations
Romano, Paul Kollath
In this work we describe a new method for parallelizing the source iterations in a Monte Carlo criticality calculation. Instead of having one global fission bank that needs to be synchronized, as is traditionally done, our ...
OBJECT KINETIC MONTE CARLO SIMULATIONS OF CASCADE ANNEALING IN TUNGSTEN
Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.
2014-03-31
The objective of this work is to study the annealing of primary cascade damage created by primary knock-on atoms (PKAs) of various energies, at various temperatures in bulk tungsten using the object kinetic Monte Carlo (OKMC) method.
Lisal, Martin
Simulation of chemical reaction ... the relationship between the RxMC method and other techniques that simulate chemical reaction behaviour is given. Keywords: reaction; equilibria; simulation. 1. Introduction. The behaviour of chemical reactions in highly non ...
Vishnevskiy, Yury V; Schwabedissen, Jan; Rykov, Anatolii N; Kuznetsov, Vladimir V; Makhova, Nina N
2015-11-01
Gas-phase structures of two isomers of dimethyl-substituted 1,5-diazabicyclo[3.1.0]hexane, namely the 3,3-dimethyl- and 6,6-dimethyl-1,5-diazabicyclo[3.1.0]hexane molecules, have been determined by the gas electron diffraction method. A new approach based on the Monte Carlo method has been developed and used for the analysis of the precision and accuracy of the refined structures. It was found that at 57 °C the 3,3-dimethyl derivative exists as a mixture of chair and boat conformers with abundances of 68(8)% and 32(8)%, respectively. 6,6-Dimethyl-1,5-diazabicyclo[3.1.0]hexane at 50 °C has only one stable conformation, with a planar five-membered ring within error limits. Theoretical calculations predict that the 6,6-dimethyl isomer is more stable than the 3,3-dimethyl isomer, with an energy difference of 3-5 kcal mol(-1). In order to explain the relative stability and bonding properties of the different structures, natural bond orbital (NBO), atoms in molecules (AIM), and interacting quantum atoms (IQA) analyses were performed. PMID:26461037
Sampling from a polytope and hard-disk Monte Carlo
Sebastian C. Kapfer; Werner Krauth
2013-01-21
The hard-disk problem, the statics and the dynamics of equal two-dimensional hard spheres in a periodic box, has had a profound influence on statistical and computational physics. Markov-chain Monte Carlo and molecular dynamics were first discussed for this model. Here we reformulate hard-disk Monte Carlo algorithms in terms of another classic problem, namely the sampling from a polytope. Local Markov-chain Monte Carlo, as proposed by Metropolis et al. in 1953, appears as a sequence of random walks in high-dimensional polytopes, while the moves of the more powerful event-chain algorithm correspond to molecular dynamics evolution. We determine the convergence properties of Monte Carlo methods in a special invariant polytope associated with hard-disk configurations, and the implications for convergence of hard-disk sampling. Finally, we discuss parallelization strategies for event-chain Monte Carlo and present results for a multicore implementation.
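The local Markov-chain algorithm of Metropolis et al. that the abstract reinterprets as a random walk in a polytope can be sketched in a few lines (a minimal illustration with assumed parameters, not the authors' event-chain code):

```python
import random

def metropolis_harddisk(pos, sigma, box, steps=1000, delta=0.1, seed=0):
    """Local hard-disk Monte Carlo (Metropolis et al., 1953): displace one
    disk of radius sigma at a time in a periodic box and reject any move
    that creates an overlap -- one step of the random walk inside the
    hard-disk polytope."""
    random.seed(seed)
    n = len(pos)
    for _ in range(steps):
        k = random.randrange(n)
        x = (pos[k][0] + random.uniform(-delta, delta)) % box
        y = (pos[k][1] + random.uniform(-delta, delta)) % box
        ok = True
        for j in range(n):
            if j == k:
                continue
            dx = (x - pos[j][0] + box / 2) % box - box / 2   # minimum image
            dy = (y - pos[j][1] + box / 2) % box - box / 2
            if dx * dx + dy * dy < (2 * sigma) ** 2:
                ok = False                                   # overlap: reject
                break
        if ok:
            pos[k] = (x, y)
    return pos
```

Each accepted displacement stays inside the feasible region (no overlaps), which is exactly the polytope picture: the non-overlap constraints are the facets, and the chain never leaves the polytope.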
2012-01-01
Background Computerized adaptive testing (CAT) is being applied to health outcome measures developed as paper-and-pencil (P&P) instruments. Differences in how respondents answer items administered by CAT vs. P&P can increase error in CAT-estimated measures if not identified and corrected. Method Two methods for detecting item-level mode effects are proposed using Bayesian estimation of posterior distributions of item parameters: (1) a modified robust Z (RZ) test, and (2) 95% credible intervals (CrI) for the CAT-P&P difference in item difficulty. A simulation study was conducted under the following conditions: (1) data-generating model (one- vs. two-parameter IRT model); (2) moderate vs. large DIF sizes; (3) percentage of DIF items (10% vs. 30%); and (4) mean difference in θ estimates across modes of 0 vs. 1 logits. This resulted in a total of 16 conditions with 10 generated datasets per condition. Results Both methods evidenced good to excellent false positive control, with RZ providing better control of false positives and CrI providing slightly higher power, irrespective of measurement model. False positives increased when items were very easy to endorse and when there were mode differences in mean trait level. True positives were predicted by CAT item usage, absolute item difficulty and item discrimination. RZ outperformed CrI, due to better control of false positive DIF. Conclusions Whereas false positives were well controlled, particularly for RZ, power to detect DIF was suboptimal. Research is needed to examine the robustness of these methods under varying prior assumptions concerning the distribution of item and person parameters and when data fail to conform to prior assumptions. False identification of DIF when items were very easy to endorse is a problem warranting additional investigation. PMID:22900979
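A common formulation of the robust Z statistic standardizes each item's CAT-P&P difficulty difference by the median and 0.74 times the interquartile range (a robust stand-in for the SD). The sketch below assumes that formulation; the paper's modified variant may differ in detail:

```python
import statistics

def robust_z(diffs):
    """Robust Z statistics for item-parameter drift: standardize each
    CAT-P&P difficulty difference by the median and 0.74 * IQR, so that
    a few drifting items cannot inflate the scale estimate."""
    med = statistics.median(diffs)
    q1, _, q3 = statistics.quantiles(diffs, n=4)
    return [(d - med) / (0.74 * (q3 - q1)) for d in diffs]
```

Items with |Z| above a cutoff such as 1.96 are flagged as exhibiting a mode effect.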
NASA Astrophysics Data System (ADS)
Boblest, S.; Meyer, D.; Wunner, G.
2014-11-01
We present a quantum Monte Carlo application for the computation of energy eigenvalues for atoms and ions in strong magnetic fields. The required guiding wave functions are obtained with the Hartree-Fock-Roothaan code described in the accompanying publication (Schimeczek and Wunner, 2014). Our method yields highly accurate results for the binding energies of symmetry subspace ground states and at the same time provides a means for quantifying the quality of the results obtained with the above-mentioned Hartree-Fock-Roothaan method. Catalogue identifier: AETV_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AETV_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 72 284 No. of bytes in distributed program, including test data, etc.: 604 948 Distribution format: tar.gz Programming language: C++. Computer: Cluster of 1-˜500 HP Compaq dc5750. Operating system: Linux. Has the code been vectorized or parallelized?: Yes. Code includes MPI directives. RAM: 500 MB per node Classification: 2.1. External routines: Boost::Serialization, Boost::MPI, LAPACK BLAS Nature of problem: Quantitative modelings of features observed in the X-ray spectra of isolated neutron stars are hampered by the lack of sufficiently large and accurate databases for atoms and ions up to the last fusion product iron, at high magnetic field strengths. The predominant amount of line data in the literature has been calculated with Hartree-Fock methods, which are intrinsically restricted in precision. Our code is intended to provide a powerful tool for calculating very accurate energy values from, and thereby improving the quality of, existing Hartree-Fock results. 
Solution method: The fixed-phase quantum Monte Carlo method is used in combination with guiding functions obtained in Hartree-Fock calculations. The guiding functions are created from single-electron orbitals $\psi_i$ which are either products of a wave function in the z-direction (the direction of the magnetic field) and an expansion of the wave function perpendicular to the direction of the magnetic field in terms of Landau states, $\psi_i(\rho, \varphi, z) = P_i(z) \sum_{n=0}^{N_L} t_{in} \Phi_n^i(\rho, \varphi)$, or a full two-dimensional expansion using separate z-wave functions for each Landau level, i.e. $\psi_i(\rho, \varphi, z) = \sum_{n=0}^{N_L} P_n^i(z) \Phi_n^i(\rho, \varphi)$. In the first form, the $t_{in}$ are expansion coefficients, and the expansion is cut off at some maximum Landau-level quantum number $N_L$. In the second form, the expansion coefficients are contained in the respective $P_n^i(z)$. Restrictions: The method itself is very flexible and not restricted to a certain interval of magnetic field strengths. However, it is only variational for the lowest-lying state in each symmetry subspace, and the accompanying Hartree-Fock method can only obtain guiding functions in the regime of strong magnetic fields. Unusual features: The program needs approximate wave functions computed with another method as input. Running time: 1 min to several days. The example provided takes approximately 50 min to run on 1 processor.
NASA Astrophysics Data System (ADS)
Morales-Casique, E.; Briseño-Ruiz, J. V.; Hernández, A. F.; Herrera, G. S.; Escolero-Fuentes, O.
2014-12-01
We present a comparison of three stochastic approaches for estimating log hydraulic conductivity (Y) and predicting steady-state groundwater flow. Two of the approaches are based on the data assimilation technique known as the ensemble Kalman filter (EnKF) and differ in the way prior statistical moment estimates (PSME) (required to build the Kalman gain matrix) are obtained. In the first approach, the Monte Carlo method is employed to compute PSME of the variables and parameters; we denote this approach by EnKFMC. In the second approach, PSME are computed through the direct solution of approximate nonlocal (integrodifferential) equations that govern the spatial conditional ensemble means (statistical expectations) and covariances of hydraulic head (h) and fluxes; we denote this approach by EnKFME. The third approach consists of geostatistical stochastic inversion of the same nonlocal moment equations; we denote this approach by IME. In addition to testing the EnKFMC and EnKFME methods in the traditional manner that estimates Y over the entire grid, we propose novel corresponding algorithms that estimate Y at a few selected locations and then interpolate over all grid elements via kriging, as done in the IME method. We tested these methods by estimating Y and h in steady-state groundwater flow in a synthetic two-dimensional domain with a well pumping at a constant rate, located at the center of the domain. In addition, to evaluate the performance of the estimation methods, we generated four different unconditional realizations that served as "true" fields. The results of our numerical experiments indicate that the three methods were effective in estimating h, reaching at least 80% predictive coverage, although both EnKF approaches were superior to the IME method. With respect to estimating Y, the three methods reached similar accuracy in terms of the mean absolute error.
Coupling the EnKF methods with kriging to estimate Y reduces the CPU time required for data assimilation to one fourth, while neither estimation accuracy nor uncertainty deteriorates significantly.
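The Monte Carlo flavor of the analysis step (EnKFMC) builds the Kalman gain from sample covariances of the ensemble. Below is a toy sketch of a single EnKF analysis step with perturbed observations for a scalar observation; it is an illustration of the generic technique, not the authors' implementation:

```python
import random

def enkf_update(ensemble, H, y_obs, r_var, seed=0):
    """One ensemble Kalman filter analysis step: the Kalman gain is built
    from ensemble (Monte Carlo) covariances; each member is nudged toward
    a perturbed copy of the scalar observation y_obs (variance r_var)."""
    random.seed(seed)
    n = len(ensemble)                  # ensemble of state vectors (lists)
    m = len(ensemble[0])
    mean = [sum(x[i] for x in ensemble) / n for i in range(m)]
    hx = [H(x) for x in ensemble]      # predicted observations
    hmean = sum(hx) / n
    # sample state-observation cross-covariance and observation variance
    pxh = [sum((x[i] - mean[i]) * (h - hmean) for x, h in zip(ensemble, hx))
           / (n - 1) for i in range(m)]
    phh = sum((h - hmean) ** 2 for h in hx) / (n - 1)
    gain = [p / (phh + r_var) for p in pxh]          # Kalman gain
    for x, h in zip(ensemble, hx):
        d = y_obs + random.gauss(0.0, r_var ** 0.5) - h
        for i in range(m):
            x[i] += gain[i] * d
    return ensemble
```

The EnKFME variant described above replaces the sample moments (`mean`, `pxh`, `phh`) with moments obtained from the nonlocal moment equations, leaving the gain-and-update structure unchanged.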
Fissioning Plasma Core Reactor
NASA Technical Reports Server (NTRS)
Albright, Dennis; Butler, Carey; West, Nicole; Cole, John W. (Technical Monitor)
2002-01-01
The Institute for Scientific Research, Inc. (ISR) research program consists of: 1. Study core physics by adapting existing codes: MCNP4C (Monte Carlo code); COMBINE/VENTURE (diffusion theory); SCALE4 (Monte Carlo, with many utility codes). 2. Determine feasibility and study major design parameters: fuel selection, temperature and reflector sizing. 3. Study reactor kinetics: develop QCALC1 to model point kinetics; study dynamic behavior of the power release.
Koh, Wonshill
2013-02-22
The light propagation in highly scattering turbid media composed of the particles with different size distribution is studied using a Monte Carlo simulation model implemented in Standard C. Monte Carlo method has been widely utilized to study...
Pokhrel, D; Badkul, R; Jiang, H; Estes, C; Kumar, P; Wang, F
2014-06-01
Purpose: Lung SBRT uses hypo-fractionated doses in small non-IMRT fields with tissue-heterogeneity-corrected plans. An independent MU verification is mandatory for safe and effective delivery of the treatment plan. This report compares planned MU obtained from the iPlan XVMC algorithm against spreadsheet-based hand calculation using the most commonly used simple TMR-based method. Methods: Treatment plans of 15 patients who underwent MC-based lung SBRT to 50 Gy in 5 fractions (PTV V100% = 95%) were studied. The ITV was delineated on MIP images based on 4D-CT scans. PTVs (ITV + 5 mm margins) ranged from 10.1 to 106.5 cc (average = 48.6 cc). MC SBRT plans were generated using a combination of non-coplanar conformal arcs/beams with the iPlan XVMC algorithm (BrainLAB iPlan ver. 4.1.2) for a Novalis-TX consisting of micro-MLCs and a 6 MV SRS (1000 MU/min) beam. These plans were re-computed using the heterogeneity-corrected pencil-beam (PB-hete) algorithm without changing any beam parameters, such as MLCs/MUs. The dose ratio PB-hete/MC gave beam-by-beam inhomogeneity correction factors (ICFs): individual correction. For the independent second check, MC MUs were verified using TMR-based hand calculation, yielding an average ICF (average correction); TMR-based hand calculation systematically underestimated MC MUs by ~5%. Also, the first 10 MC plans were verified with an ion-chamber measurement using a homogeneous phantom. Results: For both beams/arcs, the mean PB-hete dose was systematically overestimated by 5.5±2.6% and the mean hand-calculated MU systematically underestimated by 5.5±2.5% compared to XVMC. With individual correction, mean hand-calculated MUs matched XVMC to -0.3±1.4%/0.4±1.4% for beams/arcs, respectively. After the average 5% correction, hand-calculated MUs matched XVMC to 0.5±2.5%/0.6±2.0% for beams/arcs, respectively. A smaller dependence on tumor volume (TV)/field size (FS) was also observed. The ion-chamber measurement was within ±3.0%. Conclusion: PB-hete overestimates dose to lung tumor relative to XVMC.
The XVMC algorithm is more complex and more accurate with tissue heterogeneities. Measurement at the machine is time consuming and needs extra resources; also, direct measurement of dose for heterogeneous treatment plans is not yet clinically practiced. This simple correction-based method was very helpful for the independent second check of MC lung SBRT plans and is routinely used in our clinic. A look-up table can be generated to include TV/FS dependence in the ICFs.
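The correction arithmetic described above reduces to scaling the TMR-based hand-calculated MU by an ICF. A minimal sketch (the numeric values in the usage example are illustrative, not patient data):

```python
def icf_from_doses(d_pb_hete, d_mc):
    """Beam-by-beam inhomogeneity correction factor from the dose ratio
    PB-hete / MC (the 'individual correction' of the abstract)."""
    return d_pb_hete / d_mc

def corrected_mu(mu_hand, icf):
    """Second-check MU: scale the TMR-based hand-calculated MU by the ICF.
    With the reported ~5% average underestimate, icf ~ 1.05 on average
    (the 'average correction')."""
    return mu_hand * icf
```

For example, a hand calculation of 200 MU for a beam whose PB-hete/MC dose ratio is 1.055 yields 211 MU for the independent second check.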
NASA Technical Reports Server (NTRS)
Ackerman, Thomas P.; Lin, Ruei-Fong
1993-01-01
The radiation field over a broken stratocumulus cloud deck is simulated by the Monte Carlo method. We conducted four experiments to investigate the main factor behind the observed shortwave reflectivity over FIRE flight 2 leg 5, in which reflectivity decreases almost linearly from the cloud center to the cloud edge while the cloud-top height and the brightness temperature remain almost constant throughout the clouds. From our results, however, the geometry effect did not contribute significantly to what was observed. We found that the variation of the volume extinction coefficient as a function of its relative position in the cloud affects the reflectivity effectively. An additional check of the brightness temperature in each experiment also confirms this conclusion. The cloud microphysical data showed some interesting features. We found that the cloud droplet spectrum is nearly log-normally distributed when the clouds were solid. However, whether the cloud droplet spectrum shifts toward the larger end is not certain. The decrease of number density from the cloud center to the cloud edges seems to have more significant effects on the optical properties.
Sahu, Nityananda; Gadre, Shridhar R.; Bandyopadhyay, Pradipta; Miliordos, Evangelos; Xantheas, Sotiris S.
2014-10-28
We report new global minimum candidate structures for the (H2O)25 cluster that are lower in energy than the ones reported previously and correspond to hydrogen bonded networks with 42 hydrogen bonds and an interior, fully coordinated water molecule. These were obtained as a result of a hierarchical approach based on initial Monte Carlo Temperature Basin Paving (MCTBP) sampling of the cluster’s Potential Energy Surface (PES) with the Effective Fragment Potential (EFP), subsequent geometry optimization using the Molecular Tailoring fragmentation Approach (MTA) and final refinement at the second order Møller Plesset perturbation (MP2) level of theory. The MTA geometry optimizations used between 14 and 18 main fragments with maximum sizes between 11 and 14 water molecules and average size of 10 water molecules, whose energies and gradients were computed at the MP2 level. The MTA-MP2 optimized geometries were found to be quite close (within < 0.5 kcal/mol) to the ones obtained from the MP2 optimization of the whole cluster. The grafting of the MTA-MP2 energies yields electronic energies that are within < 5×10-4 a.u. from the MP2 results for the whole cluster while preserving their energy order. The MTA-MP2 method was also found to reproduce the MP2 harmonic vibrational frequencies in both the HOH bending and the OH stretching regions.
Quantum Monte Carlo Calculations for Carbon Nanotubes
Thomas Luu; Timo A. Lähde
2015-11-16
We show how lattice Quantum Monte Carlo can be applied to the electronic properties of carbon nanotubes in the presence of strong electron-electron correlations. We employ the path-integral formalism and use methods developed within the lattice QCD community for our numerical work. Our lattice Hamiltonian is closely related to the hexagonal Hubbard model augmented by a long-range electron-electron interaction. We apply our method to the single-quasiparticle spectrum of the (3,3) armchair nanotube configuration, and consider the effects of strong electron-electron correlations. Our approach is equally applicable to other nanotubes, as well as to other carbon nanostructures. We benchmark our Monte Carlo calculations against the two- and four-site Hubbard models, where a direct numerical solution is feasible.
Monte Carlo simulation of Alaska wolf survival
NASA Astrophysics Data System (ADS)
Feingold, S. J.
1996-02-01
Alaskan wolves live in a harsh climate and are hunted intensively. Penna's biological aging code, using Monte Carlo methods, has been adapted to simulate wolf survival. It was run on the case in which hunting causes the disruption of wolves' social structure. Social disruption was shown to increase the number of deaths occurring at a given level of hunting. For high levels of social disruption, the population did not survive.
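The Penna bit-string model at the heart of such simulations is compact enough to sketch. The version below adds a simple hunting-mortality probability as a stand-in for the hunting pressure discussed above; it is an assumed illustration, not Feingold's adaptation (which models social disruption rather than a flat mortality rate):

```python
import random

def penna_step(pop, genome_bits=32, T=3, birth_rate=1, repro_age=8,
               hunt_prob=0.0):
    """One time step of a Penna bit-string aging model. Each individual is
    (genome, age); bit k of the genome is a deleterious mutation expressed
    from age k onward, and T or more expressed mutations kill. hunt_prob
    adds an age-independent hunting mortality (illustrative extension)."""
    survivors = []
    for genome, age in pop:
        age += 1
        if age >= genome_bits:
            continue                                  # maximum lifespan
        bad = bin(genome & ((1 << age) - 1)).count("1")  # expressed mutations
        if bad >= T or random.random() < hunt_prob:
            continue                                  # genetic death or hunted
        survivors.append((genome, age))
        if age >= repro_age:
            for _ in range(birth_rate):
                child = genome | (1 << random.randrange(genome_bits))
                survivors.append((child, 0))          # offspring, new mutation
    return survivors
```

Raising `hunt_prob` thins the breeding population faster than births can replace it, reproducing the qualitative finding that hunting pressure (and, in the paper, the accompanying social disruption) pushes the population toward collapse.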
Canonical Demon Monte Carlo Renormalization Group
M. Hasenbusch; K. Pinn; C. Wieczerkowski
1994-11-23
We describe a method to compute renormalized coupling constants in a Monte Carlo renormalization group calculation. It can be used, e.g., for lattice spin or gauge models. The basic idea is to simulate a joint system of block spins and canonical demons. Unlike the Microcanonical Renormalization Group of Creutz et al., it avoids systematic errors in small volumes. We present numerical results for the O(3) nonlinear sigma-model.
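Not the authors' canonical-demon construction itself, but the microcanonical demon idea of Creutz et al. that it builds on can be sketched in a few lines: a demon carrying non-negative energy pays for spin flips, and its average energy acts as a thermometer. The chain length, sweep count, and initial demon energy below are arbitrary choices for illustration.

```python
import random

def demon_ising_1d(n=500, sweeps=400, demon_start=800, seed=0):
    """Creutz-style microcanonical demon for a 1D Ising chain (J = 1,
    periodic boundaries).  A spin flip is accepted whenever the demon can
    pay the energy cost; total energy of chain + demon is conserved."""
    rng = random.Random(seed)
    spins = [1] * n                   # start in the ground state
    demon = demon_start               # demon energy, in units of J
    demon_sum = samples = 0
    for _ in range(sweeps):
        for _ in range(n):
            i = rng.randrange(n)
            # cost of flipping spin i: dE = 2 s_i (s_{i-1} + s_{i+1})
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
            if dE <= demon:
                spins[i] = -spins[i]
                demon -= dE
            demon_sum += demon
            samples += 1
    return demon_sum / samples        # average demon energy

print(demon_ising_1d())
```

In equilibrium the demon's energy is exponentially distributed, so its mean determines the temperature of the joint system; the canonical-demon method of the abstract exploits a similar joint simulation, but for block spins.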
Monte Carlo Approach to M-Theory
Werner Krauth; Hermann Nicolai; Matthias Staudacher
1998-04-01
We discuss supersymmetric Yang-Mills theory dimensionally reduced to zero dimensions and evaluate the SU(2) and SU(3) partition functions by Monte Carlo methods. The exactly known SU(2) results are reproduced to very high precision. Our calculations for SU(3) agree closely with an extension of a conjecture due to Green and Gutperle concerning the exact value of the SU(N) partition functions.
Higher accuracy quantum Monte Carlo calculations of the barrier for the H+H2 reaction
Anderson, James B.
As in previous studies, the Green's function quantum Monte Carlo method with exact cancellation was used to compute the lowest-energy expectation value for the energy at the saddle point of the H+H2 reaction barrier, improving on the accuracy of earlier configuration interaction and Monte Carlo calculations by a factor of 10.
An Introduction to Multilevel Monte Carlo for Option Valuation
Higham, Desmond J
2015-01-01
Monte Carlo is a simple and flexible tool that is widely used in computational finance. In this context, it is common for the quantity of interest to be the expected value of a random variable defined via a stochastic differential equation. In 2008, Giles proposed a remarkable improvement to the approach of discretizing with a numerical method and applying standard Monte Carlo. His multilevel Monte Carlo method offers an order of speed up given by the inverse of epsilon, where epsilon is the required accuracy. So computations can run 100 times more quickly when two digits of accuracy are required. The multilevel philosophy has since been adopted by a range of researchers and a wealth of practically significant results has arisen, most of which have yet to make their way into the expository literature. In this work, we give a brief, accessible, introduction to multilevel Monte Carlo and summarize recent results applicable to the task of option evaluation.
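The multilevel idea the abstract introduces can be sketched directly: estimate the coarse-grid expectation with many cheap paths and add correction terms between successive grid refinements, driving each fine/coarse pair with the same Brownian increments. The geometric-Brownian-motion parameters and sample counts below are illustrative assumptions, not values from the paper.

```python
import math, random

# Illustrative parameters: GBM with spot 100, strike 100, rate 5%,
# volatility 20%, maturity 1 year.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

def payoff(S):
    """Discounted European call payoff."""
    return math.exp(-r * T) * max(S - K, 0.0)

def level_estimator(level, n_paths, rng):
    """Mean of P_l - P_{l-1}: the same Brownian increments drive an
    Euler-Maruyama path on the fine grid (2^level steps) and, summed in
    pairs, on the coarse grid with half as many steps."""
    nf = 2 ** level
    hf = T / nf
    total = 0.0
    for _ in range(n_paths):
        Sf = Sc = S0
        dWc = 0.0
        for step in range(nf):
            dW = rng.gauss(0.0, math.sqrt(hf))
            Sf += r * Sf * hf + sigma * Sf * dW
            dWc += dW
            if step % 2 == 1:             # coarse step every two fine steps
                Sc += r * Sc * (2 * hf) + sigma * Sc * dWc
                dWc = 0.0
        total += payoff(Sf) - (payoff(Sc) if level > 0 else 0.0)
    return total / n_paths

rng = random.Random(42)
# MLMC estimate: level-0 term plus correction terms, with fewer paths per
# level as the grids get finer (where the corrections have small variance).
estimate = sum(level_estimator(l, max(200, 4000 >> l), rng) for l in range(5))
print(estimate)
```

For reference, the exact Black-Scholes value with these parameters is about 10.45; the multilevel estimate should land near it, with the discretization bias controlled by the finest level and most of the sampling cost spent on the cheap coarse level.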
Marcus, Ryan C.
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
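MCMini itself is an OpenCL/C code, but the particle-tracing loop it parallelizes can be illustrated with a toy 1D slab problem in Python. The cross sections, absorption probability, and slab thickness below are made-up values for illustration, not MCNP physics.

```python
import math, random

# Assumed toy data: total cross section 1/cm, 30% absorption per collision,
# isotropic scattering, 5 cm slab.
SIGMA_T = 1.0
P_ABSORB = 0.3
THICKNESS = 5.0

def transmit_fraction(n_particles=50_000, seed=7):
    """Fraction of source particles leaking out the far face of the slab."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):
        x, mu = 0.0, 1.0                  # born on the left face, moving right
        while True:
            # sample a free-flight distance from the exponential distribution
            x += mu * (-math.log(1.0 - rng.random()) / SIGMA_T)
            if x >= THICKNESS:
                transmitted += 1
                break
            if x < 0.0:                   # leaked back out the near face
                break
            if rng.random() < P_ABSORB:   # absorbed at the collision site
                break
            mu = rng.uniform(-1.0, 1.0)   # isotropic scatter (1D slab)
    return transmitted / n_particles

print(transmit_fraction())
```

Because each particle history is independent, the outer loop maps naturally onto GPU work-items, which is the scalability property the abstract highlights.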
Canonical Demon Monte Carlo Renormalization Group
M. Hasenbusch; K. Pinn; C. Wieczerkowski
1994-06-27
We describe a new method to compute renormalized coupling constants in a Monte Carlo renormalization group calculation. The method can be used for a general class of models, e.g., lattice spin or gauge models. The basic idea is to simulate a joint system of block spins and canonical demons. In contrast to the Microcanonical Renormalization Group invented by Creutz et al., our method does not suffer from systematic errors stemming from the simultaneous use of two different ensembles. We present numerical results for the $O(3)$ nonlinear $\sigma$-model.
Multiscale Monte Carlo equilibration: pure Yang-Mills theory
Michael G. Endres; Richard C. Brower; William Detmold; Kostas Orginos; Andrew V. Pochinsky
2015-10-15
We present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; Detmold, William; Pochinsky, Andrew V.
2015-12-29
In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
Spin-Orbit Interactions in Electronic Structure Quantum Monte Carlo
Melton, Cody; Guo, Shi; Ambrosetti, Alberto; Pederiva, Francesco; Mitas, Lubos
2015-01-01
We develop a generalization of the fixed-phase diffusion Monte Carlo method for Hamiltonians which depend explicitly on particle spins, such as spin-orbit interactions. The method is formulated in a zero-variance manner and is similar to the treatment of nonlocal operators in commonly used static-spin calculations. Tests on atomic and molecular systems show that it is very accurate, on par with the fixed-node method. This opens electronic structure quantum Monte Carlo methods to a vast research area of quantum phenomena in which spin-related interactions play an important role.
Calculating Pi Using the Monte Carlo Method
ERIC Educational Resources Information Center
Williamson, Timothy
2013-01-01
During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the Center for Sustainable Energy at Notre Dame (RET @ cSEND), working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During…
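The pi-estimation exercise the title refers to is the classic dart-throwing Monte Carlo, which fits in a few lines:

```python
import random

def estimate_pi(n_samples=1_000_000, seed=2012):
    """Throw points uniformly at the unit square and count the fraction
    falling inside the quarter circle x^2 + y^2 <= 1; that fraction
    estimates pi/4."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n_samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n_samples

print(estimate_pi())
```

The statistical error shrinks like 1/sqrt(n), so each extra digit of accuracy costs roughly 100 times more samples, a useful classroom contrast with deterministic quadrature.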
Advanced Monte Carlo Methods: Direct Simulation
Mascagni, Michael
Simulation of the lifetime of comets: a long-period comet is described as a sequence of elliptic orbits with the Sun at one focus of each orbit ellipse, and the energy of the comet is inversely proportional to the length of the semi-major axis of the ellipse. Most of the time the comet moves at a great distance from the Sun.
Stochastic Methods in Neuroscience (ed. Carlo Laing)
Destexhe, Alain
In the cerebral cortex of awake animals, neurons are subject to tremendous fluctuating activity, mostly of synaptic origin, termed "synaptic noise". Synaptic noise is the dominant source of membrane fluctuations and shapes the integrative properties of neocortical neurons in the intact brain.
Krylov-projected quantum Monte Carlo Method
Blunt, N. S.; Alavi, Ali; Booth, George H.
2015-07-31
Results are shown for the 10-site Hubbard model with U/t = 1, compared to near-exact dynamical Lanczos calculations; an inset shows the integrated weight ∫₀^ω̄ A²(ω′) dω′. Simulation parameters were τ = 0.01 with a deterministic space of double excitations. For larger-scale ab initio systems, total energies of the Ne atom (in Eh) compare KP-FCIQMC with DMRG (using M = 500 spin-adapted renormalised states for the larger basis) [41, 42], with τ = 0.001, na = 3 and a deterministic space of single and double excitations.
Evolutionary Monte Carlo Methods for Clustering
Wong, Wing Hung
The target distribution is either over clustering configurations (e.g., K-means clustering, variable selection) or a density (e.g., the posterior from a Dirichlet process mixture model prior), and comparisons are made against Gibbs sampling, K-means clustering, and the MCLUST algorithm. Key words: Dirichlet process mixture model; Gibbs sampling.
NASA Astrophysics Data System (ADS)
Kryeziu, D.; Tschurlovits, M.; Kreuziger, M.; Maringer, F.-J.
2007-09-01
Many metrology laboratories deal with activity measurements of different radionuclides, of special interest in nuclear medicine as well as in the radiopharmaceutical industry. In improving the accuracy of radionuclide activity measurements, a key role is played by the calculation of calibration figures and volume correction factors for the radionuclide under study. It is well known that chamber calibration factors depend on the measurement geometry, including the volume of the source and the type of the measurement vessel. In this work, the activity standards in the form of radioactive solutions are delivered in sealed Jena glass 5 ml FIOLAX®-klar ampoules. Calculations of the calibration figures (or efficiencies) for the 90Y, 125I, 131I and 177Lu radionuclides in the 5 ml ampoule are presented in this paper, and their appropriate volume correction factors are determined. These calibration figures for the ISOCAL IV pressurized well re-entrant ionization chamber (IC) are obtained from Monte Carlo (MC) simulation of the chamber using the PENELOPE-2005 MC computer simulation code. The chamber is filled with nitrogen gas pressurized to approximately 1 MPa. In determining the volume correction factors, the variation of the calibration factors with the mass of radioactive solution filling the 5 ml glass ampoule is investigated. Since the 177mLu isomer always accompanies the 177Lu radionuclide, the calibration factor and volume correction factors for 177mLu are reported as well, making a correction for the presence of this impurity possible.
Quantum Monte Carlo by message passing
Bonca, J.; Gubernatis, J.E.
1993-05-01
We summarize results of quantum Monte Carlo simulations of the degenerate single-impurity Anderson model using the impurity algorithm of Hirsch and Fye. Using methods of Bayesian statistical inference, coupled with the principle of maximum entropy, we extracted the single-particle spectral density from the imaginary-time Green's function. The variations of resulting spectral densities with model parameters agree qualitatively with the spectral densities predicted by NCA calculations. All the simulations were performed on a cluster of 16 IBM R6000/560 workstations under the control of the message-passing software PVM. We described the trivial parallelization of our quantum Monte Carlo code both for the cluster and the CM-5 computer. Other issues for effective parallelization of the impurity algorithm are also discussed.
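The "trivial parallelization" described is the embarrassingly parallel pattern: independent workers run the same estimator with different seeds and the master averages the results. A minimal sketch, with the PVM workstation cluster replaced by a local process pool and a made-up toy estimator:

```python
import random
from multiprocessing import Pool

def worker(seed, n=200_000):
    """One independent Monte Carlo job: estimate E[x^2] for x uniform on
    [0, 1] (exact value 1/3) from its own seeded random stream."""
    rng = random.Random(seed)
    return sum(rng.random() ** 2 for _ in range(n)) / n

if __name__ == "__main__":
    # Master/worker layout: each worker runs the same estimator on a
    # different seed, and the master averages the independent results.
    with Pool(4) as pool:
        parts = pool.map(worker, range(8))
    print(sum(parts) / len(parts))
```

Giving each worker a distinct seed keeps the streams statistically independent, which is the one correctness requirement this pattern imposes; the averaging step is where a real code would also combine per-worker error estimates.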